digger yu
e2d81eba0d
[NFC] fix typo colossalai/ applications/ ( #3831 )
...
* fix typo colossalai/autochunk auto_parallel amp
* fix typo colossalai/auto_parallel nn utils etc.
* fix typo colossalai/auto_parallel autochunk fx/passes etc.
* fix typo docs/
* change placememt_policy to placement_policy in docs/ and examples/
* fix typo colossalai/ applications/
2 years ago
digger yu
7f8203af69
fix typo colossalai/auto_parallel autochunk fx/passes etc. ( #3808 )
2 years ago
digger yu
9265f2d4d7
[NFC]fix typo colossalai/auto_parallel nn utils etc. ( #3779 )
...
* fix typo colossalai/autochunk auto_parallel amp
* fix typo colossalai/auto_parallel nn utils etc.
2 years ago
digger yu
32f81f14d4
[NFC] fix typo colossalai/amp auto_parallel autochunk ( #3756 )
2 years ago
digger yu
1baeb39c72
[NFC] fix typo with colossalai/auto_parallel/tensor_shard ( #3742 )
...
* fix typo applications/ and colossalai/ date 5.11
* fix typo colossalai/
2 years ago
digger-yu
ad6460cf2c
[NFC] fix typo applications/ and colossalai/ ( #3735 )
2 years ago
digger-yu
b9a8dff7e5
[doc] Fix typo under colossalai and doc ( #3618 )
...
* Fixed several spelling errors under colossalai
* Fix the spelling error in colossalai and docs directory
* Cautiously changed the spelling error under the example folder
* Update runtime_preparation_pass.py
revert autograft to autograd
* Update search_chunk.py
utile to until
* Update check_installation.py
change misteach to mismatch in line 91
* Update 1D_tensor_parallel.md
revert to perceptron
* Update 2D_tensor_parallel.md
revert to perceptron in line 73
* Update 2p5D_tensor_parallel.md
revert to perceptron in line 71
* Update 3D_tensor_parallel.md
revert to perceptron in line 80
* Update README.md
revert to resnet in line 42
* Update reorder_graph.py
revert to indice in line 7
* Update p2p.py
revert to megatron in line 94
* Update initialize.py
revert to torchrun in line 198
* Update routers.py
change to detailed in line 63
* Update routers.py
change to detailed in line 146
* Update README.md
revert random number in line 402
2 years ago
YuliangLiu0306
ffcdbf0f65
[autoparallel] integrate auto parallel feature with new tracer ( #3408 )
...
* [autoparallel] integrate new analyzer in module level
* unify the profiling method
* polish
* fix no codegen bug
* fix pass bug
* fix liveness test
* polish
2 years ago
ver217
26b7aac0be
[zero] reorganize zero/gemini folder structure ( #3424 )
...
* [zero] refactor low-level zero folder structure
* [zero] fix legacy zero import path
* [zero] fix legacy zero import path
* [zero] remove useless import
* [zero] refactor gemini folder structure
* [zero] refactor gemini folder structure
* [zero] refactor legacy zero import path
* [zero] refactor gemini folder structure
* [zero] refactor gemini folder structure
* [zero] refactor gemini folder structure
* [zero] refactor legacy zero import path
* [zero] fix test import path
* [zero] fix test
* [zero] fix circular import
* [zero] update import
2 years ago
Frank Lee
638a07a7f9
[test] fixed gemini plugin test ( #3411 )
...
* [test] fixed gemini plugin test
* polish code
* polish code
2 years ago
YuliangLiu0306
fee2af8610
[autoparallel] adapt autoparallel with new analyzer ( #3261 )
...
* [autoparallel] adapt autoparallel with new analyzer
* fix all node handler tests
* polish
* polish
2 years ago
Zihao
18dbe76cae
[auto-parallel] add auto-offload feature ( #3154 )
...
* add auto-offload feature
* polish code
* fix syn offload runtime pass bug
* add offload example
* fix offload testing bug
* fix example testing bug
2 years ago
YuliangLiu0306
47fb214b3b
[hotfix] add shard dim to avoid backward communication error ( #2954 )
2 years ago
YuliangLiu0306
197d0bf4ed
[autoparallel] apply repeat block to reduce solving time ( #2912 )
2 years ago
YuliangLiu0306
819e25d8b1
[hotfix] fix autoparallel compatibility test issues ( #2754 )
2 years ago
YuliangLiu0306
0f392d7403
[autoparallel] find repeat blocks ( #2854 )
...
* [autoparallel] find repeat blocks
* polish
* polish
* polish
2 years ago
Boyuan Yao
eae77c831d
[autoparallel] Patch meta information for nodes that will not be handled by SPMD solver ( #2823 )
...
* [autoparallel] non spmd meta information generator
* [autoparallel] patch meta information for non spmd nodes
2 years ago
Boyuan Yao
c7764d3f22
[autoparallel] Patch meta information of `torch.where` ( #2822 )
...
* [autoparallel] patch meta information of torch.where
* [autoparallel] pre-commit modified
2 years ago
Boyuan Yao
fcc4097efa
[autoparallel] Patch meta information of `torch.tanh()` and `torch.nn.Dropout` ( #2773 )
...
* [autoparallel] tanh meta information
* [autoparallel] remove redundant code
* [autoparallel] patch meta information of torch.nn.Dropout
2 years ago
Boyuan Yao
7ea6bc7f69
[autoparallel] Patch tensor related operations meta information ( #2789 )
...
* [autoparallel] tensor related meta information prototype
* [autoparallel] tensor related meta information
* [autoparallel] tensor related meta information
* [autoparallel] tensor related meta information
* [autoparallel] tensor related meta information
2 years ago
YuliangLiu0306
2059fdd6b0
[hotfix] add copyright for solver and device mesh ( #2803 )
...
* [hotfix] add copyright for solver and device mesh
* add readme
* add alpa license
* polish
2 years ago
Boyuan Yao
8593ae1a3f
[autoparallel] rotor solver refactor ( #2813 )
...
* [autoparallel] rotor solver refactor
* [autoparallel] rotor solver refactor
2 years ago
Boyuan Yao
a2b43e393d
[autoparallel] Patch meta information of `torch.nn.Embedding` ( #2760 )
...
* [autoparallel] embedding metainfo
* [autoparallel] fix function name in test_activation_metainfo
* [autoparallel] undo changes in activation metainfo and related tests
2 years ago
YuliangLiu0306
1dc003c169
[autoparallel] distinguish different parallel strategies ( #2699 )
2 years ago
YuliangLiu0306
21d6a48f4d
[autoparallel] add shard option ( #2696 )
...
* [autoparallel] add shard option
* polish
2 years ago
YuliangLiu0306
5b24987fa7
[autoparallel] fix parameters sharding bug ( #2716 )
2 years ago
YuliangLiu0306
cb2c6a2415
[autoparallel] refactor runtime pass ( #2644 )
...
* [autoparallel] refactor runtime pass
* add unit test
* polish
2 years ago
YuliangLiu0306
0b2a738393
[autoparallel] remove deprecated codes ( #2664 )
2 years ago
YuliangLiu0306
7fa6be49d2
[autoparallel] test compatibility for gemini and auto parallel ( #2700 )
2 years ago
Boyuan Yao
40c916b192
[autoparallel] Patch meta information of `torch.nn.functional.softmax` and `torch.nn.Softmax` ( #2674 )
...
* [autoparallel] softmax metainfo
* [autoparallel] softmax metainfo
2 years ago
Boyuan Yao
0385b26ebf
[autoparallel] Patch meta information of `torch.nn.LayerNorm` ( #2647 )
...
* [autoparallel] layernorm metainfo patch
* [autoparallel] polish test
2 years ago
YuliangLiu0306
37df666f38
[autoparallel] refactor handlers which reshape input tensors ( #2615 )
...
* [autoparallel] refactor handlers which reshape input tensors
* polish
2 years ago
YuliangLiu0306
28398f1c70
add overlap option ( #2613 )
2 years ago
YuliangLiu0306
cb3d1bef62
[autoparallel] adapt autoparallel tests with latest api ( #2626 )
2 years ago
Boyuan Yao
90a9fdd91d
[autoparallel] Patch meta information of `torch.matmul` ( #2584 )
...
* [autoparallel] matmul metainfo
* [auto_parallel] remove unused print
* [tests] skip test_matmul_handler when torch version is lower than 1.12.0
2 years ago
YuliangLiu0306
aa0f6686f9
[autoparallel] accelerate gpt2 training ( #2495 )
2 years ago
YuliangLiu0306
67e1912b59
[autoparallel] support original activation ckpt on autoparallel system ( #2468 )
2 years ago
YuliangLiu0306
8221fd7485
[autoparallel] update binary elementwise handler ( #2451 )
...
* [autoparallel] update binary elementwise handler
* polish
2 years ago
YuliangLiu0306
2731531bc2
[autoparallel] integrate device mesh initialization into autoparallelize ( #2393 )
...
* [autoparallel] integrate device mesh initialization into autoparallelize
* add megatron solution
* update gpt autoparallel examples with latest api
* adapt beta value to fit the current computation cost
2 years ago
YuliangLiu0306
41429b9b28
[autoparallel] add shard option ( #2423 )
2 years ago
binmakeswell
a881d6d000
Revert "[NFC] polish code format" ( #2372 )
2 years ago
Jiarui Fang
0dcc410f57
[NFC] polish code format
2 years ago
binmakeswell
d634eae05b
Revert "[NFC] polish code format ( #2367 )" ( #2371 )
...
This reverts commit 1f8ab6f1f5.
2 years ago
Shawn-Kong
d42aecdda1
[NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/embedding_handler.py code style ( #2368 )
2 years ago
binmakeswell
1f8ab6f1f5
[NFC] polish code format ( #2367 )
2 years ago
ExtremeViscent
ac0d30fe2e
[NFC] polish batch_norm_handler.py code style ( #2359 )
2 years ago
ziyuhuang123
7080a8edb0
[workflow] New version: Create workflow files for examples' auto check ( #2298 )
...
* [workflows]bug_repair
* [workflow]new_pr_fixing_bugs
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
2 years ago
LuGY
e11a005c02
[NFC] polish colossalai/auto_parallel/tensor_shard/utils/factory.py code style ( #2349 )
2 years ago
yuxuan-lou
28e2d16794
[NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/graph_analysis.py code style ( #2340 )
2 years ago
YuliangLiu0306
9c9246c0d9
[device] alpha beta profiler ( #2311 )
...
* [device] alpha beta profiler
* add usage
* fix variable name
2 years ago