Commit Graph

112 Commits (abf6a262dc5a210c1f7d62617154eeebdc38253e)

Author        SHA1        Message  Date
Jiarui Fang   372f791444  [refactor] move chunk and chunkmgr to directory gemini (#1182)  2 years ago
ver217        6b2f2ab9bb  [ddp] ColoDDP uses bucket all-reduce (#1177)  2 years ago
Jiarui Fang   1b657f9ce1  [tensor] revert local view back (#1178)  2 years ago
Jiarui Fang   0dd4e2bbfb  [Tensor] rename some APIs in TensorSpec and Polish view unittest (#1176)  2 years ago
Ziyue Jiang   dd0420909f  [Tensor] rename parallel_action (#1174)  2 years ago
Jiarui Fang   aa7bef73d4  [Tensor] distributed view supports inter-process hybrid parallel (#1169)  2 years ago
Jiarui Fang   4b9bba8116  [ColoTensor] rename APIs and add output_replicate to ComputeSpec (#1168)  2 years ago
Jiarui Fang   f4ef224358  [Tensor] remove ParallelAction, use ComputeSpec instread (#1166)  2 years ago
Jiarui Fang   177c374401  remove gather out in parallel action (#1163)  2 years ago
Ziyue Jiang   955ac912de  remove log (#1160)  2 years ago
Jiarui Fang   07f9c781f9  [graph] improve the graph building. (#1157)  2 years ago
ver217        22717a856f  [tensor] add embedding bag op (#1156)  2 years ago
ver217        ae86151968  [tensor] add more element-wise ops (#1155)  2 years ago
ver217        54aabb8da4  [gemini] refactor gemini mgr (#1151)  2 years ago
ver217        8106d7b8c7  [ddp] refactor ColoDDP and ZeroDDP (#1146)  2 years ago
ver217        ccf3c58c89  embedding op use gather_out (#1143)  2 years ago
Frank Lee     15aab1476e  [zero] avoid zero hook spam by changing log to debug level (#1137)  2 years ago
ver217        e4f555f29a  [optim] refactor fused sgd (#1134)  2 years ago
ver217        d26902645e  [ddp] add save/load state dict for ColoDDP (#1127)  2 years ago
ver217        f0a954f16d  [ddp] add set_params_to_ignore for ColoDDP (#1122)  2 years ago
ver217        e127b4375b  cast colo ddp v2 inputs/outputs (#1120)  2 years ago
ver217        7d14b473f0  [gemini] gemini mgr supports "cpu" placement policy (#1118)  2 years ago
ver217        895c1c5ee7  [tensor] refactor param op hook (#1097)  2 years ago
Frank Lee     cb18922c47  [doc] added documentation to chunk and chunk manager (#1094)  3 years ago
ver217        1f894e033f  [gemini] zero supports gemini (#1093)  3 years ago
Frank Lee     2b2dc1c86b  [pipeline] refactor the pipeline module (#1087)  3 years ago
ver217        be01db37c8  [tensor] refactor chunk mgr and impl MemStatsCollectorV2 (#1077)  3 years ago
Ziyue Jiang   0653c63eaa  [Tensor] 1d row embedding (#1075)  3 years ago
Ziyue Jiang   4fc748f69b  [Tensor] fix optimizer for CPU parallel (#1069)  3 years ago
Jiarui Fang   49832b2344  [refactory] add nn.parallel module (#1068)  3 years ago
Ziyue Jiang   6754f1b77f  fix module utils bug (#1066)  3 years ago
Jiarui Fang   a00644079e  reorgnize colotensor directory (#1062)  3 years ago
Ziyue Jiang   df9dcbbff6  [Tensor] add hybrid device demo and fix bugs (#1059)  3 years ago
ver217        51b9a49655  [zero] add zero optimizer for ColoTensor (#1046)  3 years ago
ver217        9492a561c3  [tensor] ColoTensor supports ZeRo (#1015)  3 years ago
ver217        cefc29ff06  [tensor] impl ColoDDP for ColoTensor (#1009)  3 years ago
Ziheng Qin    571f12eff3  [NFC] polish colossalai/nn/layer/utils/common.py code style (#983)  3 years ago
shenggan      18542b47fc  [NFC] polish colossalai/nn/layer/parallel_2d/layers.py code style (#976)  3 years ago
Zirui Zhu     598cde4a0f  [NFC] polish colossalai/nn/layer/parallel_2p5d/layers.py code style (#972)  3 years ago
LuGY          fb5bc6cb28  [NFC] polish colossalai/nn/layer/parallel_3d/layers.py code style (#966)  3 years ago
ver217        58580b50fe  Revert "[NFC] Hotfix/format (#984)" (#986)  3 years ago
binmakeswell  0772828fba  [NFC] Hotfix/format (#984)  3 years ago
HELSON        e5ea3fdeef  [gemini] add GeminiMemoryManger (#832)  3 years ago
Ziyue Jiang   4b01da24cd  [TP] change the check assert in split batch 2d (#772)  3 years ago
アマデウス    b8899e0905  [TP] allow layernorm without bias (#750)  3 years ago
Frank Lee     eda30a058e  [compatibility] fixed tensor parallel compatibility with torch 1.9 (#700)  3 years ago
HELSON        a9b8300d54  [zero] improve adaptability for not-shard parameters (#708)  3 years ago
アマデウス    3fc8a204dc  []Corrected 3d vocab parallel embedding (#707)  3 years ago
HELSON        b31daed4cf  fix bugs in CPU adam (#633)  3 years ago
Liang Bowen   828e465622  [hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622)  3 years ago