65 Commits (dec24561cf4048219dac98401b70e9fc35e985ad)

Author SHA1 Message Date
Jiarui Fang 11bddb6e55 [zero] update zero context init with the updated test utils (#327) 3 years ago
HELSON 4f26fabe4f fixed strings in profiler outputs (#325) 3 years ago
Jiarui Fang de0468c7a8 [zero] zero init context (#321) 3 years ago
1SAA 73bff11288 Added profiler communication operations 3 years ago
LuGY a3269de5c9 [zero] cpu adam kernel (#288) 3 years ago
Jiarui Fang 90d3aef62c [zero] yet an improved sharded param (#311) 3 years ago
Jiarui Fang c9e7d9582d [zero] polish shard strategy (#310) 3 years ago
ver217 3092317b80 polish code 3 years ago
ver217 36f9a74ab2 fix sharded param hook and unit test 3 years ago
ver217 001ca624dd impl shard optim v2 and add unit test 3 years ago
Jiarui Fang 74f77e314b [zero] a shard strategy in granularity of tensor (#307) 3 years ago
Jiarui Fang 80364c7686 [zero] sharded tensor (#305) 3 years ago
Jie Zhu d344689274 [profiler] primary memory tracer 3 years ago
ver217 b105371ace rename shared adam to sharded optim v2 3 years ago
ver217 70814dc22f fix master params dtype 3 years ago
ver217 795210dd99 add fp32 master params in sharded adam 3 years ago
ver217 a109225bc2 add sharded adam 3 years ago
Jiarui Fang e17e92c54d Polish sharded parameter (#297) 3 years ago
ver217 7aef75ca42 [zero] add sharded grad and refactor grad hooks for ShardedModel (#287) 3 years ago
Frank Lee 9afb5c8b2d fixed typo in ShardParam (#294) 3 years ago
Frank Lee e17e54e32a added buffer sync to naive amp model wrapper (#291) 3 years ago
Jiarui Fang 8d653af408 add a common util for hooks registered on parameter. (#292) 3 years ago
Jie Zhu f867365aba bug fix: pass hook_list to engine (#273) 3 years ago
Jiarui Fang 5a560a060a Feature/zero (#279) 3 years ago
1SAA 82023779bb Added TPExpert for special situation 3 years ago
HELSON 36b8477228 Fixed parameter initialization in FFNExpert (#251) 3 years ago
アマデウス e13293bb4c fixed CI dataset directory; fixed import error of 2.5d accuracy (#255) 3 years ago
1SAA 219df6e685 Optimized MoE layer and fixed some bugs; 3 years ago
zbian 3dba070580 fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial 3 years ago
Frank Lee f5ca88ec97 fixed apex import (#227) 3 years ago
Frank Lee 3a1a9820b0 fixed mkdir conflict and align yapf config with flake (#220) 3 years ago
アマデウス 9ee197d0e9 moved env variables to global variables; (#215) 3 years ago
Frank Lee 812357d63c fixed utils docstring and add example to readme (#200) 3 years ago
Frank Lee 765db512b5 fixed ddp bug on torch 1.8 (#194) 3 years ago
Jiarui Fang 569357fea0 add pytorch hooks (#179) 3 years ago
ver217 708404d5f8 fix pipeline forward return tensors (#176) 3 years ago
HELSON 0f8c7f9804 Fixed docstring in colossalai (#171) 3 years ago
Frank Lee e2089c5c15 adapted for sequence parallel (#163) 3 years ago
puck_WCR 9473a1b9c8 AMP docstring/markdown update (#160) 3 years ago
Frank Lee f3802d6b06 fixed jit default setting (#154) 3 years ago
ver217 7bf1e98b97 pipeline last stage supports multi output (#151) 3 years ago
ver217 f68eddfb3d refactor kernel (#142) 3 years ago
BoxiangW 4a3d3446b0 Update layer integration documentations (#108) 3 years ago
ver217 9ef05ed1fc try import deepspeed when using zero (#130) 3 years ago
HELSON dceae85195 Added MoE parallel (#127) 3 years ago
ver217 293fb40c42 add scatter/gather optim for pipeline (#123) 3 years ago
Jiarui Fang 2c0c85d3d3 fix a bug in timer (#114) 3 years ago
ver217 7904baf6e1 fix layers/schedule for hybrid parallelization (#111) (#112) 3 years ago
ver217 a951bc6089 update default logger (#100) (#101) 3 years ago
ver217 96780e6ee4 Optimize pipeline schedule (#94) 3 years ago