1201 Commits (main)

Author SHA1 Message Date
ver217 1388671699 [zero] Update sharded model v2 using sharded param v2 (#323) 3 years ago
jiaruifang 799d105bb4 using pytest parametrize 3 years ago
jiaruifang dec24561cf show pytest parameterize 3 years ago
Jiarui Fang 11bddb6e55 [zero] update zero context init with the updated test utils (#327) 3 years ago
Frank Lee 6268446b81 [test] refactored testing components (#324) 3 years ago
Jiarui Fang de0468c7a8 [zero] zero init context (#321) 3 years ago
1SAA 73bff11288 Added profiler communication operations 3 years ago
LuGY a3269de5c9 [zero] cpu adam kernel (#288) 3 years ago
Jiarui Fang 90d3aef62c [zero] yet an improved sharded param (#311) 3 years ago
Jiarui Fang c9e7d9582d [zero] polish shard strategy (#310) 3 years ago
ver217 36f9a74ab2 fix sharded param hook and unit test 3 years ago
ver217 001ca624dd impl shard optim v2 and add unit test 3 years ago
Jiarui Fang 74f77e314b [zero] a shard strategy in granularity of tensor (#307) 3 years ago
Jiarui Fang 80364c7686 [zero] sharded tensor (#305) 3 years ago
Jie Zhu d344689274 [profiler] primary memory tracer 3 years ago
Jiarui Fang e17e92c54d Polish sharded parameter (#297) 3 years ago
ver217 7aef75ca42 [zero] add sharded grad and refactor grad hooks for ShardedModel (#287) 3 years ago
Frank Lee 27155b8513 added unit test for sharded optimizer (#293) 3 years ago
Frank Lee e17e54e32a added buffer sync to naive amp model wrapper (#291) 3 years ago
Jiarui Fang 8d653af408 add a common util for hooks registered on parameter. (#292) 3 years ago
Jiarui Fang 5a560a060a Feature/zero (#279) 3 years ago
1SAA 82023779bb Added TPExpert for special situation 3 years ago
1SAA 219df6e685 Optimized MoE layer and fixed some bugs; 3 years ago
zbian 3dba070580 fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial 3 years ago
アマデウス 9ee197d0e9 moved env variables to global variables; (#215) 3 years ago
Jiarui Fang 569357fea0 add pytorch hooks (#179) 3 years ago
Frank Lee e2089c5c15 adapted for sequence parallel (#163) 3 years ago
ver217 7bf1e98b97 pipeline last stage supports multi output (#151) 3 years ago
ver217 96780e6ee4 Optimize pipeline schedule (#94) 3 years ago
アマデウス 01a80cd86d Hotfix/Colossalai layers (#92) 3 years ago
アマデウス 0fedef4f3c Layer integration (#83) 3 years ago
ver217 8f02a88db2 add interleaved pipeline, fix naive amp and update pipeline model initializer (#80) 3 years ago
Frank Lee 91c327cb44 fixed zero level 3 dtype bug (#76) 3 years ago
Frank Lee cd9c28e055 added CI for unit testing (#69) 3 years ago
Frank Lee da01c234e1 Develop/experiments (#59) 3 years ago
Frank Lee 3defa32aee Support TP-compatible Torch AMP and Update trainer API (#27) 3 years ago
アマデウス 3245a69fc2 cleaned test scripts 3 years ago
zbian 404ecbdcc6 Migrated project 3 years ago