Commit Graph

2161 Commits (162251ab7844e4116a36d6e0fec2ac7ccd03f74d)

Author SHA1 Message Date
アマデウス 6302069c0e
[model checkpoint] updated communication ops for cpu tensors (#590)
3 years ago
アマデウス c50bfb807b
[model checkpoint] updated saving/loading for 1d layers (#594)
3 years ago
アマデウス 7636d518e1
[model checkpoint] updated saving/loading for 2d layers (#595)
3 years ago
アマデウス cd13b63832
[model checkpoint] reworked unified layers for ease of save/load states (#593)
3 years ago
アマデウス acae68eb04
[model checkpoint] updated checkpoint save/load utils (#592)
3 years ago
Ziyue Jiang 1c40ee8749
[TP] add assert for tp1d (#621)
3 years ago
ver217 369a288bf3
polish utils docstring (#620)
3 years ago
ver217 e619a651fb
polish optimizer docstring (#619)
3 years ago
ver217 8432dc7080
polish moe docstring (#618)
3 years ago
ver217 c5b488edf8
polish amp docstring (#616)
3 years ago
ver217 0ef8819c67
polish docstring of zero (#612)
3 years ago
LuGY 02b187c14f
[zero] add sampling time for memstats collector (#610)
3 years ago
ver217 9bee119104
[hotfix] fix sharded optim zero grad (#604)
3 years ago
アマデウス 297b8baae2
[model checkpoint] add gloo groups for cpu tensor communication (#589)
3 years ago
アマデウス 54e688b623
moved ensure_path_exists to utils.common (#591)
3 years ago
Jiarui Fang e956d93ac2
[refactor] memory utils (#577)
3 years ago
ver217 104cbbb313
[hotfix] add hybrid adam to __init__ (#584)
3 years ago
HELSON e6d50ec107
[zero] adapt zero for unsharded parameters (#561)
3 years ago
Wesley 46c9ba33da update code format
3 years ago
Wesley 666cfd094a fix parallel_input flag for Linear1D_Col gather_output
3 years ago
ver217 7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param (#571)
3 years ago
Jiarui Fang 7675366fce
[polish] rename col_attr -> colo_attr (#558)
3 years ago
Liang Bowen 2c45efc398
html refactor (#555)
3 years ago
Jiarui Fang d1211148a7
[utils] update colo tensor moving APIs (#553)
3 years ago
LuGY c44d797072
[docs] updated docs of hybrid adam and cpu adam (#552)
3 years ago
ver217 014bac0c49
[zero] hijack p.grad in sharded model (#554)
3 years ago
Jiarui Fang f552b11294
[zero] label state for param fp16 and grad (#551)
3 years ago
Jiarui Fang 214da761d4
[zero] add stateful tensor (#549)
3 years ago
Jiarui Fang 107b99ddb1
[zero] dump memory stats for sharded model (#548)
3 years ago
Ziyue Jiang 763dc325f1
[TP] Add gather_out arg to Linear (#541)
3 years ago
HELSON 8c90d4df54
[zero] add zero context manager to change config during initialization (#546)
3 years ago
Liang Bowen ec5086c49c Refactored docstrings to Google style
3 years ago
Jiarui Fang 53b1b6e340
[zero] non model data tracing (#545)
3 years ago
Jie Zhu 73d36618a6
[profiler] add MemProfiler (#356)
3 years ago
ver217 fb841dd5c5
[zero] optimize grad offload (#539)
3 years ago
Jiarui Fang 7d81b5b46e
[logging] polish logger format (#543)
3 years ago
ver217 1f90a3b129
[zero] polish ZeroInitContext (#540)
3 years ago
Jiarui Fang c11ff81b15
[zero] get memory usage of sharded optim v2. (#542)
3 years ago
HELSON a30e2b4c24
[zero] adapt for no-leaf module in zero (#535)
3 years ago
Jiarui Fang 705f56107c
[zero] refactor model data tracing (#537)
3 years ago
Jiarui Fang a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param (#538)
3 years ago
Jiarui Fang 37cb70feec
[zero] get memory usage for sharded param (#536)
3 years ago
Jiarui Fang 05e33b2578
[zero] fix grad offload (#528)
3 years ago
LuGY 105c5301c3
[zero] added hybrid adam, removed loss scale in adam (#527)
3 years ago
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522)
3 years ago
Frank Lee 3601b2bad0
[test] fixed rerun_on_exception and adapted test cases (#487)
3 years ago
Jiarui Fang 4d322b79da
[refactor] remove old zero code (#517)
3 years ago
LuGY 6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497)
3 years ago
Jiarui Fang 920c5889a7
[zero] add colo move inline (#521)
3 years ago
ver217 7be397ca9c
[log] polish disable_existing_loggers (#519)
3 years ago
Jiarui Fang 0bebda6ea5
[zero] fix init device bug in zero init context unittest (#516)
3 years ago
Jiarui Fang 7ef3507ace
[zero] show model data cuda memory usage after zero context init. (#515)
3 years ago
ver217 a2e61d61d4
[zero] zero init ctx enable rm_torch_payload_on_the_fly (#512)
3 years ago
Jiarui Fang 81145208d1
[install] run without rich (#513)
3 years ago
Jiarui Fang bca0c49a9d
[zero] use colo model data api in optimv2 (#511)
3 years ago
Jiarui Fang 9330be0f3c
[memory] set cuda mem frac (#506)
3 years ago
Jiarui Fang 0035b7be07
[memory] add model data tensor moving api (#503)
3 years ago
Jiarui Fang a445e118cf
[polish] polish singleton and global context (#500)
3 years ago
ver217 9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard (#495)
3 years ago
HELSON f24b5ed201
[MOE] remove old MoE legacy (#493)
3 years ago
ver217 c4c02424f3
[zero] sharded model manages ophooks individually (#492)
3 years ago
HELSON c9023d4078
[MOE] support PR-MOE (#488)
3 years ago
ver217 a9ecb4b244
[zero] polish sharded optimizer v2 (#490)
3 years ago
ver217 62b0a8d644
[zero] sharded optim support hybrid cpu adam (#486)
3 years ago
Jiarui Fang b334822163
[zero] polish sharded param name (#484)
3 years ago
HELSON d7ea63992b
[MOE] add FP32LinearGate for MOE in NaiveAMP context (#480)
3 years ago
Jiarui Fang 65c0f380c2
[format] polish name format for MOE (#481)
3 years ago
ver217 8d3250d74b
[zero] ZeRO supports pipeline parallel (#477)
3 years ago
Frank Lee 83a847d058
[test] added rerun on exception for testing (#475)
3 years ago
HELSON 7544347145
[MOE] add unittest for MOE experts layout, gradient handler and kernel (#469)
3 years ago
ver217 3cb3fc275e
zero init ctx receives a dp process group (#471)
3 years ago
HELSON aff9d354f7
[MOE] polish moe_env (#467)
3 years ago
HELSON bccbc15861
[MOE] changed parallelmode to dist process group (#460)
3 years ago
ver217 fc8e6db005
[doc] Update docstring for ZeRO (#459)
3 years ago
HELSON 84fd7c1d4d
add moe context, moe utilities and refactor gradient handler (#455)
3 years ago
ver217 a241f61b34
[zero] Update initialize for ZeRO (#458)
3 years ago
ver217 642846d6f9
update sharded optim and fix zero init ctx (#457)
3 years ago
Jiarui Fang e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" (#456)
3 years ago
ver217 e99af94ab8 rename variables
3 years ago
ver217 57567ee768 update sharded optim and fix zero init ctx
3 years ago
Jiarui Fang 0fcfb1e00d
[test] make zero engine test really work (#447)
3 years ago
Jiarui Fang 237d08e7ee
[zero] hybrid cpu adam (#445)
3 years ago
Frank Lee b72b8445c6
optimized context test time consumption (#446)
3 years ago
Jiarui Fang 496cbb0760
[hotfix] fix initialize bug with zero (#442)
3 years ago
Jiarui Fang 640a6cd304
[refactor] refactor the initialize method for new zero design (#431)
3 years ago
Frank Lee bffd85bf34
added testing module (#435)
3 years ago
HELSON dbdc9a7783
added Multiply Jitter and capacity factor eval for MOE (#434)
3 years ago
Frank Lee b03b3ae99c
fixed mem monitor device (#433)
3 years ago
Frank Lee 14a7094243
fixed fp16 optimizer none grad bug (#432)
3 years ago
ver217 fce9432f08 sync before creating empty grad
3 years ago
ver217 ea6905a898 free param.grad
3 years ago
ver217 9506a8beb2 use double buffer to handle grad
3 years ago
Jiarui Fang 54229cd33e
[log] better logging display with rich (#426)
3 years ago
HELSON 3f70a2b12f
removed noisy function during evaluation of MoE router (#419)
3 years ago
Jiarui Fang adebb3e041
[zero] cuda margin space for OS (#418)
3 years ago
Jiarui Fang 56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER (#417)
3 years ago
Jiarui Fang 23ba3fc450
[zero] refactor ShardedOptimV2 init method (#416)
3 years ago
Frank Lee e79ea44247
[fp16] refactored fp16 optimizer (#392)
3 years ago
Jiarui Fang 21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system (#395)
3 years ago
Jiarui Fang 370f567e7d
[zero] new interface for ShardedOptimv2 (#406)
3 years ago
LuGY a9c27be42e
Added tensor detector (#393)
3 years ago
1SAA 907ac4a2dc fixed error when no collective communication in CommProfiler
3 years ago
Frank Lee 2fe68b359a
Merge pull request #403 from ver217/feature/shard-strategy
3 years ago
HELSON dfd0363f68
polished output format for communication profiler and pcie profiler (#404)
3 years ago
ver217 63469c0f91 polish code
3 years ago
ver217 88804aee49 add bucket tensor shard strategy
3 years ago
HELSON 7c079d9c33
[hotfix] fixed bugs in ShardStrategy and PcieProfiler (#394)
3 years ago
Frank Lee 1e4bf85cdb fixed bug in activation checkpointing test (#387)
3 years ago
Jiarui Fang 3af13a2c3e [zero] polish ShardedOptimV2 unittest (#385)
3 years ago
Jiang Zhuo 5a4a3b77d9 fix format (#376)
3 years ago
LuGY de46450461 Added activation offload (#331)
3 years ago
Jiarui Fang 272ebfb57d [bug] shard param during initializing the ShardedModelV2 (#381)
3 years ago
HELSON 8c18eb0998 [profiler] Fixed bugs in CommProfiler and PcieProfiler (#377)
3 years ago
Jiarui Fang b5f43acee3 [zero] find miss code (#378)
3 years ago
Jiarui Fang 6b6002962a [zero] zero init context collects numel of model (#375)
3 years ago
HELSON 1ed7c24c02 Added PCIE profiler to detect data transmission (#373)
3 years ago
jiaruifang d9217e1960 Revert "[zero] bucketized tensor cpu gpu copy (#368)"
3 years ago
RichardoLuo 8539898ec6 flake8 style change (#363)
3 years ago
Kai Wang (Victor Kai) 53bb3bcc0a fix format (#362)
3 years ago
ziyu huang a77d73f22b fix format parallel_context.py (#359)
3 years ago
Zangwei c695369af0 fix format constants.py (#358)
3 years ago
Yuer867 4a0f8c2c50 fix format parallel_2p5d (#357)
3 years ago
Liang Bowen 7eb87f516d flake8 style (#352)
3 years ago
Xu Kai 54ee8d1254 Fix/format colossalai/engine/paramhooks/ (#350)
3 years ago
Maruyama_Aya e83970e3dc fix format ColossalAI\colossalai\context\process_group_initializer
3 years ago
yuxuan-lou 3b88eb2259 Flake8 code restyle
3 years ago
xuqifan897 148207048e Qifan formatted file ColossalAI\colossalai\nn\layer\parallel_1d\layers.py (#342)
3 years ago
Cautiousss 3a51d909af fix format (#332)
3 years ago
DouJS cbb6436ff0 fix format for dir-[parallel_3d] (#333)
3 years ago
ExtremeViscent eaac03ae1d [format] format fixed for kernel\cuda_native codes (#335)
3 years ago
Jiarui Fang 00670c870e [zero] bucketized tensor cpu gpu copy (#368)
3 years ago
Jiarui Fang 44e4891f57 [zero] able to place params on cpu after zero init context (#365)
3 years ago
ver217 253e54d98a fix grad shape
3 years ago
Jiarui Fang ea2872073f [zero] global model data memory tracer (#360)
3 years ago
Jiarui Fang cb34cd384d [test] polish zero-related unittest (#351)
3 years ago
HELSON 534e0bb118 Fixed import bug for no-tensorboard environment (#354)
3 years ago
HELSON c57e089824 [profile] added example for ProfilerContext (#349)
3 years ago
Jiarui Fang 10e2826426 move async memory to an individual directory (#345)
3 years ago
HELSON 425bb0df3f Added Profiler Context to manage all profilers (#340)
3 years ago
ver217 d0ae0f2215 [zero] update sharded optim v2 (#334)
3 years ago
jiaruifang 5663616921 polish code
3 years ago
jiaruifang 7977422aeb add bert for unittest; sharded model is not able to pass the bert case
3 years ago
Frank Lee 3d5d64bd10 refactored grad scaler (#338)
3 years ago
Frank Lee 6a3188167c set criterion as optional in colossalai initialize (#336)
3 years ago
Jie Zhu 3213554cc2 [profiler] add adaptive sampling to memory profiler (#330)
3 years ago
ver217 1388671699 [zero] Update sharded model v2 using sharded param v2 (#323)
3 years ago
Jiarui Fang 11bddb6e55 [zero] update zero context init with the updated test utils (#327)
3 years ago
HELSON 4f26fabe4f fixed strings in profiler outputs (#325)
3 years ago
Jiarui Fang de0468c7a8 [zero] zero init context (#321)
3 years ago
1SAA 73bff11288 Added profiler communication operations
3 years ago
LuGY a3269de5c9 [zero] cpu adam kernel (#288)
3 years ago
Jiarui Fang 90d3aef62c [zero] yet an improved sharded param (#311)
3 years ago
Jiarui Fang c9e7d9582d [zero] polish shard strategy (#310)
3 years ago
ver217 3092317b80 polish code
3 years ago
ver217 36f9a74ab2 fix sharded param hook and unit test
3 years ago
ver217 001ca624dd impl shard optim v2 and add unit test
3 years ago
Jiarui Fang 74f77e314b [zero] a shard strategy in granularity of tensor (#307)
3 years ago
Jiarui Fang 80364c7686 [zero] sharded tensor (#305)
3 years ago
Jie Zhu d344689274 [profiler] primary memory tracer
3 years ago
ver217 b105371ace rename shared adam to sharded optim v2
3 years ago
ver217 70814dc22f fix master params dtype
3 years ago
ver217 795210dd99 add fp32 master params in sharded adam
3 years ago
ver217 a109225bc2 add sharded adam
3 years ago
Jiarui Fang e17e92c54d Polish sharded parameter (#297)
3 years ago
ver217 7aef75ca42 [zero] add sharded grad and refactor grad hooks for ShardedModel (#287)
3 years ago
Frank Lee 9afb5c8b2d fixed typo in ShardParam (#294)
3 years ago
Frank Lee e17e54e32a added buffer sync to naive amp model wrapper (#291)
3 years ago
Jiarui Fang 8d653af408 add a common util for hooks registered on parameter. (#292)
3 years ago
Jie Zhu f867365aba bug fix: pass hook_list to engine (#273)
3 years ago
Jiarui Fang 5a560a060a Feature/zero (#279)
3 years ago
1SAA 82023779bb Added TPExpert for special situation
3 years ago
HELSON 36b8477228 Fixed parameter initialization in FFNExpert (#251)
3 years ago
アマデウス e13293bb4c fixed CI dataset directory; fixed import error of 2.5d accuracy (#255)
3 years ago
1SAA 219df6e685 Optimized MoE layer and fixed some bugs;
3 years ago
zbian 3dba070580 fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial
3 years ago
Frank Lee f5ca88ec97 fixed apex import (#227)
3 years ago
Frank Lee 3a1a9820b0 fixed mkdir conflict and align yapf config with flake (#220)
3 years ago
アマデウス 9ee197d0e9 moved env variables to global variables (#215)
3 years ago
Frank Lee 812357d63c
fixed utils docstring and add example to readme (#200)
3 years ago
Frank Lee 765db512b5
fixed ddp bug on torch 1.8 (#194)
3 years ago
Jiarui Fang 569357fea0
add pytorch hooks (#179)
3 years ago
ver217 708404d5f8
fix pipeline forward return tensors (#176)
3 years ago
HELSON 0f8c7f9804
Fixed docstring in colossalai (#171)
3 years ago
Frank Lee e2089c5c15
adapted for sequence parallel (#163)
3 years ago
puck_WCR 9473a1b9c8
AMP docstring/markdown update (#160)
3 years ago
Frank Lee f3802d6b06
fixed jit default setting (#154)
3 years ago
ver217 7bf1e98b97
pipeline last stage supports multi output (#151)
3 years ago
ver217 f68eddfb3d
refactor kernel (#142)
3 years ago
BoxiangW 4a3d3446b0
Update layer integration documentations (#108)
3 years ago
ver217 9ef05ed1fc
try import deepspeed when using zero (#130)
3 years ago
HELSON dceae85195
Added MoE parallel (#127)
3 years ago
ver217 293fb40c42
add scatter/gather optim for pipeline (#123)
3 years ago
Jiarui Fang 2c0c85d3d3
fix a bug in timer (#114)
3 years ago
ver217 7904baf6e1
fix layers/schedule for hybrid parallelization (#111) (#112)
3 years ago
ver217 a951bc6089
update default logger (#100) (#101)
3 years ago
ver217 96780e6ee4
Optimize pipeline schedule (#94)
3 years ago
アマデウス 01a80cd86d
Hotfix/Colossalai layers (#92)
3 years ago
アマデウス 0fedef4f3c
Layer integration (#83)
3 years ago
shenggan 5c3843dc98
add colossalai kernel module (#55)
3 years ago
ver217 8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer (#80)
3 years ago
Frank Lee 91c327cb44
fixed zero level 3 dtype bug (#76)
3 years ago
HELSON 632e622de8
overlap computation and communication in 2d operations (#75)
3 years ago
Frank Lee cd9c28e055
added CI for unit testing (#69)
3 years ago
Frank Lee 35813ed3c4
update examples and sphinx docs for the new api (#63)
3 years ago
ver217 7d3711058f
fix zero3 fp16 and add zero3 model context (#62)
3 years ago
Frank Lee 9a0466534c
update markdown docs (english) (#60)
3 years ago
Frank Lee da01c234e1
Develop/experiments (#59)
3 years ago
ver217 dbe62c67b8
add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)
3 years ago
Frank Lee 3defa32aee
Support TP-compatible Torch AMP and Update trainer API (#27)
3 years ago
ver217 3c7604ba30 update documentation
3 years ago
zbian 404ecbdcc6 Migrated project
3 years ago