Commit Graph

336 Commits (colossalchat)

Author | SHA1 | Message | Date
HELSON a1ce02d740
[zero] test gradient accumulation (#1964)
2 years ago
Jiarui Fang cc0ed7cf33
[Gemini] ZeROHookV2 -> GeminiZeROHook (#1972)
2 years ago
Jiarui Fang c4739a725a
[Gemini] polish memstats collector (#1962)
2 years ago
Jiarui Fang f7e276fa71
[Gemini] add GeminiAdamOptimizer (#1960)
2 years ago
HELSON 7066dfbf82
[zero] fix memory leak for zero2 (#1955)
2 years ago
HELSON 6e51d296f0
[zero] migrate zero1&2 (#1878)
2 years ago
Zihao 20e255d4e8
MemStatsCollectorStatic (#1765)
2 years ago
HELSON c6a1a62636
[hotfix] fix zero's incompatibility with checkpoint in torch-1.12 (#1786)
2 years ago
CsRic ea961d8fd1
[NFC] polish colossalai/zero/sharded_param/__init__.py code style (#1717)
2 years ago
HELSON 1468e4bcfc
[zero] add constant placement policy (#1705)
2 years ago
HELSON b28991dd0a
[feature] A new ZeRO implementation (#1644)
2 years ago
Jiarui Fang c5d39215f6
Revert "[feature] new zero implementation (#1623)" (#1643)
2 years ago
HELSON 5be118f405
[feature] new zero implementation (#1623)
2 years ago
HELSON f7f2248771
[moe] fix MoE bugs (#1628)
2 years ago
ver217 c9e8ce67b8
fix move fp32 shards (#1604)
2 years ago
Fazzie-Maqianli 06dccdde44
[NFC] polish colossalai/zero/sharded_model/reduce_scatter.py code style (#1554)
2 years ago
ver217 821c6172e2
[utils] Impl clip_grad_norm for ColoTensor and ZeroOptimizer (#1442)
2 years ago
ver217 6df3e19be9
[hotfix] zero optim prevents calling inner optim.zero_grad (#1422)
2 years ago
ver217 8dced41ad0
[zero] zero optim state_dict takes only_rank_0 (#1384)
2 years ago
ver217 828b9e5e0d
[hotfix] fix zero optim save/load state dict (#1381)
2 years ago
ver217 6b43c789fd
fix zero optim backward_by_grad and save/load (#1353)
2 years ago
ver217 d068af81a3
[doc] update rst and docstring (#1351)
2 years ago
ver217 ce470ba37e
[checkpoint] sharded optim save/load grad scaler (#1350)
2 years ago
ver217 7a05367101
[hotfix] shared model returns cpu state_dict (#1328)
2 years ago
Jiarui Fang 4165eabb1e
[hotfix] remove potiential circle import (#1307)
2 years ago
ver217 a45ddf2d5f
[hotfix] fix sharded optim step and clip_grad_norm (#1226)
2 years ago
Jiarui Fang a444633d13
warmup ratio configration (#1192)
2 years ago
Jiarui Fang 372f791444
[refactor] move chunk and chunkmgr to directory gemini (#1182)
2 years ago
ver217 9e1daa63d2
[zero] sharded optim supports loading local state dict (#1170)
2 years ago
ver217 561e90493f
[zero] zero optim supports loading local state dict (#1171)
2 years ago
ver217 8106d7b8c7
[ddp] refactor ColoDDP and ZeroDDP (#1146)
2 years ago
ver217 6690a61b4d
[hotfix] prevent nested ZeRO (#1140)
2 years ago
Frank Lee 15aab1476e
[zero] avoid zero hook spam by changing log to debug level (#1137)
2 years ago
ver217 a1a7899cae
[hotfix] fix zero init ctx numel (#1128)
2 years ago
ver217 f0a954f16d
[ddp] add set_params_to_ignore for ColoDDP (#1122)
2 years ago
Frank Lee 14e5b11d7f
[zero] fixed api consistency (#1098)
2 years ago
Frank Lee cb18922c47
[doc] added documentation to chunk and chunk manager (#1094)
2 years ago
ver217 1f894e033f
[gemini] zero supports gemini (#1093)
2 years ago
ver217 be01db37c8
[tensor] refactor chunk mgr and impl MemStatsCollectorV2 (#1077)
2 years ago
ver217 c5cd3b0f35
[zero] zero optim copy chunk rather than copy tensor (#1070)
3 years ago
Jiarui Fang 49832b2344
[refactory] add nn.parallel module (#1068)
3 years ago
ver217 e3fde4ee6b
fix import error in sharded model v2 (#1053)
3 years ago
ver217 51b9a49655
[zero] add zero optimizer for ColoTensor (#1046)
3 years ago
ver217 9492a561c3
[tensor] ColoTensor supports ZeRo (#1015)
3 years ago
ver217 7cfd6c827e
[zero] add load_state_dict for sharded model (#894)
3 years ago
ver217 c4d903e64a
[gemini] accelerate adjust_layout() (#878)
3 years ago
HELSON 425b4a96b8
[gemini] polish stateful_tensor_mgr (#876)
3 years ago
ver217 d7e0303d1e
[zero] use GeminiMemoryManager when sampling model data (#850)
3 years ago
ver217 0f7ed8c192
fix _post_init_method of zero init ctx (#847)
3 years ago
HELSON e5ea3fdeef
[gemini] add GeminiMemoryManger (#832)
3 years ago
Jiarui Fang 595bedf767
revert zero tensors back (#829)
3 years ago
Jiarui Fang 294a6060d0
[tensor] ZeRO use ColoTensor as the base class. (#828)
3 years ago
Jiarui Fang eb1b89908c
[refactor] moving InsertPostInitMethodToModuleSubClasses to utils. (#824)
3 years ago
Jiarui Fang 3ddbd1bce1
[gemini] collect cpu-gpu moving volume in each iteration (#813)
3 years ago
Jiarui Fang 61c20b44bc
[log] local throughput metrics (#811)
3 years ago
ver217 dd92b90a68
[DO NOT MERGE] [zero] init fp16 params directly in ZeroInitContext (#808)
3 years ago
Jiarui Fang e761ad2cd7
Revert "[zero] add ZeroTensorShardStrategy (#793)" (#806)
3 years ago
HELSON 88759e289e
[zero] add ZeroTensorShardStrategy (#793)
3 years ago
Jiarui Fang 4d9332b4c5
[refactor] moving memtracer to gemini (#801)
3 years ago
Jiarui Fang 8711c706f4
[hotfix] fix grad offload when enabling reuse_fp16_shard
3 years ago
ver217 f1fa1a675f
fix grad offload when enabling reuse_fp16_shard
3 years ago
HELSON 4c4388c46e
[hotfix] fix memory leak in zero (#781)
3 years ago
HELSON a65cbb7e4e
[zero] refactor shard and gather operation (#773)
3 years ago
ver217 6e553748a7
polish sharded optim docstr and warning (#770)
3 years ago
Jiarui Fang 10ef8afdd2
[gemini] init genimi individual directory (#754)
3 years ago
ver217 dcca614eee
[hotfix] fix test_stateful_tensor_mgr (#762)
3 years ago
ver217 a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model (#756)
3 years ago
ver217 8f7ce94b8e
[hotfix] fix auto tensor placement policy (#753)
3 years ago
HELSON 84c6700b2a
[zero] refactor memstats_collector (#746)
3 years ago
Jiarui Fang 3d7dc46d33
[zero] use factory pattern for tensor_placement_policy (#752)
3 years ago
ver217 4b048a8728
fix prepare grads in sharded optim (#749)
3 years ago
ver217 e396bb71f2
[zero] add tensor placement policies (#743)
3 years ago
HELSON 22c4b88d56
[zero] refactor ShardedParamV2 for convenience (#742)
3 years ago
ver217 e6212f56cd
[hotfix] fix memory leak in backward of sharded model (#741)
3 years ago
Jiarui Fang 7db3ccc79b
[hotfix] remove duplicated param register to stateful tensor manager (#728)
3 years ago
Jiarui Fang 4d90a7b513
[refactor] zero directory (#724)
3 years ago
Jiarui Fang 193dc8dacb
[refactor] refactor the memory utils (#715)
3 years ago
HELSON dbd96fe90a
[zero] check whether gradients have inf and nan in gpu (#712)
3 years ago
ver217 715b86eadd
[hotfix] fix stm cuda model data size (#710)
3 years ago
HELSON a9b8300d54
[zero] improve adaptability for not-shard parameters (#708)
3 years ago
ver217 ab8c6b4a0e
[zero] refactor memstats collector (#706)
3 years ago
HELSON ee112fe1da
[zero] adapt zero hooks for unsharded module (#699)
3 years ago
ver217 3c9cd5bb5e
[zero] stateful tensor manager (#687)
3 years ago
HELSON d7ecaf362b
[zero] fix init bugs in zero context (#686)
3 years ago
Jiarui Fang 59bf2dc590
[zero] initialize a stateful tensor manager (#614)
3 years ago
HELSON 17e73e62cc
[hotfix] fix bugs for unsharded parameters when restore data (#664)
3 years ago
Jiarui Fang 0aab52301e
[hotfix] fix a bug in model data stats tracing (#655)
3 years ago
Jiarui Fang 036404ca8a
Revert "[zero] polish init context (#645)" (#657)
3 years ago
Jiarui Fang 67b4928244
[zero] polish init context (#645)
3 years ago
HELSON 055fbf5be6
[zero] adapt zero for unsharded paramters (Optimizer part) (#601)
3 years ago
ver217 0ef8819c67
polish docstring of zero (#612)
3 years ago
ver217 9bee119104
[hotfix] fix sharded optim zero grad (#604)
3 years ago
Jiarui Fang e956d93ac2
[refactor] memory utils (#577)
3 years ago
HELSON e6d50ec107
[zero] adapt zero for unsharded parameters (#561)
3 years ago
ver217 7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param (#571)
3 years ago
Jiarui Fang 7675366fce
[polish] rename col_attr -> colo_attr (#558)
3 years ago
ver217 014bac0c49
[zero] hijack p.grad in sharded model (#554)
3 years ago
Jiarui Fang f552b11294
[zero] label state for param fp16 and grad (#551)
3 years ago
Jiarui Fang 214da761d4
[zero] add stateful tensor (#549)
3 years ago
Jiarui Fang 107b99ddb1
[zero] dump memory stats for sharded model (#548)
3 years ago
HELSON 8c90d4df54
[zero] add zero context manager to change config during initialization (#546)
3 years ago
Jiarui Fang 53b1b6e340
[zero] non model data tracing (#545)
3 years ago
ver217 fb841dd5c5
[zero] optimize grad offload (#539)
3 years ago
ver217 1f90a3b129
[zero] polish ZeroInitContext (#540)
3 years ago
Jiarui Fang c11ff81b15
[zero] get memory usage of sharded optim v2. (#542)
3 years ago
HELSON a30e2b4c24
[zero] adapt for no-leaf module in zero (#535)
3 years ago
Jiarui Fang 705f56107c
[zero] refactor model data tracing (#537)
3 years ago
Jiarui Fang a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param (#538)
3 years ago
Jiarui Fang 37cb70feec
[zero] get memory usage for sharded param (#536)
3 years ago
Jiarui Fang 05e33b2578
[zero] fix grad offload (#528)
3 years ago
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522)
3 years ago
Jiarui Fang 4d322b79da
[refactor] remove old zero code (#517)
3 years ago
Jiarui Fang 920c5889a7
[zero] add colo move inline (#521)
3 years ago
Jiarui Fang 0bebda6ea5
[zero] fix init device bug in zero init context unittest (#516)
3 years ago
Jiarui Fang 7ef3507ace
[zero] show model data cuda memory usage after zero context init. (#515)
3 years ago
ver217 a2e61d61d4
[zero] zero init ctx enable rm_torch_payload_on_the_fly (#512)
3 years ago
Jiarui Fang bca0c49a9d
[zero] use colo model data api in optimv2 (#511)
3 years ago
Jiarui Fang 0035b7be07
[memory] add model data tensor moving api (#503)
3 years ago
ver217 9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard (#495)
3 years ago
ver217 c4c02424f3
[zero] sharded model manages ophooks individually (#492)
3 years ago
ver217 a9ecb4b244
[zero] polish sharded optimizer v2 (#490)
3 years ago
ver217 62b0a8d644
[zero] sharded optim support hybrid cpu adam (#486)
3 years ago
Jiarui Fang b334822163
[zero] polish sharded param name (#484)
3 years ago
ver217 8d3250d74b
[zero] ZeRO supports pipeline parallel (#477)
3 years ago
ver217 3cb3fc275e
zero init ctx receives a dp process group (#471)
3 years ago
ver217 fc8e6db005
[doc] Update docstring for ZeRO (#459)
3 years ago
ver217 a241f61b34
[zero] Update initialize for ZeRO (#458)
3 years ago
ver217 642846d6f9
update sharded optim and fix zero init ctx (#457)
3 years ago
Jiarui Fang e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" (#456)
3 years ago
ver217 e99af94ab8
rename variables
3 years ago
ver217 57567ee768
update sharded optim and fix zero init ctx
3 years ago
Jiarui Fang 0fcfb1e00d
[test] make zero engine test really work (#447)
3 years ago
Jiarui Fang 237d08e7ee
[zero] hybrid cpu adam (#445)
3 years ago
Jiarui Fang 496cbb0760
[hotfix] fix initialize bug with zero (#442)
3 years ago
Jiarui Fang 640a6cd304
[refactory] refactory the initialize method for new zero design (#431)
3 years ago
ver217 fce9432f08
sync before creating empty grad
3 years ago
ver217 ea6905a898
free param.grad
3 years ago
ver217 9506a8beb2
use double buffer to handle grad
3 years ago
Jiarui Fang adebb3e041
[zero] cuda margin space for OS (#418)
3 years ago
Jiarui Fang 56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER (#417)
3 years ago
Jiarui Fang 23ba3fc450
[zero] refactory ShardedOptimV2 init method (#416)
3 years ago
Frank Lee e79ea44247
[fp16] refactored fp16 optimizer (#392)
3 years ago
Jiarui Fang 21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system (#395)
3 years ago
Jiarui Fang 370f567e7d
[zero] new interface for ShardedOptimv2 (#406)
3 years ago
ver217 63469c0f91
polish code
3 years ago
ver217 88804aee49
add bucket tensor shard strategy
3 years ago
HELSON 7c079d9c33
[hotfix] fixed bugs in ShardStrategy and PcieProfiler (#394)
3 years ago
Jiarui Fang 3af13a2c3e
[zero] polish ShardedOptimV2 unittest (#385)
3 years ago
Jiarui Fang 272ebfb57d
[bug] shard param during initializing the ShardedModelV2 (#381)
3 years ago
Jiarui Fang b5f43acee3
[zero] find miss code (#378)
3 years ago
Jiarui Fang 6b6002962a
[zero] zero init context collect numel of model (#375)
3 years ago
jiaruifang d9217e1960
Revert "[zero] bucketized tensor cpu gpu copy (#368)"
3 years ago
Jiarui Fang 00670c870e
[zero] bucketized tensor cpu gpu copy (#368)
3 years ago
Jiarui Fang 44e4891f57
[zero] able to place params on cpu after zero init context (#365)
3 years ago
ver217 253e54d98a
fix grad shape
3 years ago
Jiarui Fang ea2872073f
[zero] global model data memory tracer (#360)
3 years ago
Jiarui Fang cb34cd384d
[test] polish zero related unitest (#351)
3 years ago
ver217 d0ae0f2215
[zero] update sharded optim v2 (#334)
3 years ago
jiaruifang 5663616921
polish code
3 years ago
jiaruifang 7977422aeb
add bert for unitest and sharded model is not able to pass the bert case
3 years ago
ver217 1388671699
[zero] Update sharded model v2 using sharded param v2 (#323)
3 years ago
Jiarui Fang 11bddb6e55
[zero] update zero context init with the updated test utils (#327)
3 years ago
Jiarui Fang de0468c7a8
[zero] zero init context (#321)
3 years ago
LuGY a3269de5c9
[zero] cpu adam kernel (#288)
3 years ago
Jiarui Fang 90d3aef62c
[zero] yet an improved sharded param (#311)
3 years ago
Jiarui Fang c9e7d9582d
[zero] polish shard strategy (#310)
3 years ago
ver217 3092317b80
polish code
3 years ago
ver217 36f9a74ab2
fix sharded param hook and unit test
3 years ago
ver217 001ca624dd
impl shard optim v2 and add unit test
3 years ago
Jiarui Fang 74f77e314b
[zero] a shard strategy in granularity of tensor (#307)
3 years ago
Jiarui Fang 80364c7686
[zero] sharded tensor (#305)
3 years ago
ver217 b105371ace
rename shared adam to sharded optim v2
3 years ago
ver217 70814dc22f
fix master params dtype
3 years ago
ver217 795210dd99
add fp32 master params in sharded adam
3 years ago
ver217 a109225bc2
add sharded adam
3 years ago
Jiarui Fang e17e92c54d
Polish sharded parameter (#297)
3 years ago
ver217 7aef75ca42
[zero] add sharded grad and refactor grad hooks for ShardedModel (#287)
3 years ago
Frank Lee 9afb5c8b2d
fixed typo in ShardParam (#294)
3 years ago
Frank Lee e17e54e32a
added buffer sync to naive amp model wrapper (#291)
3 years ago
Jiarui Fang 5a560a060a
Feature/zero (#279)
3 years ago
HELSON 0f8c7f9804
Fixed docstring in colossalai (#171)
3 years ago
ver217 9ef05ed1fc
try import deepspeed when using zero (#130)
3 years ago
Frank Lee 91c327cb44
fixed zero level 3 dtype bug (#76)
3 years ago
Frank Lee 35813ed3c4
update examples and sphnix docs for the new api (#63)
3 years ago
ver217 7d3711058f
fix zero3 fp16 and add zero3 model context (#62)
3 years ago
Frank Lee da01c234e1
Develop/experiments (#59)
3 years ago