286 Commits (7f8b16635b42013b73e1cb1ffdebc07b4d71ac93)

Author SHA1 Message Date
Jiarui Fang   c11ff81b15  [zero] get memory usage of sharded optim v2. (#542)  3 years ago
HELSON        a30e2b4c24  [zero] adapt for no-leaf module in zero (#535)  3 years ago
Jiarui Fang   705f56107c  [zero] refactor model data tracing (#537)  3 years ago
Jiarui Fang   a590ed0ba3  [zero] improve the accuracy of get_memory_usage of sharded param (#538)  3 years ago
Jiarui Fang   37cb70feec  [zero] get memory usage for sharded param (#536)  3 years ago
Jiarui Fang   05e33b2578  [zero] fix grad offload (#528)  3 years ago
Jiarui Fang   8d8c5407c0  [zero] refactor model data tracing (#522)  3 years ago
Jiarui Fang   4d322b79da  [refactor] remove old zero code (#517)  3 years ago
Jiarui Fang   920c5889a7  [zero] add colo move inline (#521)  3 years ago
Jiarui Fang   0bebda6ea5  [zero] fix init device bug in zero init context unittest (#516)  3 years ago
Jiarui Fang   7ef3507ace  [zero] show model data cuda memory usage after zero context init. (#515)  3 years ago
ver217        a2e61d61d4  [zero] zero init ctx enable rm_torch_payload_on_the_fly (#512)  3 years ago
Jiarui Fang   bca0c49a9d  [zero] use colo model data api in optimv2 (#511)  3 years ago
Jiarui Fang   0035b7be07  [memory] add model data tensor moving api (#503)  3 years ago
ver217        9ec1ce6ab1  [zero] sharded model support the reuse of fp16 shard (#495)  3 years ago
ver217        c4c02424f3  [zero] sharded model manages ophooks individually (#492)  3 years ago
ver217        a9ecb4b244  [zero] polish sharded optimizer v2 (#490)  3 years ago
ver217        62b0a8d644  [zero] sharded optim support hybrid cpu adam (#486)  3 years ago
Jiarui Fang   b334822163  [zero] polish sharded param name (#484)  3 years ago
ver217        8d3250d74b  [zero] ZeRO supports pipeline parallel (#477)  3 years ago
ver217        3cb3fc275e  zero init ctx receives a dp process group (#471)  3 years ago
ver217        fc8e6db005  [doc] Update docstring for ZeRO (#459)  3 years ago
ver217        a241f61b34  [zero] Update initialize for ZeRO (#458)  3 years ago
ver217        642846d6f9  update sharded optim and fix zero init ctx (#457)  3 years ago
Jiarui Fang   e2e9f82588  Revert "[zero] update sharded optim and fix zero init ctx" (#456)  3 years ago
ver217        e99af94ab8  rename variables  3 years ago
ver217        57567ee768  update sharded optim and fix zero init ctx  3 years ago
Jiarui Fang   0fcfb1e00d  [test] make zero engine test really work (#447)  3 years ago
Jiarui Fang   237d08e7ee  [zero] hybrid cpu adam (#445)  3 years ago
Jiarui Fang   496cbb0760  [hotfix] fix initialize bug with zero (#442)  3 years ago
Jiarui Fang   640a6cd304  [refactory] refactory the initialize method for new zero design (#431)  3 years ago
ver217        fce9432f08  sync before creating empty grad  3 years ago
ver217        ea6905a898  free param.grad  3 years ago
ver217        9506a8beb2  use double buffer to handle grad  3 years ago
Jiarui Fang   adebb3e041  [zero] cuda margin space for OS (#418)  3 years ago
Jiarui Fang   56bb412e72  [polish] use GLOBAL_MODEL_DATA_TRACER (#417)  3 years ago
Jiarui Fang   23ba3fc450  [zero] refactory ShardedOptimV2 init method (#416)  3 years ago
Frank Lee     e79ea44247  [fp16] refactored fp16 optimizer (#392)  3 years ago
Jiarui Fang   21dc54e019  [zero] memtracer to record cuda memory usage of model data and overall system (#395)  3 years ago
Jiarui Fang   370f567e7d  [zero] new interface for ShardedOptimv2 (#406)  3 years ago
ver217        63469c0f91  polish code  3 years ago
ver217        88804aee49  add bucket tensor shard strategy  3 years ago
HELSON        7c079d9c33  [hotfix] fixed bugs in ShardStrategy and PcieProfiler (#394)  3 years ago
Jiarui Fang   3af13a2c3e  [zero] polish ShardedOptimV2 unittest (#385)  3 years ago
Jiarui Fang   272ebfb57d  [bug] shard param during initializing the ShardedModelV2 (#381)  3 years ago
Jiarui Fang   b5f43acee3  [zero] find miss code (#378)  3 years ago
Jiarui Fang   6b6002962a  [zero] zero init context collect numel of model (#375)  3 years ago
jiaruifang    d9217e1960  Revert "[zero] bucketized tensor cpu gpu copy (#368)"  3 years ago
Jiarui Fang   00670c870e  [zero] bucketized tensor cpu gpu copy (#368)  3 years ago
Jiarui Fang   44e4891f57  [zero] able to place params on cpu after zero init context (#365)  3 years ago