Commit Graph

175 Commits (26d4ab8b03d1b7f23f80cf84fb7f169d0796c69e)

Author SHA1 Message Date
HELSON d7ecaf362b
[zero] fix init bugs in zero context (#686)
* adapt model weight initialization for methods in PyTorch nn.init
2022-04-07 17:38:45 +08:00
Jiarui Fang 0aab52301e
[hotfix] fix a bug in model data stats tracing (#655) 2022-04-03 21:48:06 +08:00
YuliangLiu0306 ade05a5d83
[refactor] pipeline, put runtime schedule into engine. (#627) 2022-04-03 20:46:45 +08:00
HELSON e5d615aeee
[hotfix] fix bugs in testing (#659)
* remove hybrid adam in test_moe_zero_optim

* fix activation checkpointing and its unit test
2022-04-02 21:58:47 +08:00
HELSON b31daed4cf
fix bugs in CPU adam (#633)
* add cpu adam counter for all cpu adam

* fixed updating error in adam kernel
2022-04-02 17:04:05 +08:00
HELSON 055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) (#601) 2022-04-01 20:10:47 +08:00
アマデウス 354b7954d1
[model checkpoint] added unit tests for checkpoint save/load (#599) 2022-04-01 16:53:32 +08:00
FredHuang99 93f14d2a33
[zero] test zero tensor utils (#609) 2022-04-01 15:16:59 +08:00
Jiarui Fang e956d93ac2
[refactor] memory utils (#577) 2022-04-01 09:22:33 +08:00
HELSON e6d50ec107
[zero] adapt zero for unsharded parameters (#561)
* support existing sharded and unsharded parameters in zero

* add unit test for moe-zero model init

* polish moe gradient handler
2022-03-31 18:34:11 +08:00
ver217 7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param (#571) 2022-03-31 16:26:54 +08:00
Jiarui Fang 7675366fce
[polish] rename col_attr -> colo_attr (#558) 2022-03-31 12:25:45 +08:00
ver217 014bac0c49
[zero] hijack p.grad in sharded model (#554)
* hijack p.grad in sharded model

* polish comments

* polish comments
2022-03-30 18:14:50 +08:00
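
For context, a minimal sketch of what hijacking `p.grad` can look like. The helper name `hijack_grads` and the fp32-buffer design are illustrative assumptions, not the actual ShardedModelV2 code:

```python
import torch

def hijack_grads(model: torch.nn.Module):
    """Route each parameter's gradient into a persistent fp32 buffer via a
    tensor hook; whatever the hook returns is what gets accumulated into
    p.grad, so the original fp16 gradient never survives past the hook."""
    fp32_grads = {}
    for p in model.parameters():
        if not p.requires_grad:
            continue
        fp32_grads[p] = torch.zeros_like(p, dtype=torch.float32)

        def make_hook(param):
            def hook(grad):
                fp32_grads[param].add_(grad.float())   # accumulate in fp32
                return torch.zeros_like(grad)          # neutralize p.grad
            return hook

        p.register_hook(make_hook(p))
    return fp32_grads
```
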
Jiarui Fang f552b11294
[zero] label state for param fp16 and grad (#551) 2022-03-30 15:57:46 +08:00
Jiarui Fang 214da761d4
[zero] add stateful tensor (#549) 2022-03-30 13:51:37 +08:00
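
A stateful tensor in this spirit might be a thin wrapper pairing a payload with an explicit lifecycle state; the state names below are illustrative, not necessarily the ones the commit introduces:

```python
from enum import Enum
import torch

class TensorState(Enum):
    FREE = 0
    HOLD = 1
    COMPUTE = 2

class StatefulTensor:
    """A tensor plus an explicit lifecycle state, so a memory manager can
    decide what may be moved or freed (illustrative sketch)."""
    def __init__(self, payload: torch.Tensor,
                 state: TensorState = TensorState.HOLD):
        self._payload = payload
        self.state = state

    @property
    def payload(self) -> torch.Tensor:
        assert self.state != TensorState.FREE, 'payload was released'
        return self._payload

    def free(self):
        self._payload = None
        self.state = TensorState.FREE
```
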
HELSON 8c90d4df54
[zero] add zero context manager to change config during initialization (#546) 2022-03-29 17:57:59 +08:00
Liang Bowen ec5086c49c Refactored docstring to google style 2022-03-29 17:17:47 +08:00
Jiarui Fang 53b1b6e340
[zero] non model data tracing (#545) 2022-03-29 15:45:48 +08:00
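
The idea behind model vs. non-model data tracing can be sketched as follows; `ModelDataTracer` and its methods are hypothetical names, and the real tracer is more involved:

```python
import torch

class ModelDataTracer:
    """Count bytes owned by model data (parameters, gradients); whatever
    else CUDA has allocated is attributed to non-model data."""
    def __init__(self):
        self.model_data_bytes = 0

    def add_tensor(self, t: torch.Tensor):
        self.model_data_bytes += t.numel() * t.element_size()

    def delete_tensor(self, t: torch.Tensor):
        self.model_data_bytes -= t.numel() * t.element_size()

    def non_model_data_bytes(self) -> int:
        return torch.cuda.memory_allocated() - self.model_data_bytes
```
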
ver217 1f90a3b129
[zero] polish ZeroInitContext (#540) 2022-03-29 09:09:04 +08:00
Jiarui Fang c11ff81b15
[zero] get memory usage of sharded optim v2. (#542) 2022-03-29 09:08:18 +08:00
HELSON a30e2b4c24
[zero] adapt for no-leaf module in zero (#535)
only process a module's own parameters in the Zero context

add zero hooks for all modules that contain parameters

gather only the parameters belonging to the module itself
2022-03-28 17:42:18 +08:00
Jiarui Fang 705f56107c
[zero] refactor model data tracing (#537) 2022-03-28 16:38:18 +08:00
Jiarui Fang a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param (#538) 2022-03-28 16:19:19 +08:00
Jiarui Fang 37cb70feec
[zero] get memory usage for sharded param (#536) 2022-03-28 15:01:21 +08:00
LuGY 105c5301c3
[zero] added hybrid adam, removed loss scale in adam (#527)
* [zero] added hybrid adam, removed loss scale of adam

* remove useless code
2022-03-25 18:03:54 +08:00
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522) 2022-03-25 18:03:32 +08:00
Frank Lee 3601b2bad0
[test] fixed rerun_on_exception and adapted test cases (#487) 2022-03-25 17:25:12 +08:00
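
A `rerun_on_exception` decorator for flaky distributed tests could look roughly like this; the signature and defaults are assumptions, not the actual testing utility:

```python
import functools
import time

def rerun_on_exception(exception_type=Exception, max_try=3, pause=0.5):
    """Re-run the wrapped test when it raises exception_type, up to
    max_try attempts, re-raising on the final failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_try + 1):
                try:
                    return func(*args, **kwargs)
                except exception_type:
                    if attempt == max_try:
                        raise
                    time.sleep(pause)  # let ports/processes settle
        return wrapper
    return decorator
```
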
Jiarui Fang 4d322b79da
[refactor] remove old zero code (#517) 2022-03-25 14:54:39 +08:00
LuGY 6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497) 2022-03-25 14:15:53 +08:00
Jiarui Fang 920c5889a7
[zero] add colo move inline (#521) 2022-03-25 14:02:55 +08:00
Jiarui Fang 0bebda6ea5
[zero] fix init device bug in zero init context unittest (#516) 2022-03-25 12:24:18 +08:00
Jiarui Fang 7ef3507ace
[zero] show model data cuda memory usage after zero context init. (#515) 2022-03-25 11:23:35 +08:00
Jiarui Fang 9330be0f3c
[memory] set cuda mem frac (#506) 2022-03-24 16:57:13 +08:00
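
Capping the CUDA memory fraction is available directly in PyTorch; a minimal usage sketch (the 0.4 value is just an example):

```python
import torch

# Allow this process to use at most 40% of the device's total memory;
# allocations beyond the cap raise an OOM instead of growing the cache.
torch.cuda.set_per_process_memory_fraction(0.4, device=0)
```
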
Jiarui Fang 0035b7be07
[memory] add model data tensor moving api (#503) 2022-03-24 14:29:41 +08:00
Jiarui Fang a445e118cf
[polish] polish singleton and global context (#500) 2022-03-23 18:03:39 +08:00
ver217 9ec1ce6ab1
[zero] sharded model supports the reuse of fp16 shard (#495)
* sharded model supports reusing the fp16 shard

* rename variable

* polish code

* polish code

* polish code
2022-03-23 14:59:59 +08:00
ver217 62b0a8d644
[zero] sharded optim supports hybrid cpu adam (#486)
* sharded optim supports hybrid cpu adam

* update unit test

* polish docstring
2022-03-22 14:56:59 +08:00
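
A hybrid setup plausibly runs the Adam math wherever each parameter's fp32 master copy lives and then refreshes the fp16 training shard. This sketch is an assumption about the design, with `adam_fn` standing in for any Adam update routine (for instance the reference step sketched further down, next to the cpu adam kernel commit):

```python
import torch

@torch.no_grad()
def hybrid_step_for_param(fp16_shard, fp32_master, grad,
                          exp_avg, exp_avg_sq, step, lr, adam_fn):
    """One hybrid step for a single sharded parameter: the Adam math runs
    wherever the fp32 master copy lives (CPU when offloaded, GPU
    otherwise), then the fp16 training shard is refreshed from it."""
    g = grad.to(fp32_master.device, dtype=torch.float32)
    adam_fn(fp32_master, g, exp_avg, exp_avg_sq, step, lr)
    fp16_shard.copy_(fp32_master.to(fp16_shard.device,
                                    dtype=torch.float16))
```
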
Jiarui Fang b334822163
[zero] polish sharded param name (#484)
* [zero] polish sharded param name

* polish code

* polish

* polish code

* polish

* polish

* polish
2022-03-22 14:36:16 +08:00
Jiarui Fang 65c0f380c2
[format] polish name format for MOE (#481) 2022-03-21 23:19:47 +08:00
HELSON 7544347145
[MOE] add unit tests for MOE experts layout, gradient handler and kernel (#469) 2022-03-21 13:35:04 +08:00
HELSON 84fd7c1d4d
add moe context, moe utilities and refactor gradient handler (#455) 2022-03-18 16:38:32 +08:00
Frank Lee af185b5519
[test] fixed amp convergence comparison test (#454) 2022-03-18 16:28:16 +08:00
ver217 a241f61b34
[zero] Update initialize for ZeRO (#458)
* polish code

* shard strategy receive pg in shard() / gather()

* update zero engine

* polish code
2022-03-18 16:18:31 +08:00
ver217 642846d6f9
update sharded optim and fix zero init ctx (#457) 2022-03-18 15:44:47 +08:00
Jiarui Fang e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" (#456)
* Revert "polish code"

This reverts commit 8cf7ff08cf.

* Revert "rename variables"

This reverts commit e99af94ab8.

* Revert "remove surplus imports"

This reverts commit 46add4a5c5.

* Revert "update sharded optim and fix zero init ctx"

This reverts commit 57567ee768.
2022-03-18 15:22:43 +08:00
ver217 8cf7ff08cf polish code 2022-03-18 14:25:25 +08:00
ver217 46add4a5c5 remove surplus imports 2022-03-18 14:25:25 +08:00
ver217 57567ee768 update sharded optim and fix zero init ctx 2022-03-18 14:25:25 +08:00
Frank Lee f27d801a13
[test] optimized zero data parallel test (#452) 2022-03-18 11:35:54 +08:00
Jiarui Fang 0fcfb1e00d
[test] make zero engine test really work (#447) 2022-03-17 17:24:25 +08:00
Frank Lee bb2790cf0b
optimize engine and trainer test (#448) 2022-03-17 15:44:17 +08:00
Frank Lee b72b8445c6
optimized context test time consumption (#446) 2022-03-17 14:40:52 +08:00
Jiarui Fang 496cbb0760
[hotfix] fix initialize bug with zero (#442) 2022-03-17 13:16:22 +08:00
Jiarui Fang 17b8274f8a
[unitest] polish zero config in unittest (#438) 2022-03-17 10:20:53 +08:00
Jiarui Fang 640a6cd304
[refactor] refactor the initialize method for the new zero design (#431) 2022-03-16 19:29:37 +08:00
ver217 fce9432f08 sync before creating empty grad 2022-03-16 14:24:09 +08:00
Jiarui Fang f9c762df85
[test] merge zero optim tests (#428) 2022-03-16 12:22:45 +08:00
Jiarui Fang 5d7dc3525b
[hotfix] run cpu adam unittest in pytest (#424) 2022-03-16 10:39:55 +08:00
Jiarui Fang adebb3e041
[zero] cuda margin space for OS (#418) 2022-03-15 12:02:19 +08:00
Jiarui Fang 56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER (#417) 2022-03-15 11:29:46 +08:00
Jiarui Fang 23ba3fc450
[zero] refactor ShardedOptimV2 init method (#416) 2022-03-15 10:45:55 +08:00
Frank Lee e79ea44247
[fp16] refactored fp16 optimizer (#392) 2022-03-15 10:05:38 +08:00
Jiarui Fang 21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system (#395) 2022-03-14 22:05:30 +08:00
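
One way to record overall usage alongside model data is a sampling thread; `AsyncMemoryMonitor` here is an illustrative name and the real memtracer differs in detail:

```python
import threading
import time
import torch

class AsyncMemoryMonitor:
    """Sample torch.cuda.memory_allocated() in a background thread so the
    peak between start() and finish() can be reported."""
    def __init__(self, interval: float = 0.01):
        self.interval = interval
        self.samples = []
        self._stop = threading.Event()
        self._thread = None

    def start(self):
        self.samples.clear()
        self._stop.clear()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while not self._stop.is_set():
            self.samples.append(torch.cuda.memory_allocated())
            time.sleep(self.interval)

    def finish(self) -> int:
        self._stop.set()
        self._thread.join()
        return max(self.samples, default=0)
```
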
Jiarui Fang a37bf1bc42
[hotfix] rm test_tensor_detector.py (#413) 2022-03-14 21:39:48 +08:00
Jiarui Fang 370f567e7d
[zero] new interface for ShardedOptimv2 (#406) 2022-03-14 20:48:41 +08:00
LuGY a9c27be42e
Added tensor detector (#393)
* Added tensor detector

* Added the - states

* Allowed changing include_cpu when calling detect()
2022-03-14 18:01:46 +08:00
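
Judging by the bullets, the detector walks live objects; a minimal sketch with the same `include_cpu` switch (the implementation details are assumed):

```python
import gc
import torch

def detect(include_cpu: bool = False):
    """Print every live tensor the garbage collector can see; by default
    only CUDA tensors, optionally CPU ones too."""
    for obj in gc.get_objects():
        try:
            if torch.is_tensor(obj) and (include_cpu or obj.is_cuda):
                print(type(obj).__name__, tuple(obj.shape),
                      obj.dtype, obj.device)
        except ReferenceError:
            # weakly referenced objects can vanish mid-iteration
            continue
```
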
ver217 54fd37f0e0 polish unit test 2022-03-14 15:06:02 +08:00
Frank Lee 1e4bf85cdb fixed bug in activation checkpointing test (#387) 2022-03-11 15:50:28 +08:00
Jiarui Fang 3af13a2c3e [zero] polish ShardedOptimV2 unittest (#385)
* place params on cpu after zero init context

* polish code

* bucketzed cpu gpu tensor transter

* find a bug in sharded optim unittest

* add offload unittest for ShardedOptimV2.

* polish code and make it more robust
2022-03-11 15:50:28 +08:00
Frank Lee 526a318032 [unit test] Refactored test cases with component func (#339)
* refactored test with component func

* fixed bug
2022-03-11 15:50:28 +08:00
LuGY de46450461 Added activation offload (#331)
* Added activation offload

* Fixed the import bug, used the pytest
2022-03-11 15:50:28 +08:00
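
On recent PyTorch (1.10+), activation offload can be expressed with saved-tensor hooks; this sketch is one possible realization, not the code from the commit:

```python
import torch

class activation_offload(torch.autograd.graph.saved_tensors_hooks):
    """Move activations saved for backward to CPU during the forward pass
    and fetch them back on demand during backward."""
    def __init__(self):
        def pack(t: torch.Tensor):
            return t.device, t.to('cpu', non_blocking=True)

        def unpack(packed):
            device, t = packed
            return t.to(device, non_blocking=True)

        super().__init__(pack, unpack)

# usage: activations inside the block live on CPU between passes
# with activation_offload():
#     loss = model(batch).sum()
# loss.backward()
```
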
Jiarui Fang b5f43acee3 [zero] find missing code (#378) 2022-03-11 15:50:28 +08:00
Jiarui Fang 6b6002962a [zero] zero init context collects numel of model (#375) 2022-03-11 15:50:28 +08:00
jiaruifang d9217e1960 Revert "[zero] bucketized tensor cpu gpu copy (#368)"
This reverts commit bef05489b6.
2022-03-11 15:50:28 +08:00
Jiarui Fang 00670c870e [zero] bucketized tensor cpu gpu copy (#368) 2022-03-11 15:50:28 +08:00
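
Bucketizing amortizes per-copy overhead: small tensors are packed into one flat pinned buffer and moved with a single transfer. A rough sketch, assuming same-dtype tensors and a hypothetical bucket size:

```python
import torch

def bucketed_copy_to_cuda(tensors, bucket_mb: int = 64):
    """Copy a list of same-dtype CPU tensors to CUDA in large buckets,
    returning views into the transferred flat buffers."""
    limit = bucket_mb * 1024 * 1024
    out, bucket, nbytes = [], [], 0

    def flush():
        nonlocal bucket, nbytes
        if not bucket:
            return
        flat = torch.cat([t.reshape(-1) for t in bucket]).pin_memory()
        flat_cuda = flat.to('cuda', non_blocking=True)  # one H2D copy
        offset = 0
        for t in bucket:
            out.append(flat_cuda[offset:offset + t.numel()].view_as(t))
            offset += t.numel()
        bucket, nbytes = [], 0

    for t in tensors:
        bucket.append(t)
        nbytes += t.numel() * t.element_size()
        if nbytes >= limit:
            flush()
    flush()
    return out
```
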
Jiarui Fang 44e4891f57 [zero] able to place params on cpu after zero init context (#365)
* place params on cpu after zero init context

* polish code
2022-03-11 15:50:28 +08:00
Jiarui Fang ea2872073f [zero] global model data memory tracer (#360) 2022-03-11 15:50:28 +08:00
Jiarui Fang cb34cd384d [test] polish zero-related unit tests (#351) 2022-03-11 15:50:28 +08:00
ver217 532ae79cb0 add test sharded optim with cpu adam (#347) 2022-03-11 15:50:28 +08:00
HELSON 425bb0df3f Added Profiler Context to manage all profilers (#340) 2022-03-11 15:50:28 +08:00
ver217 d0ae0f2215 [zero] update sharded optim v2 (#334) 2022-03-11 15:50:28 +08:00
ver217 2b8cddd40e skip bert in test engine 2022-03-11 15:50:28 +08:00
ver217 f5f0ad266e fix bert unit test 2022-03-11 15:50:28 +08:00
jiaruifang d271f2596b polish engine unit test 2022-03-11 15:50:28 +08:00
jiaruifang 354c0f9047 polish code 2022-03-11 15:50:28 +08:00
jiaruifang 4d94cd513e adapting bert unit test interface 2022-03-11 15:50:28 +08:00
jiaruifang 7977422aeb add bert unit test; the sharded model is not able to pass the bert case 2022-03-11 15:50:28 +08:00
ver217 1388671699 [zero] Update sharded model v2 using sharded param v2 (#323) 2022-03-11 15:50:28 +08:00
jiaruifang 799d105bb4 using pytest parametrize 2022-03-11 15:50:28 +08:00
jiaruifang dec24561cf show pytest parametrize 2022-03-11 15:50:28 +08:00
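
`pytest.mark.parametrize` runs one test per argument combination, which is presumably what these two commits switch the suites to; a generic example (the parameter names are invented):

```python
import pytest

@pytest.mark.parametrize('enable_cpu_offload', [True, False])
@pytest.mark.parametrize('world_size', [1, 4])
def test_sharded_model(world_size, enable_cpu_offload):
    # pytest expands this into 4 test cases, one per combination
    assert world_size in (1, 4)
    assert isinstance(enable_cpu_offload, bool)
```
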
Jiarui Fang 11bddb6e55 [zero] update zero context init with the updated test utils (#327) 2022-03-11 15:50:28 +08:00
Frank Lee 6268446b81 [test] refactored testing components (#324) 2022-03-11 15:50:28 +08:00
Jiarui Fang de0468c7a8 [zero] zero init context (#321)
* add zero init context

* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2

* polish code
2022-03-11 15:50:28 +08:00
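
The core trick of a zero init context is to intercept parameter registration while a model is being built, so each parameter can be converted (to fp16, to a shard, to a chosen device) the moment it is created. A heavily simplified sketch that patches `register_parameter`; the real ZeroInitContext differs, and the fp16 cast stands in for the convert-to-ShardedParamV2 logic:

```python
import torch
import torch.nn as nn

class SimpleZeroInitContext:
    """While active, every parameter registered by any nn.Module is
    converted on the spot (here: cast to fp16 as a stand-in for the
    real sharding logic)."""
    def __enter__(self):
        self._orig_register = nn.Module.register_parameter

        def register(module, name, param):
            self._orig_register(module, name, param)
            if param is not None:
                module._parameters[name].data = \
                    module._parameters[name].data.half()

        nn.Module.register_parameter = register
        return self

    def __exit__(self, *exc):
        nn.Module.register_parameter = self._orig_register

# usage sketch:
# with SimpleZeroInitContext():
#     model = nn.Linear(1024, 1024)   # weights come out as fp16
```
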
1SAA 73bff11288 Added profiler communication operations
Fixed bug for learning rate scheduler
2022-03-11 15:50:28 +08:00
LuGY a3269de5c9 [zero] cpu adam kernel (#288)
* Added CPU Adam

* finished the cpu adam

* updated the license

* deleted useless parameters, removed resnet

* modified the method of the cpu adam unit test

* deleted some useless codes

* removed useless codes

Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-03-11 15:50:28 +08:00
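
The arithmetic such a kernel implements is the standard Adam update; a reference version in plain PyTorch (the real kernel fuses this into vectorized C++/AVX code):

```python
import torch

@torch.no_grad()
def cpu_adam_reference(p, g, exp_avg, exp_avg_sq, step,
                       lr=1e-3, beta1=0.9, beta2=0.999,
                       eps=1e-8, weight_decay=0.0):
    """One Adam step on CPU tensors, written out op by op."""
    if weight_decay != 0.0:
        g = g.add(p, alpha=weight_decay)
    exp_avg.mul_(beta1).add_(g, alpha=1 - beta1)             # m_t
    exp_avg_sq.mul_(beta2).addcmul_(g, g, value=1 - beta2)   # v_t
    bias_c1 = 1 - beta1 ** step
    bias_c2 = 1 - beta2 ** step
    denom = (exp_avg_sq / bias_c2).sqrt_().add_(eps)
    p.addcdiv_(exp_avg, denom, value=-lr / bias_c1)
```
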
Jiarui Fang 90d3aef62c [zero] yet an improved sharded param (#311) 2022-03-11 15:50:28 +08:00
Jiarui Fang c9e7d9582d [zero] polish shard strategy (#310)
* init shard param from shape tuple

* add more unit tests for shard param

* add set_payload method for ShardedParam

* [zero] add shareded tensor class

* polish code

* add shard strategy

* move shard and gather logic from the sharded tensor to the shard strategy.

* polish code
2022-03-11 15:50:28 +08:00
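
A shard strategy at tensor granularity boils down to two operations: keep 1/world_size of the flattened payload locally, and all-gather to reconstruct. A sketch assuming numel divides evenly by the world size (class and method names are illustrative):

```python
import torch
import torch.distributed as dist

class SimpleTensorShardStrategy:
    """shard() keeps this rank's contiguous slice; gather() all-gathers
    the slices and restores the original shape (illustrative only)."""
    def shard(self, t: torch.Tensor, group=None) -> torch.Tensor:
        world = dist.get_world_size(group)
        rank = dist.get_rank(group)
        assert t.numel() % world == 0, 'sketch assumes even divisibility'
        return t.reshape(-1).chunk(world)[rank].clone()

    def gather(self, shard: torch.Tensor, shape, group=None) -> torch.Tensor:
        world = dist.get_world_size(group)
        parts = [torch.empty_like(shard) for _ in range(world)]
        dist.all_gather(parts, shard, group=group)
        return torch.cat(parts).view(shape)
```
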
ver217 36f9a74ab2 fix sharded param hook and unit test 2022-03-11 15:50:28 +08:00
ver217 001ca624dd impl shard optim v2 and add unit test 2022-03-11 15:50:28 +08:00
Jiarui Fang 74f77e314b [zero] a shard strategy in granularity of tensor (#307) 2022-03-11 15:50:28 +08:00