Ziyue Jiang
8e6fdb4f29
[tensor] fix test_linear ( #826 )
3 years ago
Ziyue Jiang
1a9e2c2dff
[tensor] fix kwargs in colo_tensor torch_function ( #825 )
3 years ago
Jiarui Fang
2ecc3d7a55
[tensor] lazy init ( #823 )
3 years ago
Jiarui Fang
660d2d1f1b
[Tensor] apply ColoTensor on Torch functions ( #821 )
...
* Revert "[zero] add ZeroTensorShardStrategy (#793)"
This reverts commit 88759e289e.
* [gemini] set cpu memory capacity
* [log] local throughput collecting
* polish
* polish
* polish
* polish code
* polish
* polish code
* add a new tensor structure and override linear for it
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* [tensor] renaming and reorganize directory structure.
* rm useless dir
* polish
* polish
* [tensor] handle the function not wrapped
3 years ago
Jiarui Fang
0ce8924ceb
[tensor] reorganize files ( #820 )
3 years ago
Jiarui Fang
ab962b9735
[gemini] a new tensor structure ( #818 )
...
* Revert "[zero] add ZeroTensorShardStrategy (#793)"
This reverts commit 88759e289e.
* [gemini] set cpu memory capacity
* [log] local throughput collecting
* polish
* polish
* polish
* polish code
* polish
* polish code
* add a new tensor structure and override linear for it
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
3 years ago
Jiarui Fang
e761ad2cd7
Revert "[zero] add ZeroTensorShardStrategy ( #793 )" ( #806 )
3 years ago
HELSON
88759e289e
[zero] add ZeroTensorShardStrategy ( #793 )
3 years ago
Jiarui Fang
681addb512
[refactor] moving grad acc logic to engine ( #804 )
3 years ago
Jiarui Fang
4d9332b4c5
[refactor] moving memtracer to gemini ( #801 )
3 years ago
HELSON
4c4388c46e
[hotfix] fix memory leak in zero ( #781 )
3 years ago
Frank Lee
5a1a095b92
[test] refactored with the new rerun decorator ( #763 )
...
* [test] refactored with the new rerun decorator
* polish test case
3 years ago
Jiarui Fang
10ef8afdd2
[gemini] init gemini individual directory ( #754 )
3 years ago
ver217
dcca614eee
[hotfix] fix test_stateful_tensor_mgr ( #762 )
3 years ago
ver217
a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model ( #756 )
...
* fix reuse_fp16_shard
* disable test stm
* polish code
3 years ago
HELSON
84c6700b2a
[zero] refactor memstats_collector ( #746 )
3 years ago
ver217
e396bb71f2
[zero] add tensor placement policies ( #743 )
...
* add tensor placement policies
* polish comments
* polish comments
* update moe unit tests
3 years ago
HELSON
22c4b88d56
[zero] refactor ShardedParamV2 for convenience ( #742 )
3 years ago
Frank Lee
f4f42d4c3c
[bug] fixed DDP compatibility with torch 1.8 ( #739 )
3 years ago
Jiarui Fang
53cb584808
[utils] correct cpu memory used and capacity in the context of multi-process ( #726 )
3 years ago
HELSON
b9b469ea50
[moe] add checkpoint for moe zero test ( #729 )
3 years ago
FrankLeeeee
e88a498c9c
[test] removed trivial outdated test
3 years ago
FrankLeeeee
62b4ce7326
[test] added missing decorators to model checkpointing tests
3 years ago
Jiarui Fang
4d90a7b513
[refactor] zero directory ( #724 )
3 years ago
Frank Lee
20ab1f5520
[bug] fixed broken test_found_inf ( #725 )
3 years ago
Jiarui Fang
193dc8dacb
[refactor] refactor the memory utils ( #715 )
3 years ago
HELSON
dbd96fe90a
[zero] check whether gradients have inf and nan in gpu ( #712 )
3 years ago
HELSON
a9b8300d54
[zero] improve adaptability for not-shard parameters ( #708 )
...
* adapt post grad hooks for not-shard parameters
* adapt optimizer for not-shard parameters
* offload gradients for not-replicated parameters
3 years ago
ver217
ab8c6b4a0e
[zero] refactor memstats collector ( #706 )
...
* refactor memstats collector
* fix disposable
* polish code
3 years ago
HELSON
ee112fe1da
[zero] adapt zero hooks for unsharded module ( #699 )
3 years ago
ver217
3c9cd5bb5e
[zero] stateful tensor manager ( #687 )
...
* [WIP] stateful tensor manager
* add eviction strategy
* polish code
* polish code
* polish comment
* add unit test
* fix sampler bug
* polish code
* fix max sampling cnt resetting bug
* fix sampler bug
* polish code
* fix bug
* fix unit test
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
3 years ago
HELSON
d7ecaf362b
[zero] fix init bugs in zero context ( #686 )
...
* adapt model weight initialization for methods in Pytorch nn.init
3 years ago
Jiarui Fang
0aab52301e
[hotfix] fix a bug in model data stats tracing ( #655 )
3 years ago
YuliangLiu0306
ade05a5d83
[refactor] pipeline, put runtime schedule into engine. ( #627 )
3 years ago
HELSON
e5d615aeee
[hotfix] fix bugs in testing ( #659 )
...
* remove hybrid adam in test_moe_zero_optim
* fix activation checkpointing and its unit test
3 years ago
HELSON
b31daed4cf
fix bugs in CPU adam ( #633 )
...
* add cpu adam counter for all cpu adam
* fixed updating error in adam kernel
3 years ago
HELSON
055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) ( #601 )
3 years ago
アマデウス
354b7954d1
[model checkpoint] added unit tests for checkpoint save/load ( #599 )
3 years ago
FredHuang99
93f14d2a33
[zero] test zero tensor utils ( #609 )
3 years ago
Jiarui Fang
e956d93ac2
[refactor] memory utils ( #577 )
3 years ago
HELSON
e6d50ec107
[zero] adapt zero for unsharded parameters ( #561 )
...
* support existing sharded and unsharded parameters in zero
* add unit test for moe-zero model init
* polish moe gradient handler
3 years ago
ver217
7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param ( #571 )
3 years ago
Jiarui Fang
7675366fce
[polish] rename col_attr -> colo_attr ( #558 )
3 years ago
ver217
014bac0c49
[zero] hijack p.grad in sharded model ( #554 )
...
* hijack p.grad in sharded model
* polish comments
* polish comments
3 years ago
Jiarui Fang
f552b11294
[zero] label state for param fp16 and grad ( #551 )
3 years ago
Jiarui Fang
214da761d4
[zero] add stateful tensor ( #549 )
3 years ago
HELSON
8c90d4df54
[zero] add zero context manager to change config during initialization ( #546 )
3 years ago
Liang Bowen
ec5086c49c
Refactored docstrings to Google style
3 years ago
Jiarui Fang
53b1b6e340
[zero] non model data tracing ( #545 )
3 years ago
ver217
1f90a3b129
[zero] polish ZeroInitContext ( #540 )
3 years ago
Jiarui Fang
c11ff81b15
[zero] get memory usage of sharded optim v2. ( #542 )
3 years ago
HELSON
a30e2b4c24
[zero] adapt for no-leaf module in zero ( #535 )
...
only process module's own parameters in Zero context
add zero hooks for all modules that contain parameters
gather parameters only belonging to module itself
3 years ago
Jiarui Fang
705f56107c
[zero] refactor model data tracing ( #537 )
3 years ago
Jiarui Fang
a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param ( #538 )
3 years ago
Jiarui Fang
37cb70feec
[zero] get memory usage for sharded param ( #536 )
3 years ago
LuGY
105c5301c3
[zero] added hybrid adam, removed loss scale in adam ( #527 )
...
* [zero] added hybrid adam, removed loss scale of adam
* remove useless code
3 years ago
Jiarui Fang
8d8c5407c0
[zero] refactor model data tracing ( #522 )
3 years ago
Frank Lee
3601b2bad0
[test] fixed rerun_on_exception and adapted test cases ( #487 )
3 years ago
Jiarui Fang
4d322b79da
[refactor] remove old zero code ( #517 )
3 years ago
LuGY
6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 ( #497 )
3 years ago
Jiarui Fang
920c5889a7
[zero] add colo move inline ( #521 )
3 years ago
Jiarui Fang
0bebda6ea5
[zero] fix init device bug in zero init context unittest ( #516 )
3 years ago
Jiarui Fang
7ef3507ace
[zero] show model data cuda memory usage after zero context init. ( #515 )
3 years ago
Jiarui Fang
9330be0f3c
[memory] set cuda mem frac ( #506 )
3 years ago
Jiarui Fang
0035b7be07
[memory] add model data tensor moving api ( #503 )
3 years ago
Jiarui Fang
a445e118cf
[polish] polish singleton and global context ( #500 )
3 years ago
ver217
9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard ( #495 )
...
* sharded model supports reuse fp16 shard
* rename variable
* polish code
* polish code
* polish code
3 years ago
ver217
62b0a8d644
[zero] sharded optim support hybrid cpu adam ( #486 )
...
* sharded optim support hybrid cpu adam
* update unit test
* polish docstring
3 years ago
Jiarui Fang
b334822163
[zero] polish sharded param name ( #484 )
...
* [zero] polish sharded param name
* polish code
* polish
* polish code
* polish
* polish
* polish
3 years ago
Jiarui Fang
65c0f380c2
[format] polish name format for MOE ( #481 )
3 years ago
HELSON
7544347145
[MOE] add unit test for MOE experts layout, gradient handler and kernel ( #469 )
3 years ago
HELSON
84fd7c1d4d
add moe context, moe utilities and refactor gradient handler ( #455 )
3 years ago
Frank Lee
af185b5519
[test] fixed amp convergence comparison test ( #454 )
3 years ago
ver217
a241f61b34
[zero] Update initialize for ZeRO ( #458 )
...
* polish code
* shard strategy receive pg in shard() / gather()
* update zero engine
* polish code
3 years ago
ver217
642846d6f9
update sharded optim and fix zero init ctx ( #457 )
3 years ago
Jiarui Fang
e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" ( #456 )
...
* Revert "polish code"
This reverts commit 8cf7ff08cf.
* Revert "rename variables"
This reverts commit e99af94ab8.
* Revert "remove surplus imports"
This reverts commit 46add4a5c5.
* Revert "update sharded optim and fix zero init ctx"
This reverts commit 57567ee768.
3 years ago
ver217
8cf7ff08cf
polish code
3 years ago
ver217
46add4a5c5
remove surplus imports
3 years ago
ver217
57567ee768
update sharded optim and fix zero init ctx
3 years ago
Frank Lee
f27d801a13
[test] optimized zero data parallel test ( #452 )
3 years ago
Jiarui Fang
0fcfb1e00d
[test] make zero engine test really work ( #447 )
3 years ago
Frank Lee
bb2790cf0b
optimize engine and trainer test ( #448 )
3 years ago
Frank Lee
b72b8445c6
optimized context test time consumption ( #446 )
3 years ago
Jiarui Fang
496cbb0760
[hotfix] fix initialize bug with zero ( #442 )
3 years ago
Jiarui Fang
17b8274f8a
[unittest] polish zero config in unittest ( #438 )
3 years ago
Jiarui Fang
640a6cd304
[refactor] refactor the initialize method for new zero design ( #431 )
3 years ago
ver217
fce9432f08
sync before creating empty grad
3 years ago
Jiarui Fang
f9c762df85
[test] merge zero optim tests ( #428 )
3 years ago
Jiarui Fang
5d7dc3525b
[hotfix] run cpu adam unittest in pytest ( #424 )
3 years ago
Jiarui Fang
adebb3e041
[zero] cuda margin space for OS ( #418 )
3 years ago
Jiarui Fang
56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER ( #417 )
3 years ago
Jiarui Fang
23ba3fc450
[zero] refactor ShardedOptimV2 init method ( #416 )
3 years ago
Frank Lee
e79ea44247
[fp16] refactored fp16 optimizer ( #392 )
3 years ago
Jiarui Fang
21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system ( #395 )
3 years ago
Jiarui Fang
a37bf1bc42
[hotfix] rm test_tensor_detector.py ( #413 )
3 years ago
Jiarui Fang
370f567e7d
[zero] new interface for ShardedOptimV2 ( #406 )
3 years ago
LuGY
a9c27be42e
Added tensor detector ( #393 )
...
* Added tensor detector
* Added the - states
* Allowed change include_cpu when detect()
3 years ago
ver217
54fd37f0e0
polish unit test
3 years ago
Frank Lee
1e4bf85cdb
fixed bug in activation checkpointing test ( #387 )
3 years ago
Jiarui Fang
3af13a2c3e
[zero] polish ShardedOptimV2 unittest ( #385 )
...
* place params on cpu after zero init context
* polish code
* bucketed cpu gpu tensor transfer
* find a bug in sharded optim unittest
* add offload unittest for ShardedOptimV2.
* polish code and make it more robust
3 years ago