ver217
8d3250d74b
[zero] ZeRO supports pipeline parallelism ( #477 )
3 years ago
Frank Lee
83a847d058
[test] added rerun on exception for testing ( #475 )
...
* [test] added rerun on exception function
* polish code
3 years ago
HELSON
7544347145
[MOE] add unit tests for MOE experts layout, gradient handler and kernel ( #469 )
3 years ago
ver217
3cb3fc275e
zero init ctx receives a dp process group ( #471 )
3 years ago
HELSON
aff9d354f7
[MOE] polish moe_env ( #467 )
3 years ago
HELSON
bccbc15861
[MOE] changed ParallelMode to dist process group ( #460 )
3 years ago
ver217
fc8e6db005
[doc] Update docstring for ZeRO ( #459 )
...
* polish sharded model docstr
* polish sharded optim docstr
* polish zero docstr
* polish shard strategy docstr
3 years ago
HELSON
84fd7c1d4d
add moe context, moe utilities and refactor gradient handler ( #455 )
3 years ago
ver217
a241f61b34
[zero] Update initialize for ZeRO ( #458 )
...
* polish code
* shard strategy receive pg in shard() / gather()
* update zero engine
* polish code
3 years ago
ver217
642846d6f9
update sharded optim and fix zero init ctx ( #457 )
3 years ago
Jiarui Fang
e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" ( #456 )
...
* Revert "polish code"
This reverts commit 8cf7ff08cf
.
* Revert "rename variables"
This reverts commit e99af94ab8
.
* Revert "remove surplus imports"
This reverts commit 46add4a5c5
.
* Revert "update sharded optim and fix zero init ctx"
This reverts commit 57567ee768
.
3 years ago
ver217
e99af94ab8
rename variables
3 years ago
ver217
57567ee768
update sharded optim and fix zero init ctx
3 years ago
Jiarui Fang
0fcfb1e00d
[test] make zero engine test really work ( #447 )
3 years ago
Jiarui Fang
237d08e7ee
[zero] hybrid cpu adam ( #445 )
3 years ago
Frank Lee
b72b8445c6
optimized context test time consumption ( #446 )
3 years ago
Jiarui Fang
496cbb0760
[hotfix] fix initialize bug with zero ( #442 )
3 years ago
Jiarui Fang
640a6cd304
[refactor] refactor the initialize method for the new zero design ( #431 )
3 years ago
Frank Lee
bffd85bf34
added testing module ( #435 )
3 years ago
HELSON
dbdc9a7783
added multiplicative jitter and capacity factor evaluation for MOE ( #434 )
3 years ago
Frank Lee
b03b3ae99c
fixed mem monitor device ( #433 )
3 years ago
Frank Lee
14a7094243
fixed fp16 optimizer none grad bug ( #432 )
3 years ago
ver217
fce9432f08
sync before creating empty grad
3 years ago
ver217
ea6905a898
free param.grad
3 years ago
ver217
9506a8beb2
use double buffer to handle grad
3 years ago
Jiarui Fang
54229cd33e
[log] better logging display with rich ( #426 )
...
* better logger using rich
* remove deepspeed in zero requirements
3 years ago
HELSON
3f70a2b12f
removed noisy function during evaluation of MoE router ( #419 )
3 years ago
Jiarui Fang
adebb3e041
[zero] cuda margin space for OS ( #418 )
3 years ago
Jiarui Fang
56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER ( #417 )
3 years ago
Jiarui Fang
23ba3fc450
[zero] refactor ShardedOptimV2 init method ( #416 )
3 years ago
Frank Lee
e79ea44247
[fp16] refactored fp16 optimizer ( #392 )
3 years ago
Jiarui Fang
21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system ( #395 )
3 years ago
Jiarui Fang
370f567e7d
[zero] new interface for ShardedOptimv2 ( #406 )
3 years ago
LuGY
a9c27be42e
Added tensor detector ( #393 )
...
* Added tensor detector
* Added the - states
* Allowed changing include_cpu when calling detect()
3 years ago
1SAA
907ac4a2dc
fixed error when there is no collective communication in CommProfiler
3 years ago
Frank Lee
2fe68b359a
Merge pull request #403 from ver217/feature/shard-strategy
...
[zero] Add bucket tensor shard strategy
3 years ago
HELSON
dfd0363f68
polished output format for communication profiler and pcie profiler ( #404 )
...
fixed typing error
3 years ago
ver217
63469c0f91
polish code
3 years ago
ver217
88804aee49
add bucket tensor shard strategy
3 years ago
HELSON
7c079d9c33
[hotfix] fixed bugs in ShardStrategy and PcieProfiler ( #394 )
3 years ago
Frank Lee
1e4bf85cdb
fixed bug in activation checkpointing test ( #387 )
3 years ago
Jiarui Fang
3af13a2c3e
[zero] polish ShardedOptimV2 unittest ( #385 )
...
* place params on cpu after zero init context
* polish code
* bucketized CPU-GPU tensor transfer
* find a bug in sharded optim unittest
* add offload unittest for ShardedOptimV2.
* polish code and make it more robust
3 years ago
Jiang Zhuo
5a4a3b77d9
fix format ( #376 )
3 years ago
LuGY
de46450461
Added activation offload ( #331 )
...
* Added activation offload
* Fixed the import bug and used pytest
3 years ago
Jiarui Fang
272ebfb57d
[bug] shard param when initializing the ShardedModelV2 ( #381 )
3 years ago
HELSON
8c18eb0998
[profiler] Fixed bugs in CommProfiler and PcieProfiler ( #377 )
3 years ago
Jiarui Fang
b5f43acee3
[zero] find missing code ( #378 )
3 years ago
Jiarui Fang
6b6002962a
[zero] zero init context collects numel of model ( #375 )
3 years ago
HELSON
1ed7c24c02
Added PCIe profiler to detect data transmission ( #373 )
3 years ago
jiaruifang
d9217e1960
Revert "[zero] bucketized tensor cpu gpu copy ( #368 )"
...
This reverts commit bef05489b6.
3 years ago