ver217  fb841dd5c5  [zero] optimize grad offload (#539)  3 years ago
    * optimize grad offload
    * polish code
    * polish code
Jiarui Fang  7d81b5b46e  [logging] polish logger format (#543)  3 years ago
ver217  1f90a3b129  [zero] polish ZeroInitContext (#540)  3 years ago
Jiarui Fang  c11ff81b15  [zero] get memory usage of sharded optim v2. (#542)  3 years ago
HELSON  a30e2b4c24  [zero] adapt for no-leaf module in zero (#535)  3 years ago
    only process module's own parameters in Zero context
    add zero hooks for all modules that contain parameters
    gather parameters only belonging to module itself
Jiarui Fang  705f56107c  [zero] refactor model data tracing (#537)  3 years ago
Jiarui Fang  a590ed0ba3  [zero] improve the accuracy of get_memory_usage of sharded param (#538)  3 years ago
Jiarui Fang  37cb70feec  [zero] get memory usage for sharded param (#536)  3 years ago
ver217  56ad945797  update version (#533)  3 years ago
ver217  ffca99d187  [doc] update apidoc (#530)  3 years ago
Jiarui Fang  05e33b2578  [zero] fix grad offload (#528)  3 years ago
    * [zero] fix grad offload
    * polish code
LuGY  105c5301c3  [zero] added hybrid adam, removed loss scale in adam (#527)  3 years ago
    * [zero] added hybrid adam, removed loss scale of adam
    * remove useless code
Jiarui Fang  8d8c5407c0  [zero] refactor model data tracing (#522)  3 years ago
Frank Lee  3601b2bad0  [test] fixed rerun_on_exception and adapted test cases (#487)  3 years ago
Jiarui Fang  4d322b79da  [refactor] remove old zero code (#517)  3 years ago
LuGY  6a3f9fda83  [cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497)  3 years ago
Jiarui Fang  920c5889a7  [zero] add colo move inline (#521)  3 years ago
ver217  7be397ca9c  [log] polish disable_existing_loggers (#519)  3 years ago
Jiarui Fang  0bebda6ea5  [zero] fix init device bug in zero init context unittest (#516)  3 years ago
fastalgo  a513164379  Update README.md (#514)  3 years ago
Jiarui Fang  7ef3507ace  [zero] show model data cuda memory usage after zero context init. (#515)  3 years ago
ver217  a2e61d61d4  [zero] zero init ctx enable rm_torch_payload_on_the_fly (#512)  3 years ago
    * enable rm_torch_payload_on_the_fly
    * polish docstr
Jiarui Fang  81145208d1  [install] run without rich (#513)  3 years ago
HELSON  0f2d219162  [MOE] add MOEGPT model (#510)  3 years ago
Jiarui Fang  bca0c49a9d  [zero] use colo model data api in optimv2 (#511)  3 years ago
Jiarui Fang  9330be0f3c  [memory] set cuda mem frac (#506)  3 years ago
Frank Lee  97933b6710  [devops] recover tsinghua pip source due to proxy issue (#509)  3 years ago
Jiarui Fang  0035b7be07  [memory] add model data tensor moving api (#503)  3 years ago
Frank Lee  65ad47c35c  [devops] remove tsinghua source for pip (#507)  3 years ago
Frank Lee  44f7bcb277  [devops] remove tsinghua source for pip (#505)  3 years ago
binmakeswell  af56c1d024  fix discussion button in issue template (#504)  3 years ago
Jiarui Fang  a445e118cf  [polish] polish singleton and global context (#500)  3 years ago
ver217  9ec1ce6ab1  [zero] sharded model support the reuse of fp16 shard (#495)  3 years ago
    * sharded model supports reuse of fp16 shard
    * rename variable
    * polish code
    * polish code
    * polish code
HELSON  f24b5ed201  [MOE] remove old MoE legacy (#493)  3 years ago
ver217  c4c02424f3  [zero] sharded model manages ophooks individually (#492)  3 years ago
HELSON  c9023d4078  [MOE] support PR-MOE (#488)  3 years ago
ver217  a9ecb4b244  [zero] polish sharded optimizer v2 (#490)  3 years ago
ver217  62b0a8d644  [zero] sharded optim support hybrid cpu adam (#486)  3 years ago
    * sharded optim support hybrid cpu adam
    * update unit test
    * polish docstring
Jiarui Fang  b334822163  [zero] polish sharded param name (#484)  3 years ago
    * [zero] polish sharded param name
    * polish code
    * polish
    * polish code
    * polish
    * polish
    * polish
ver217  9caa8b6481  docs get correct release version (#489)  3 years ago
HELSON  d7ea63992b  [MOE] add FP32LinearGate for MOE in NaiveAMP context (#480)  3 years ago
github-actions[bot]  353566c198  Automated submodule synchronization (#483)  3 years ago
    Co-authored-by: github-actions <github-actions@github.com>
Jiarui Fang  65c0f380c2  [format] polish name format for MOE (#481)  3 years ago
ver217  8d3250d74b  [zero] ZeRO supports pipeline parallel (#477)  3 years ago
Sze-qq  7f5e4592eb  Update Experiment result about Colossal-AI with ZeRO (#479)  3 years ago
    * [readme] add experimental visualisation regarding ColossalAI with ZeRO (#476)
    * Hotfix/readme (#478)
    * add experimental visualisation regarding ColossalAI with ZeRO
    * adjust newly-added figure size
Frank Lee  83a847d058  [test] added rerun on exception for testing (#475)  3 years ago
    * [test] added rerun on exception function
    * polish code
ver217  d70f43dd7a  embedding remove attn mask (#474)  3 years ago
HELSON  7544347145  [MOE] add unit test for MOE experts layout, gradient handler and kernel (#469)  3 years ago
ver217  1559c0df41  fix attn mask shape of gpt (#472)  3 years ago
ver217  3cb3fc275e  zero init ctx receives a dp process group (#471)  3 years ago