LuGY | 105c5301c3 | [zero]added hybrid adam, removed loss scale in adam (#527) | 2022-03-25 18:03:54 +08:00
    * [zero]added hybrid adam, removed loss scale of adam
    * remove useless code
Jiarui Fang | 8d8c5407c0 | [zero] refactor model data tracing (#522) | 2022-03-25 18:03:32 +08:00
Frank Lee | 3601b2bad0 | [test] fixed rerun_on_exception and adapted test cases (#487) | 2022-03-25 17:25:12 +08:00
Jiarui Fang | 4d322b79da | [refactor] remove old zero code (#517) | 2022-03-25 14:54:39 +08:00
LuGY | 6a3f9fda83 | [cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497) | 2022-03-25 14:15:53 +08:00
Jiarui Fang | 920c5889a7 | [zero] add colo move inline (#521) | 2022-03-25 14:02:55 +08:00
ver217 | 7be397ca9c | [log] polish disable_existing_loggers (#519) | 2022-03-25 12:30:55 +08:00
Jiarui Fang | 0bebda6ea5 | [zero] fix init device bug in zero init context unittest (#516) | 2022-03-25 12:24:18 +08:00
Jiarui Fang | 7ef3507ace | [zero] show model data cuda memory usage after zero context init. (#515) | 2022-03-25 11:23:35 +08:00
ver217 | a2e61d61d4 | [zero] zero init ctx enable rm_torch_payload_on_the_fly (#512) | 2022-03-24 23:44:00 +08:00
    * enable rm_torch_payload_on_the_fly
    * polish docstr
Jiarui Fang | 81145208d1 | [install] run with out rich (#513) | 2022-03-24 17:39:50 +08:00
Jiarui Fang | bca0c49a9d | [zero] use colo model data api in optimv2 (#511) | 2022-03-24 17:19:34 +08:00
Jiarui Fang | 9330be0f3c | [memory] set cuda mem frac (#506) | 2022-03-24 16:57:13 +08:00
Jiarui Fang | 0035b7be07 | [memory] add model data tensor moving api (#503) | 2022-03-24 14:29:41 +08:00
Jiarui Fang | a445e118cf | [polish] polish singleton and global context (#500) | 2022-03-23 18:03:39 +08:00
ver217 | 9ec1ce6ab1 | [zero] sharded model support the reuse of fp16 shard (#495) | 2022-03-23 14:59:59 +08:00
    * sharded model supports reuse fp16 shard
    * rename variable
    * polish code
HELSON | f24b5ed201 | [MOE] remove old MoE legacy (#493) | 2022-03-22 17:37:16 +08:00
ver217 | c4c02424f3 | [zero] sharded model manages ophooks individually (#492) | 2022-03-22 17:33:20 +08:00
HELSON | c9023d4078 | [MOE] support PR-MOE (#488) | 2022-03-22 16:48:22 +08:00
ver217 | a9ecb4b244 | [zero] polish sharded optimizer v2 (#490) | 2022-03-22 15:53:48 +08:00
ver217 | 62b0a8d644 | [zero] sharded optim support hybrid cpu adam (#486) | 2022-03-22 14:56:59 +08:00
    * sharded optim support hybrid cpu adam
    * update unit test
    * polish docstring
Jiarui Fang | b334822163 | [zero] polish sharded param name (#484) | 2022-03-22 14:36:16 +08:00
    * [zero] polish sharded param name
    * polish code
HELSON | d7ea63992b | [MOE] add FP32LinearGate for MOE in NaiveAMP context (#480) | 2022-03-22 10:50:20 +08:00
Jiarui Fang | 65c0f380c2 | [format] polish name format for MOE (#481) | 2022-03-21 23:19:47 +08:00
ver217 | 8d3250d74b | [zero] ZeRO supports pipeline parallel (#477) | 2022-03-21 16:55:37 +08:00
Frank Lee | 83a847d058 | [test] added rerun on exception for testing (#475) | 2022-03-21 15:51:57 +08:00
    * [test] added rerun on exception function
    * polish code
HELSON | 7544347145 | [MOE] add unitest for MOE experts layout, gradient handler and kernel (#469) | 2022-03-21 13:35:04 +08:00
ver217 | 3cb3fc275e | zero init ctx receives a dp process group (#471) | 2022-03-21 11:18:55 +08:00
HELSON | aff9d354f7 | [MOE] polish moe_env (#467) | 2022-03-19 15:36:25 +08:00
HELSON | bccbc15861 | [MOE] changed parallelmode to dist process group (#460) | 2022-03-19 13:46:29 +08:00
ver217 | fc8e6db005 | [doc] Update docstring for ZeRO (#459) | 2022-03-18 16:48:20 +08:00
    * polish sharded model docstr
    * polish sharded optim docstr
    * polish zero docstr
    * polish shard strategy docstr
HELSON | 84fd7c1d4d | add moe context, moe utilities and refactor gradient handler (#455) | 2022-03-18 16:38:32 +08:00
ver217 | a241f61b34 | [zero] Update initialize for ZeRO (#458) | 2022-03-18 16:18:31 +08:00
    * polish code
    * shard strategy receive pg in shard() / gather()
    * update zero engine
    * polish code
ver217 | 642846d6f9 | update sharded optim and fix zero init ctx (#457) | 2022-03-18 15:44:47 +08:00
Jiarui Fang | e2e9f82588 | Revert "[zero] update sharded optim and fix zero init ctx" (#456) | 2022-03-18 15:22:43 +08:00
    * Revert "polish code" (reverts commit 8cf7ff08cf)
    * Revert "rename variables" (reverts commit e99af94ab8)
    * Revert "remove surplus imports" (reverts commit 46add4a5c5)
    * Revert "update sharded optim and fix zero init ctx" (reverts commit 57567ee768)
ver217 | e99af94ab8 | rename variables | 2022-03-18 14:25:25 +08:00
ver217 | 57567ee768 | update sharded optim and fix zero init ctx | 2022-03-18 14:25:25 +08:00
Jiarui Fang | 0fcfb1e00d | [test] make zero engine test really work (#447) | 2022-03-17 17:24:25 +08:00
Jiarui Fang | 237d08e7ee | [zero] hybrid cpu adam (#445) | 2022-03-17 15:05:41 +08:00
Frank Lee | b72b8445c6 | optimized context test time consumption (#446) | 2022-03-17 14:40:52 +08:00
Jiarui Fang | 496cbb0760 | [hotfix] fix initialize bug with zero (#442) | 2022-03-17 13:16:22 +08:00
Jiarui Fang | 640a6cd304 | [refactory] refactory the initialize method for new zero design (#431) | 2022-03-16 19:29:37 +08:00
Frank Lee | bffd85bf34 | added testing module (#435) | 2022-03-16 17:20:05 +08:00
HELSON | dbdc9a7783 | added Multiply Jitter and capacity factor eval for MOE (#434) | 2022-03-16 16:47:44 +08:00
Frank Lee | b03b3ae99c | fixed mem monitor device (#433) | 2022-03-16 15:25:02 +08:00
Frank Lee | 14a7094243 | fixed fp16 optimizer none grad bug (#432) | 2022-03-16 14:35:46 +08:00
ver217 | fce9432f08 | sync before creating empty grad | 2022-03-16 14:24:09 +08:00
ver217 | ea6905a898 | free param.grad | 2022-03-16 14:24:09 +08:00
ver217 | 9506a8beb2 | use double buffer to handle grad | 2022-03-16 14:24:09 +08:00
Jiarui Fang | 54229cd33e | [log] better logging display with rich (#426) | 2022-03-16 09:51:15 +08:00
    * better logger using rich
    * remove deepspeed in zero requirements