Commit Graph

342 Commits (6302069c0ea41cdbf640cb902df18e5ccce52ecb)

Author SHA1 Message Date
LuGY 105c5301c3
[zero]added hybrid adam, removed loss scale in adam (#527)
* [zero]added hybrid adam, removed loss scale of adam

* remove useless code
2022-03-25 18:03:54 +08:00
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522) 2022-03-25 18:03:32 +08:00
Frank Lee 3601b2bad0
[test] fixed rerun_on_exception and adapted test cases (#487) 2022-03-25 17:25:12 +08:00
Jiarui Fang 4d322b79da
[refactor] remove old zero code (#517) 2022-03-25 14:54:39 +08:00
LuGY 6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497) 2022-03-25 14:15:53 +08:00
Jiarui Fang 920c5889a7
[zero] add colo move inline (#521) 2022-03-25 14:02:55 +08:00
ver217 7be397ca9c
[log] polish disable_existing_loggers (#519) 2022-03-25 12:30:55 +08:00
Jiarui Fang 0bebda6ea5
[zero] fix init device bug in zero init context unittest (#516) 2022-03-25 12:24:18 +08:00
fastalgo a513164379
Update README.md (#514) 2022-03-25 12:12:05 +08:00
Jiarui Fang 7ef3507ace
[zero] show model data cuda memory usage after zero context init. (#515) 2022-03-25 11:23:35 +08:00
ver217 a2e61d61d4
[zero] zero init ctx enable rm_torch_payload_on_the_fly (#512)
* enable rm_torch_payload_on_the_fly

* polish docstr
2022-03-24 23:44:00 +08:00
Jiarui Fang 81145208d1
[install] run without rich (#513) 2022-03-24 17:39:50 +08:00
HELSON 0f2d219162
[MOE] add MOEGPT model (#510) 2022-03-24 17:39:21 +08:00
Jiarui Fang bca0c49a9d
[zero] use colo model data api in optimv2 (#511) 2022-03-24 17:19:34 +08:00
Jiarui Fang 9330be0f3c
[memory] set cuda mem frac (#506) 2022-03-24 16:57:13 +08:00
Frank Lee 97933b6710
[devops] recover tsinghua pip source due to proxy issue (#509) 2022-03-24 16:11:49 +08:00
Jiarui Fang 0035b7be07
[memory] add model data tensor moving api (#503) 2022-03-24 14:29:41 +08:00
Frank Lee 65ad47c35c
[devops] remove tsinghua source for pip (#507) 2022-03-24 14:12:02 +08:00
Frank Lee 44f7bcb277
[devops] remove tsinghua source for pip (#505) 2022-03-24 14:03:05 +08:00
binmakeswell af56c1d024
fix discussion button in issue template (#504) 2022-03-24 12:25:00 +08:00
Jiarui Fang a445e118cf
[polish] polish singleton and global context (#500) 2022-03-23 18:03:39 +08:00
ver217 9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard (#495)
* sharded model supports reuse fp16 shard

* rename variable

* polish code

* polish code

* polish code
2022-03-23 14:59:59 +08:00
HELSON f24b5ed201
[MOE] remove old MoE legacy (#493) 2022-03-22 17:37:16 +08:00
ver217 c4c02424f3
[zero] sharded model manages ophooks individually (#492) 2022-03-22 17:33:20 +08:00
HELSON c9023d4078
[MOE] support PR-MOE (#488) 2022-03-22 16:48:22 +08:00
ver217 a9ecb4b244
[zero] polish sharded optimizer v2 (#490) 2022-03-22 15:53:48 +08:00
ver217 62b0a8d644
[zero] sharded optim support hybrid cpu adam (#486)
* sharded optim support hybrid cpu adam

* update unit test

* polish docstring
2022-03-22 14:56:59 +08:00
Jiarui Fang b334822163
[zero] polish sharded param name (#484)
* [zero] polish sharded param name

* polish code

* polish

* polish code

* polish

* polish

* polish
2022-03-22 14:36:16 +08:00
ver217 9caa8b6481
docs get correct release version (#489) 2022-03-22 14:24:41 +08:00
HELSON d7ea63992b
[MOE] add FP32LinearGate for MOE in NaiveAMP context (#480) 2022-03-22 10:50:20 +08:00
github-actions[bot] 353566c198
Automated submodule synchronization (#483)
Co-authored-by: github-actions <github-actions@github.com>
2022-03-22 09:34:26 +08:00
Jiarui Fang 65c0f380c2
[format] polish name format for MOE (#481) 2022-03-21 23:19:47 +08:00
ver217 8d3250d74b
[zero] ZeRO supports pipeline parallel (#477) 2022-03-21 16:55:37 +08:00
Sze-qq 7f5e4592eb
Update Experiment result about Colossal-AI with ZeRO (#479)
* [readme] add experimental visualisation regarding ColossalAI with ZeRO (#476)

* Hotfix/readme (#478)

* add experimental visualisation regarding ColossalAI with ZeRO

* adjust newly-added figure size
2022-03-21 16:34:07 +08:00
Frank Lee 83a847d058
[test] added rerun on exception for testing (#475)
* [test] added rerun on exception function

* polish code
2022-03-21 15:51:57 +08:00
ver217 d70f43dd7a
embedding remove attn mask (#474) 2022-03-21 14:53:23 +08:00
HELSON 7544347145
[MOE] add unit tests for MOE experts layout, gradient handler and kernel (#469) 2022-03-21 13:35:04 +08:00
ver217 1559c0df41
fix attn mask shape of gpt (#472) 2022-03-21 12:01:31 +08:00
ver217 3cb3fc275e
zero init ctx receives a dp process group (#471) 2022-03-21 11:18:55 +08:00
ver217 7e30068a22
[doc] update rst (#470)
* update rst

* remove empty rst
2022-03-21 10:52:45 +08:00
HELSON aff9d354f7
[MOE] polish moe_env (#467) 2022-03-19 15:36:25 +08:00
HELSON bccbc15861
[MOE] changed parallelmode to dist process group (#460) 2022-03-19 13:46:29 +08:00
Frank Lee 8f9617c313
[release] update version (#465) 2022-03-18 19:26:07 +08:00
Frank Lee 2963565ff8
[test] fixed release workflow step (#464) 2022-03-18 19:17:13 +08:00
Frank Lee 292590e0fa
[test] fixed release workflow condition (#463) 2022-03-18 17:42:33 +08:00
Frank Lee 90bd97b9c0
[devops] fixed workflow bug (#462) 2022-03-18 17:26:24 +08:00
ver217 304263c2ce
fix gpt attention mask (#461) 2022-03-18 17:24:19 +08:00
ver217 fc8e6db005
[doc] Update docstring for ZeRO (#459)
* polish sharded model docstr

* polish sharded optim docstr

* polish zero docstr

* polish shard strategy docstr
2022-03-18 16:48:20 +08:00
HELSON 84fd7c1d4d
add moe context, moe utilities and refactor gradient handler (#455) 2022-03-18 16:38:32 +08:00
Frank Lee af185b5519
[test] fixed amp convergence comparison test (#454) 2022-03-18 16:28:16 +08:00