Commit Graph

2755 Commits (50e5602c2d6c8e25ad544cbecc38649e5257e7b8)

Author SHA1 Message Date
Jiarui Fang 53b1b6e340
[zero] non model data tracing (#545) 2022-03-29 15:45:48 +08:00
Jie Zhu 73d36618a6
[profiler] add MemProfiler (#356)
* add memory trainer hook

* fix bug

* add memory trainer hook

* fix import bug

* fix import bug

* add trainer hook

* fix #370 git log bug

* modify `to_tensorboard` function to support better output

* remove useless output

* change the name of `MemProfiler`

* complete memory profiler

* replace error with warning

* finish trainer hook

* modify interface of MemProfiler

* modify `__init__.py` in profiler

* remove unnecessary pass statement

* add usage to doc string

* add usage to trainer hook

* new location to store temp data file
2022-03-29 12:48:34 +08:00
ver217 fb841dd5c5
[zero] optimize grad offload (#539)
* optimize grad offload

* polish code

* polish code
2022-03-29 12:48:00 +08:00
Jiarui Fang 7d81b5b46e
[logging] polish logger format (#543) 2022-03-29 10:37:11 +08:00
ver217 1f90a3b129
[zero] polish ZeroInitContext (#540) 2022-03-29 09:09:04 +08:00
Jiarui Fang c11ff81b15
[zero] get memory usage of sharded optim v2. (#542) 2022-03-29 09:08:18 +08:00
HELSON a30e2b4c24
[zero] adapt for no-leaf module in zero (#535)
only process module's own parameters in Zero context

add zero hooks for all modules that contain parameters

gather parameters only belonging to module itself
2022-03-28 17:42:18 +08:00
Jiarui Fang 705f56107c
[zero] refactor model data tracing (#537) 2022-03-28 16:38:18 +08:00
Jiarui Fang a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param (#538) 2022-03-28 16:19:19 +08:00
Jiarui Fang 37cb70feec
[zero] get memory usage for sharded param (#536) 2022-03-28 15:01:21 +08:00
ver217 56ad945797
update version (#533) 2022-03-26 12:34:28 +08:00
ver217 ffca99d187
[doc] update apidoc (#530) 2022-03-25 18:29:43 +08:00
Jiarui Fang 05e33b2578
[zero] fix grad offload (#528)
* [zero] fix grad offload

* polish code
2022-03-25 18:23:25 +08:00
LuGY 105c5301c3
[zero] added hybrid adam, removed loss scale in adam (#527)
* [zero] added hybrid adam, removed loss scale of adam

* remove useless code
2022-03-25 18:03:54 +08:00
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522) 2022-03-25 18:03:32 +08:00
Frank Lee 3601b2bad0
[test] fixed rerun_on_exception and adapted test cases (#487) 2022-03-25 17:25:12 +08:00
Jiarui Fang 4d322b79da
[refactor] remove old zero code (#517) 2022-03-25 14:54:39 +08:00
LuGY 6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497) 2022-03-25 14:15:53 +08:00
Jiarui Fang 920c5889a7
[zero] add colo move inline (#521) 2022-03-25 14:02:55 +08:00
ver217 7be397ca9c
[log] polish disable_existing_loggers (#519) 2022-03-25 12:30:55 +08:00
Jiarui Fang 0bebda6ea5
[zero] fix init device bug in zero init context unittest (#516) 2022-03-25 12:24:18 +08:00
fastalgo a513164379
Update README.md (#514) 2022-03-25 12:12:05 +08:00
Jiarui Fang 7ef3507ace
[zero] show model data cuda memory usage after zero context init. (#515) 2022-03-25 11:23:35 +08:00
ver217 a2e61d61d4
[zero] zero init ctx enable rm_torch_payload_on_the_fly (#512)
* enable rm_torch_payload_on_the_fly

* polish docstr
2022-03-24 23:44:00 +08:00
Jiarui Fang 81145208d1
[install] run without rich (#513) 2022-03-24 17:39:50 +08:00
HELSON 0f2d219162
[MOE] add MOEGPT model (#510) 2022-03-24 17:39:21 +08:00
Jiarui Fang bca0c49a9d
[zero] use colo model data api in optimv2 (#511) 2022-03-24 17:19:34 +08:00
Jiarui Fang 9330be0f3c
[memory] set cuda mem frac (#506) 2022-03-24 16:57:13 +08:00
Frank Lee 97933b6710
[devops] recover tsinghua pip source due to proxy issue (#509) 2022-03-24 16:11:49 +08:00
Jiarui Fang 0035b7be07
[memory] add model data tensor moving api (#503) 2022-03-24 14:29:41 +08:00
Frank Lee 65ad47c35c
[devops] remove tsinghua source for pip (#507) 2022-03-24 14:12:02 +08:00
Frank Lee 44f7bcb277
[devops] remove tsinghua source for pip (#505) 2022-03-24 14:03:05 +08:00
binmakeswell af56c1d024
fix discussion button in issue template (#504) 2022-03-24 12:25:00 +08:00
Jiarui Fang a445e118cf
[polish] polish singleton and global context (#500) 2022-03-23 18:03:39 +08:00
ver217 9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard (#495)
* sharded model supports reuse of the fp16 shard

* rename variable

* polish code

* polish code

* polish code
2022-03-23 14:59:59 +08:00
HELSON f24b5ed201
[MOE] remove old MoE legacy (#493) 2022-03-22 17:37:16 +08:00
ver217 c4c02424f3
[zero] sharded model manages ophooks individually (#492) 2022-03-22 17:33:20 +08:00
HELSON c9023d4078
[MOE] support PR-MOE (#488) 2022-03-22 16:48:22 +08:00
ver217 a9ecb4b244
[zero] polish sharded optimizer v2 (#490) 2022-03-22 15:53:48 +08:00
ver217 62b0a8d644
[zero] sharded optim support hybrid cpu adam (#486)
* sharded optim support hybrid cpu adam

* update unit test

* polish docstring
2022-03-22 14:56:59 +08:00
Jiarui Fang b334822163
[zero] polish sharded param name (#484)
* [zero] polish sharded param name

* polish code

* polish

* polish code

* polish

* polish

* polish
2022-03-22 14:36:16 +08:00
ver217 9caa8b6481
docs get correct release version (#489) 2022-03-22 14:24:41 +08:00
HELSON d7ea63992b
[MOE] add FP32LinearGate for MOE in NaiveAMP context (#480) 2022-03-22 10:50:20 +08:00
github-actions[bot] 353566c198
Automated submodule synchronization (#483)
Co-authored-by: github-actions <github-actions@github.com>
2022-03-22 09:34:26 +08:00
Jiarui Fang 65c0f380c2
[format] polish name format for MOE (#481) 2022-03-21 23:19:47 +08:00
ver217 8d3250d74b
[zero] ZeRO supports pipeline parallel (#477) 2022-03-21 16:55:37 +08:00
Sze-qq 7f5e4592eb
Update experiment results about Colossal-AI with ZeRO (#479)
* [readme] add experimental visualisation regarding ColossalAI with ZeRO (#476)

* Hotfix/readme (#478)

* add experimental visualisation regarding ColossalAI with ZeRO

* adjust newly-added figure size
2022-03-21 16:34:07 +08:00
Frank Lee 83a847d058
[test] added rerun on exception for testing (#475)
* [test] added rerun on exception function

* polish code
2022-03-21 15:51:57 +08:00
ver217 d70f43dd7a
embedding remove attn mask (#474) 2022-03-21 14:53:23 +08:00
HELSON 7544347145
[MOE] add unit tests for MOE experts layout, gradient handler and kernel (#469) 2022-03-21 13:35:04 +08:00