アマデウス
6302069c0e
[model checkpoint] updated communication ops for cpu tensors ( #590 )
2022-04-01 16:52:20 +08:00
アマデウス
c50bfb807b
[model checkpoint] updated saving/loading for 1d layers ( #594 )
2022-04-01 16:51:52 +08:00
アマデウス
7636d518e1
[model checkpoint] updated saving/loading for 2d layers ( #595 )
2022-04-01 16:50:34 +08:00
アマデウス
cd13b63832
[model checkpoint] reworked unified layers for ease of save/load states ( #593 )
2022-04-01 16:49:56 +08:00
アマデウス
acae68eb04
[model checkpoint] updated checkpoint save/load utils ( #592 )
2022-04-01 16:49:21 +08:00
Ziyue Jiang
1c40ee8749
[TP] add assert for tp1d ( #621 )
2022-04-01 16:44:23 +08:00
ver217
369a288bf3
polish utils docstring ( #620 )
2022-04-01 16:36:47 +08:00
ver217
e619a651fb
polish optimizer docstring ( #619 )
2022-04-01 16:27:03 +08:00
ver217
8432dc7080
polish moe docstring ( #618 )
2022-04-01 16:15:36 +08:00
ver217
c5b488edf8
polish amp docstring ( #616 )
2022-04-01 16:09:39 +08:00
ver217
0ef8819c67
polish docstring of zero ( #612 )
2022-04-01 14:50:56 +08:00
LuGY
02b187c14f
[zero] add sampling time for memstats collector ( #610 )
2022-04-01 14:03:00 +08:00
ver217
9bee119104
[hotfix] fix sharded optim zero grad ( #604 )
...
* fix sharded optim zero grad
* polish comments
2022-04-01 12:41:20 +08:00
アマデウス
297b8baae2
[model checkpoint] add gloo groups for cpu tensor communication ( #589 )
2022-04-01 10:15:52 +08:00
アマデウス
54e688b623
moved ensure_path_exists to utils.common ( #591 )
2022-04-01 09:46:33 +08:00
Jiarui Fang
e956d93ac2
[refactor] memory utils ( #577 )
2022-04-01 09:22:33 +08:00
ver217
104cbbb313
[hotfix] add hybrid adam to __init__ ( #584 )
2022-03-31 19:08:34 +08:00
HELSON
e6d50ec107
[zero] adapt zero for unsharded parameters ( #561 )
...
* support existing sharded and unsharded parameters in zero
* add unitest for moe-zero model init
* polish moe gradient handler
2022-03-31 18:34:11 +08:00
Wesley
46c9ba33da
update code format
2022-03-31 17:15:08 +08:00
Wesley
666cfd094a
fix parallel_input flag for Linear1D_Col gather_output
2022-03-31 17:15:08 +08:00
ver217
7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param ( #571 )
2022-03-31 16:26:54 +08:00
Jiarui Fang
7675366fce
[polish] rename col_attr -> colo_attr ( #558 )
2022-03-31 12:25:45 +08:00
Liang Bowen
2c45efc398
html refactor ( #555 )
2022-03-31 11:36:56 +08:00
Jiarui Fang
d1211148a7
[utils] update colo tensor moving APIs ( #553 )
2022-03-30 23:13:24 +08:00
LuGY
c44d797072
[docs] updated docs of hybrid adam and cpu adam ( #552 )
2022-03-30 18:14:59 +08:00
ver217
014bac0c49
[zero] hijack p.grad in sharded model ( #554 )
...
* hijack p.grad in sharded model
* polish comments
* polish comments
2022-03-30 18:14:50 +08:00
Jiarui Fang
f552b11294
[zero] label state for param fp16 and grad ( #551 )
2022-03-30 15:57:46 +08:00
Jiarui Fang
214da761d4
[zero] add stateful tensor ( #549 )
2022-03-30 13:51:37 +08:00
Jiarui Fang
107b99ddb1
[zero] dump memory stats for sharded model ( #548 )
2022-03-30 09:38:44 +08:00
Ziyue Jiang
763dc325f1
[TP] Add gather_out arg to Linear ( #541 )
2022-03-30 09:35:46 +08:00
HELSON
8c90d4df54
[zero] add zero context manager to change config during initialization ( #546 )
2022-03-29 17:57:59 +08:00
Liang Bowen
ec5086c49c
Refactored docstring to google style
2022-03-29 17:17:47 +08:00
Jiarui Fang
53b1b6e340
[zero] non model data tracing ( #545 )
2022-03-29 15:45:48 +08:00
Jie Zhu
73d36618a6
[profiler] add MemProfiler ( #356 )
...
* add memory trainer hook
* fix bug
* add memory trainer hook
* fix import bug
* fix import bug
* add trainer hook
* fix #370 git log bug
* modify `to_tensorboard` function to support better output
* remove useless output
* change the name of `MemProfiler`
* complete memory profiler
* replace error with warning
* finish trainer hook
* modify interface of MemProfiler
* modify `__init__.py` in profiler
* remove unnecessary pass statement
* add usage to doc string
* add usage to trainer hook
* new location to store temp data file
2022-03-29 12:48:34 +08:00
ver217
fb841dd5c5
[zero] optimize grad offload ( #539 )
...
* optimize grad offload
* polish code
* polish code
2022-03-29 12:48:00 +08:00
Jiarui Fang
7d81b5b46e
[logging] polish logger format ( #543 )
2022-03-29 10:37:11 +08:00
ver217
1f90a3b129
[zero] polish ZeroInitContext ( #540 )
2022-03-29 09:09:04 +08:00
Jiarui Fang
c11ff81b15
[zero] get memory usage of sharded optim v2. ( #542 )
2022-03-29 09:08:18 +08:00
HELSON
a30e2b4c24
[zero] adapt for no-leaf module in zero ( #535 )
...
only process module's own parameters in Zero context
add zero hooks for all modules that contain parameters
gather parameters only belonging to module itself
2022-03-28 17:42:18 +08:00
Jiarui Fang
705f56107c
[zero] refactor model data tracing ( #537 )
2022-03-28 16:38:18 +08:00
Jiarui Fang
a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param ( #538 )
2022-03-28 16:19:19 +08:00
Jiarui Fang
37cb70feec
[zero] get memory usage for sharded param ( #536 )
2022-03-28 15:01:21 +08:00
Jiarui Fang
05e33b2578
[zero] fix grad offload ( #528 )
...
* [zero] fix grad offload
* polish code
2022-03-25 18:23:25 +08:00
LuGY
105c5301c3
[zero] added hybrid adam, removed loss scale in adam ( #527 )
...
* [zero] added hybrid adam, removed loss scale of adam
* remove useless code
2022-03-25 18:03:54 +08:00
Jiarui Fang
8d8c5407c0
[zero] refactor model data tracing ( #522 )
2022-03-25 18:03:32 +08:00
Frank Lee
3601b2bad0
[test] fixed rerun_on_exception and adapted test cases ( #487 )
2022-03-25 17:25:12 +08:00
Jiarui Fang
4d322b79da
[refactor] remove old zero code ( #517 )
2022-03-25 14:54:39 +08:00
LuGY
6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 ( #497 )
2022-03-25 14:15:53 +08:00
Jiarui Fang
920c5889a7
[zero] add colo move inline ( #521 )
2022-03-25 14:02:55 +08:00
ver217
7be397ca9c
[log] polish disable_existing_loggers ( #519 )
2022-03-25 12:30:55 +08:00