ver217
dcca614eee
[hotfix] fix test_stateful_tensor_mgr ( #762 )
2022-04-14 15:50:09 +08:00
github-actions[bot]
6978980f6d
Automated submodule synchronization ( #751 )
Co-authored-by: github-actions <github-actions@github.com>
2022-04-14 15:34:01 +08:00
ver217
a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model ( #756 )
* fix reuse_fp16_shard
* disable test stm
* polish code
2022-04-14 14:56:46 +08:00
ver217
8f7ce94b8e
[hotfix] fix auto tensor placement policy ( #753 )
2022-04-14 12:04:45 +08:00
HELSON
84c6700b2a
[zero] refactor memstats_collector ( #746 )
2022-04-14 12:01:12 +08:00
アマデウス
b8899e0905
[TP] allow layernorm without bias ( #750 )
2022-04-14 11:43:56 +08:00
Jiarui Fang
3d7dc46d33
[zero] use factory pattern for tensor_placement_policy ( #752 )
2022-04-14 11:07:29 +08:00
ver217
4b048a8728
fix prepare grads in sharded optim ( #749 )
2022-04-13 22:36:11 +08:00
ver217
097772546e
fix initialization with zero
2022-04-13 19:10:21 +08:00
ver217
e396bb71f2
[zero] add tensor placement policies ( #743 )
* add tensor placement policies
* polish comments
* polish comments
* update moe unit tests
2022-04-13 15:00:48 +08:00
HELSON
22c4b88d56
[zero] refactor ShardedParamV2 for convenience ( #742 )
2022-04-13 14:54:26 +08:00
HELSON
340e59f968
[utils] add synchronized cuda memory monitor ( #740 )
2022-04-13 10:50:54 +08:00
ver217
e6212f56cd
[hotfix] fix memory leak in backward of sharded model ( #741 )
2022-04-13 09:59:05 +08:00
Frank Lee
f4f42d4c3c
[bug] fixed DDP compatibility with torch 1.8 ( #739 )
2022-04-13 00:08:46 +08:00
Frank Lee
a4e91bc87f
[bug] fixed grad scaler compatibility with torch 1.8 ( #735 )
2022-04-12 16:04:21 +08:00
Jiarui Fang
53cb584808
[utils] correct cpu memory used and capacity in the context of multi-process ( #726 )
2022-04-12 14:57:54 +08:00
Jiarui Fang
7db3ccc79b
[hotfix] remove duplicated param register to stateful tensor manager ( #728 )
2022-04-12 13:55:25 +08:00
binmakeswell
600e769a42
add video ( #732 )
2022-04-12 13:41:56 +08:00
Frank Lee
a5c3f072f6
[bug] removed zero installation requirements ( #731 )
2022-04-12 13:27:25 +08:00
HELSON
b9b469ea50
[moe] add checkpoint for moe zero test ( #729 )
2022-04-12 12:11:54 +08:00
Frank Lee
6f7d1362c9
[doc] removed outdated installation command ( #730 )
2022-04-12 11:56:45 +08:00
FrankLeeeee
e88a498c9c
[test] removed trivial outdated test
2022-04-12 11:08:15 +08:00
FrankLeeeee
62b4ce7326
[test] added missing decorators to model checkpointing tests
2022-04-12 11:08:15 +08:00
Frank Lee
1cb7bdad3b
[util] fixed communication API depth with PyTorch 1.9 ( #721 )
2022-04-12 09:44:40 +08:00
Frank Lee
2412429d54
[util] fixed activation checkpointing on torch 1.9 ( #719 )
2022-04-12 09:35:45 +08:00
Frank Lee
04ff5ea546
[utils] support detection of number of processes on current node ( #723 )
2022-04-12 09:28:19 +08:00
Jiarui Fang
4d90a7b513
[refactor] zero directory ( #724 )
2022-04-11 23:13:02 +08:00
Frank Lee
20ab1f5520
[bug] fixed broken test_found_inf ( #725 )
2022-04-11 22:00:27 +08:00
Jiarui Fang
193dc8dacb
[refactor] refactor the memory utils ( #715 )
2022-04-11 16:47:57 +08:00
HELSON
dbd96fe90a
[zero] check whether gradients have inf and nan in gpu ( #712 )
2022-04-11 15:40:13 +08:00
ver217
715b86eadd
[hotfix] fix stm cuda model data size ( #710 )
2022-04-11 15:10:39 +08:00
LuGY
140263a394
[hotfix] fixed bugs of assigning grad states to non-leaf nodes ( #711 )
* fixed bugs of assigning grad states to non-leaf nodes
* use detach()
2022-04-11 14:04:58 +08:00
Frank Lee
eda30a058e
[compatibility] fixed tensor parallel compatibility with torch 1.9 ( #700 )
2022-04-11 13:44:50 +08:00
HELSON
a9b8300d54
[zero] improve adaptability for non-sharded parameters ( #708 )
* adapt post-grad hooks for non-sharded parameters
* adapt optimizer for non-sharded parameters
* offload gradients for non-replicated parameters
2022-04-11 13:38:51 +08:00
ver217
ab8c6b4a0e
[zero] refactor memstats collector ( #706 )
* refactor memstats collector
* fix disposable
* polish code
2022-04-11 10:46:08 +08:00
アマデウス
3fc8a204dc
Corrected 3D vocab parallel embedding ( #707 )
2022-04-11 10:17:55 +08:00
HELSON
ee112fe1da
[zero] adapt zero hooks for unsharded module ( #699 )
2022-04-08 20:23:26 +08:00
binmakeswell
896ade15d6
add PaLM link ( #704 ) ( #705 )
2022-04-08 18:42:12 +08:00
binmakeswell
270157e9e7
add PaLM link ( #704 )
* add PaLM link
2022-04-08 18:26:59 +08:00
ver217
3c9cd5bb5e
[zero] stateful tensor manager ( #687 )
* [WIP] stateful tensor manager
* add eviction strategy
* polish code
* polish code
* polish comment
* add unit test
* fix sampler bug
* polish code
* fix max sampling cnt resetting bug
* fix sampler bug
* polish code
* fix bug
* fix unit test
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-04-08 17:51:34 +08:00
ver217
70e8dd418b
[hotfix] update requirements-test ( #701 )
2022-04-08 16:52:36 +08:00
Frank Lee
1ae94ea85a
[ci] remove ipc config for rootless docker ( #694 )
2022-04-08 10:15:52 +08:00
github-actions[bot]
d878d843ad
Automated submodule synchronization ( #695 )
Co-authored-by: github-actions <github-actions@github.com>
2022-04-08 10:03:53 +08:00
github-actions[bot]
d50cdabbc9
Automated submodule synchronization ( #556 )
Co-authored-by: github-actions <github-actions@github.com>
2022-04-07 22:11:00 +08:00
Frank Lee
dbe8e030fb
[ci] added missing field in workflow ( #692 )
2022-04-07 18:07:15 +08:00
Frank Lee
0372ed7951
[ci] update workflow trigger condition and support options ( #691 )
2022-04-07 17:53:03 +08:00
HELSON
d7ecaf362b
[zero] fix init bugs in zero context ( #686 )
* adapt model weight initialization for methods in PyTorch nn.init
2022-04-07 17:38:45 +08:00
YuliangLiu0306
0ed7042f42
[pipeline] refactor pipeline ( #679 )
* refactor pipeline: put runtime schedule into engine
* add type hint Optional[BaseSchedule] for schedule
* preprocess schedule during engine initialization
* infer pipeline schedule params from config
2022-04-07 15:54:14 +08:00
Frank Lee
eace69387d
[ci] fixed compatibility workflow ( #678 )
2022-04-06 16:19:34 +08:00
Jiarui Fang
59bf2dc590
[zero] initialize a stateful tensor manager ( #614 )
2022-04-06 16:18:49 +08:00