Commit Graph

608 Commits (48c4a180c7cc615a23bbeccc601b96ec3bceeefa)

Author SHA1 Message Date
ver217 38102cf61a
update version (#779) 2022-04-16 17:09:24 +08:00
HELSON a65cbb7e4e
[zero] refactor shard and gather operation (#773) 2022-04-15 14:41:31 +08:00
Frank Lee 5a1a095b92
[test] refactored with the new rerun decorator (#763)
* [test] refactored with the new rerun decorator

* polish test case
2022-04-15 00:33:04 +08:00
binmakeswell deaf99f4c9
[readme] sync CN readme (#766) 2022-04-14 21:04:51 +08:00
ver217 6e553748a7
polish sharded optim docstr and warning (#770) 2022-04-14 21:03:59 +08:00
LuGY 80e37eec42
fix the ckpt bugs when using DDP (#769) 2022-04-14 21:03:24 +08:00
Jiarui Fang 1f698f4406
[readme] polish readme (#764)
* [readme] polish readme

* centering image
2022-04-14 17:34:08 +08:00
Frank Lee 920fe31526
[compatibility] used backward-compatible API for global process group (#758) 2022-04-14 17:20:35 +08:00
Frank Lee 4ea49cb536
[test] added a decorator for address already in use error with backward compatibility (#760)
* [test] added a decorator for address already in use error with backward compatibility

* [test] added a decorator for address already in use error with backward compatibility
2022-04-14 16:48:44 +08:00
Jiarui Fang 10ef8afdd2
[gemini] init gemini individual directory (#754) 2022-04-14 16:40:26 +08:00
ver217 dcca614eee
[hotfix] fix test_stateful_tensor_mgr (#762) 2022-04-14 15:50:09 +08:00
github-actions[bot] 6978980f6d
Automated submodule synchronization (#751)
Co-authored-by: github-actions <github-actions@github.com>
2022-04-14 15:34:01 +08:00
ver217 a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model (#756)
* fix reuse_fp16_shard

* disable test stm

* polish code
2022-04-14 14:56:46 +08:00
ver217 8f7ce94b8e
[hotfix] fix auto tensor placement policy (#753) 2022-04-14 12:04:45 +08:00
HELSON 84c6700b2a
[zero] refactor memstats_collector (#746) 2022-04-14 12:01:12 +08:00
アマデウス b8899e0905
[TP] allow layernorm without bias (#750) 2022-04-14 11:43:56 +08:00
Jiarui Fang 3d7dc46d33
[zero] use factory pattern for tensor_placement_policy (#752) 2022-04-14 11:07:29 +08:00
ver217 4b048a8728
fix prepare grads in sharded optim (#749) 2022-04-13 22:36:11 +08:00
ver217 097772546e
fix initialize about zero 2022-04-13 19:10:21 +08:00
ver217 e396bb71f2
[zero] add tensor placement policies (#743)
* add tensor placement policies

* polish comments

* polish comments

* update moe unit tests
2022-04-13 15:00:48 +08:00
HELSON 22c4b88d56
[zero] refactor ShardedParamV2 for convenience (#742) 2022-04-13 14:54:26 +08:00
HELSON 340e59f968
[utils] add synchronized cuda memory monitor (#740) 2022-04-13 10:50:54 +08:00
ver217 e6212f56cd
[hotfix] fix memory leak in backward of sharded model (#741) 2022-04-13 09:59:05 +08:00
Frank Lee f4f42d4c3c
[bug] fixed DDP compatibility with torch 1.8 (#739) 2022-04-13 00:08:46 +08:00
Frank Lee a4e91bc87f
[bug] fixed grad scaler compatibility with torch 1.8 (#735) 2022-04-12 16:04:21 +08:00
Jiarui Fang 53cb584808
[utils] correct cpu memory used and capacity in the context of multi-process (#726) 2022-04-12 14:57:54 +08:00
Jiarui Fang 7db3ccc79b
[hotfix] remove duplicated param register to stateful tensor manager (#728) 2022-04-12 13:55:25 +08:00
binmakeswell 600e769a42
add video (#732) 2022-04-12 13:41:56 +08:00
Frank Lee a5c3f072f6
[bug] removed zero installation requirements (#731) 2022-04-12 13:27:25 +08:00
HELSON b9b469ea50
[moe] add checkpoint for moe zero test (#729) 2022-04-12 12:11:54 +08:00
Frank Lee 6f7d1362c9
[doc] removed outdated installation command (#730) 2022-04-12 11:56:45 +08:00
FrankLeeeee e88a498c9c
[test] removed trivial outdated test 2022-04-12 11:08:15 +08:00
FrankLeeeee 62b4ce7326
[test] added missing decorators to model checkpointing tests 2022-04-12 11:08:15 +08:00
Frank Lee 1cb7bdad3b
[util] fixed communication API depth with PyTorch 1.9 (#721) 2022-04-12 09:44:40 +08:00
Frank Lee 2412429d54
[util] fixed activation checkpointing on torch 1.9 (#719) 2022-04-12 09:35:45 +08:00
Frank Lee 04ff5ea546
[utils] support detection of number of processes on current node (#723) 2022-04-12 09:28:19 +08:00
Jiarui Fang 4d90a7b513
[refactor] zero directory (#724) 2022-04-11 23:13:02 +08:00
Frank Lee 20ab1f5520
[bug] fixed broken test_found_inf (#725) 2022-04-11 22:00:27 +08:00
Jiarui Fang 193dc8dacb
[refactor] refactor the memory utils (#715) 2022-04-11 16:47:57 +08:00
HELSON dbd96fe90a
[zero] check whether gradients have inf and nan in gpu (#712) 2022-04-11 15:40:13 +08:00
ver217 715b86eadd
[hotfix] fix stm cuda model data size (#710) 2022-04-11 15:10:39 +08:00
LuGY 140263a394
[hotfix] fixed bugs of assigning grad states to non-leaf nodes (#711)
* fixed bugs of assigning grad states to non-leaf nodes

* use detach()
2022-04-11 14:04:58 +08:00
Frank Lee eda30a058e
[compatibility] fixed tensor parallel compatibility with torch 1.9 (#700) 2022-04-11 13:44:50 +08:00
HELSON a9b8300d54
[zero] improve adaptability for not-shard parameters (#708)
* adapt post grad hooks for not-shard parameters

* adapt optimizer for not-shard parameters

* offload gradients for not-replicated parameters
2022-04-11 13:38:51 +08:00
ver217 ab8c6b4a0e
[zero] refactor memstats collector (#706)
* refactor memstats collector

* fix disposable

* polish code
2022-04-11 10:46:08 +08:00
アマデウス 3fc8a204dc
Corrected 3d vocab parallel embedding (#707) 2022-04-11 10:17:55 +08:00
HELSON ee112fe1da
[zero] adapt zero hooks for unsharded module (#699) 2022-04-08 20:23:26 +08:00
binmakeswell 896ade15d6
add PaLM link (#704) (#705) 2022-04-08 18:42:12 +08:00
binmakeswell 270157e9e7
add PaLM link (#704)
* add PaLM link
2022-04-08 18:26:59 +08:00
ver217 3c9cd5bb5e
[zero] stateful tensor manager (#687)
* [WIP] stateful tensor manager

* add eviction strategy

* polish code

* polish code

* polish comment

* add unit test

* fix sampler bug

* polish code

* fix max sampling cnt resetting bug

* fix sampler bug

* polish code

* fix bug

* fix unit test

Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-04-08 17:51:34 +08:00