Commit Graph

2496 Commits (6b30dfb7ce002be3acc0668d3fa44c4d4ebb4108)

Author SHA1 Message Date
ver217 304263c2ce fix gpt attention mask (#461) 2022-03-18 17:24:19 +08:00
ver217 fc8e6db005 [doc] Update docstring for ZeRO (#459) 2022-03-18 16:48:20 +08:00
    * polish sharded model docstr
    * polish sharded optim docstr
    * polish zero docstr
    * polish shard strategy docstr
HELSON 84fd7c1d4d add moe context, moe utilities and refactor gradient handler (#455) 2022-03-18 16:38:32 +08:00
Frank Lee af185b5519 [test] fixed amp convergence comparison test (#454) 2022-03-18 16:28:16 +08:00
ver217 a241f61b34 [zero] Update initialize for ZeRO (#458) 2022-03-18 16:18:31 +08:00
    * polish code
    * shard strategy receive pg in shard() / gather()
    * update zero engine
    * polish code
ver217 642846d6f9 update sharded optim and fix zero init ctx (#457) 2022-03-18 15:44:47 +08:00
Jiarui Fang e2e9f82588 Revert "[zero] update sharded optim and fix zero init ctx" (#456) 2022-03-18 15:22:43 +08:00
    * Revert "polish code"
      This reverts commit 8cf7ff08cf.
    * Revert "rename variables"
      This reverts commit e99af94ab8.
    * Revert "remove surplus imports"
      This reverts commit 46add4a5c5.
    * Revert "update sharded optim and fix zero init ctx"
      This reverts commit 57567ee768.
ver217 8cf7ff08cf polish code 2022-03-18 14:25:25 +08:00
ver217 e99af94ab8 rename variables 2022-03-18 14:25:25 +08:00
ver217 46add4a5c5 remove surplus imports 2022-03-18 14:25:25 +08:00
ver217 57567ee768 update sharded optim and fix zero init ctx 2022-03-18 14:25:25 +08:00
Frank Lee f27d801a13 [test] optimized zero data parallel test (#452) 2022-03-18 11:35:54 +08:00
github-actions[bot] cfcc8271f3 [Bot] Automated submodule synchronization (#451) 2022-03-18 09:51:43 +08:00
    Co-authored-by: github-actions <github-actions@github.com>
Frank Lee ac4513c56e [DevOps] remove unneeded dependency in build workflow (#449) 2022-03-17 17:29:02 +08:00
Jiarui Fang 0fcfb1e00d [test] make zero engine test really work (#447) 2022-03-17 17:24:25 +08:00
Frank Lee bb2790cf0b optimize engine and trainer test (#448) 2022-03-17 15:44:17 +08:00
Jiarui Fang 237d08e7ee [zero] hybrid cpu adam (#445) 2022-03-17 15:05:41 +08:00
Frank Lee b72b8445c6 optimized context test time consumption (#446) 2022-03-17 14:40:52 +08:00
Jiarui Fang 496cbb0760 [hotfix] fix initialize bug with zero (#442) 2022-03-17 13:16:22 +08:00
Frank Lee 725a39f4bd update github CI with the current workflow (#441) 2022-03-17 10:38:04 +08:00
Frank Lee 5a1e33b97f update contributing.md with the current workflow (#440) 2022-03-17 10:28:04 +08:00
Jiarui Fang 17b8274f8a [unitest] polish zero config in unittest (#438) 2022-03-17 10:20:53 +08:00
Jiarui Fang 640a6cd304 [refactory] refactory the initialize method for new zero design (#431) 2022-03-16 19:29:37 +08:00
Frank Lee 4f85b687cf [misc] replace codebeat with codefactor on readme (#436) 2022-03-16 17:43:52 +08:00
Frank Lee bffd85bf34 added testing module (#435) 2022-03-16 17:20:05 +08:00
HELSON dbdc9a7783 added Multiply Jitter and capacity factor eval for MOE (#434) 2022-03-16 16:47:44 +08:00
Frank Lee b03b3ae99c fixed mem monitor device (#433) 2022-03-16 15:25:02 +08:00
Frank Lee 14a7094243 fixed fp16 optimizer none grad bug (#432) 2022-03-16 14:35:46 +08:00
ver217 fce9432f08 sync before creating empty grad 2022-03-16 14:24:09 +08:00
ver217 ea6905a898 free param.grad 2022-03-16 14:24:09 +08:00
ver217 9506a8beb2 use double buffer to handle grad 2022-03-16 14:24:09 +08:00
Frank Lee 0f5f5dd556 fixed gpt attention mask in pipeline (#430) 2022-03-16 14:23:43 +08:00
Jiarui Fang f9c762df85 [test] merge zero optim tests (#428) 2022-03-16 12:22:45 +08:00
Frank Lee f0d6e2208b [polish] add license meta to setup.py (#427) 2022-03-16 12:05:56 +08:00
Jiarui Fang 5d7dc3525b [hotfix] run cpu adam unittest in pytest (#424) 2022-03-16 10:39:55 +08:00
Jiarui Fang 54229cd33e [log] better logging display with rich (#426) 2022-03-16 09:51:15 +08:00
    * better logger using rich
    * remove deepspeed in zero requirements
HELSON 3f70a2b12f removed noisy function during evaluation of MoE router (#419) 2022-03-15 12:06:09 +08:00
Jiarui Fang adebb3e041 [zero] cuda margin space for OS (#418) 2022-03-15 12:02:19 +08:00
Jiarui Fang 56bb412e72 [polish] use GLOBAL_MODEL_DATA_TRACER (#417) 2022-03-15 11:29:46 +08:00
Jiarui Fang 23ba3fc450 [zero] refactory ShardedOptimV2 init method (#416) 2022-03-15 10:45:55 +08:00
Frank Lee e79ea44247 [fp16] refactored fp16 optimizer (#392) 2022-03-15 10:05:38 +08:00
Frank Lee f8a0e7fb01 Merge pull request #412 from hpcaitech/develop 2022-03-14 22:48:56 +08:00
    merge develop to main
Jiarui Fang 21dc54e019 [zero] memtracer to record cuda memory usage of model data and overall system (#395) 2022-03-14 22:05:30 +08:00
Jiarui Fang a37bf1bc42 [hotfix] rm test_tensor_detector.py (#413) 2022-03-14 21:39:48 +08:00
Jiarui Fang 370f567e7d [zero] new interface for ShardedOptimv2 (#406) 2022-03-14 20:48:41 +08:00
LuGY a9c27be42e Added tensor detector (#393) 2022-03-14 18:01:46 +08:00
    * Added tensor detector
    * Added the - states
    * Allowed change include_cpu when detect()
Frank Lee 32296cf462 Merge pull request #409 from 1SAA/develop 2022-03-14 17:43:45 +08:00
    [hotfix] fixed error when no collective communication in CommProfiler
1SAA 907ac4a2dc fixed error when no collective communication in CommProfiler 2022-03-14 17:21:00 +08:00
Frank Lee 62b08acc72 update hf badge link (#410) 2022-03-14 17:07:01 +08:00
Frank Lee 2fe68b359a Merge pull request #403 from ver217/feature/shard-strategy 2022-03-14 16:29:28 +08:00
    [zero] Add bucket tensor shard strategy