HELSON | b528eea0f0 | 2023-01-29 17:52:58 +08:00 | [zero] add zero wrappers (#2523)
    * [zero] add zero wrappers
    * change names
    * add wrapper functions to init
HELSON | 077a5cdde4 | 2023-01-29 15:09:57 +08:00 | [zero] fix gradient clipping in hybrid parallelism (#2521)
    * [zero] fix gradient clipping in hybrid parallelism
    * [testing] change model name to avoid pytest warning
    * [hotfix] fix unit testing
HELSON | d565a24849 | 2023-01-18 10:36:10 +08:00 | [zero] add unit testings for hybrid parallelism (#2486)
HELSON | 21c88220ce | 2023-01-15 10:42:01 +08:00 | [zero] add unit test for low-level zero init (#2474)
HELSON | a5dc4253c6 | 2023-01-13 14:56:17 +08:00 | [zero] polish low level optimizer (#2473)
Jiarui Fang | 867c8c2d3a | 2023-01-13 10:05:58 +08:00 | [zero] low level optim supports ProcessGroup (#2464)
HELSON | a3100bd50d | 2022-12-26 17:35:36 +08:00 | [testing] add beit model for unit testings (#2196)
    * [testing] add beit model
    * [beit] fix bugs
    * [beit] fix bugs
    * [testing] fix bugs
Jiarui Fang | b87496a66b | 2022-12-20 23:03:18 +08:00 | [hotfix] fix auto policy of test_sharded_optim_v2 (#2157)
Jiarui Fang | c89c66a858 | 2022-12-14 00:47:06 +08:00 | [Gemini] update API of the chunkmemstatscollector. (#2129)
Jiarui Fang | 1fca5d79ea | 2022-12-06 22:30:16 +08:00 | [Gemini] remove GLOBAL_MODEL_DATA_TRACER (#2091)
Jiarui Fang | 33f4412102 | 2022-12-06 16:43:06 +08:00 | [Gemini] use MemStats to store the tracing data. Seperate it from Collector. (#2084)
Jiarui Fang | 1e885329f4 | 2022-11-30 15:45:26 +08:00 | [test] align model name with the file name. (#2045)
HELSON | a1ce02d740 | 2022-11-29 13:00:30 +08:00 | [zero] test gradient accumulation (#1964)
    * [zero] fix memory leak for zero2
    * [zero] test gradient accumulation
    * [zero] remove grad clip test
Jiarui Fang | 3712ac7f90 | 2022-11-18 14:58:28 +08:00 | [Gemini] add bert for MemtracerWrapper unintests (#1982)
HELSON | 7066dfbf82 | 2022-11-16 11:43:24 +08:00 | [zero] fix memory leak for zero2 (#1955)
HELSON | 6e51d296f0 | 2022-11-11 09:26:40 +08:00 | [zero] migrate zero1&2 (#1878)
    * add zero1&2 optimizer
    * rename test ditectory
    * rename test files
    * change tolerance in test
HELSON | b28991dd0a | 2022-10-09 09:18:51 +08:00 | [feature] A new ZeRO implementation (#1644)
Jiarui Fang | c5d39215f6 | 2022-09-26 10:06:03 +08:00 | Revert "[feature] new zero implementation (#1623)" (#1643)
    This reverts commit 5be118f405.
HELSON | 5be118f405 | 2022-09-24 19:58:18 +08:00 | [feature] new zero implementation (#1623)
HELSON | f7f2248771 | 2022-09-22 13:56:30 +08:00 | [moe] fix MoE bugs (#1628)
    * remove forced FP32 modules
    * correct no_shard-contexts' positions
ver217 | 8dced41ad0 | 2022-07-29 13:22:50 +08:00 | [zero] zero optim state_dict takes only_rank_0 (#1384)
    * zero optim state_dict takes only_rank_0
    * fix unit test
ver217 | 828b9e5e0d | 2022-07-28 17:19:39 +08:00 | [hotfix] fix zero optim save/load state dict (#1381)
HELSON | 7a8702c06d | 2022-07-21 10:53:15 +08:00 | [colotensor] add Tensor.view op and its unit test (#1343)
    [colotensor] add megatron initialization for gpt2
ver217 | 0c51ff2c13 | 2022-07-18 14:14:52 +08:00 | [hotfix] ZeroDDP use new process group (#1333)
    * process group supports getting ranks in group
    * chunk mgr receives a process group
    * update unit test
    * fix unit tests
ver217 | 7a05367101 | 2022-07-15 22:11:37 +08:00 | [hotfix] shared model returns cpu state_dict (#1328)
Jiarui Fang | 060b917daf | 2022-07-04 18:54:37 +08:00 | [refactor] remove gpc dependency in colotensor's _ops (#1189)
Jiarui Fang | 372f791444 | 2022-06-29 13:31:02 +08:00 | [refactor] move chunk and chunkmgr to directory gemini (#1182)
ver217 | 9e1daa63d2 | 2022-06-24 18:05:16 +08:00 | [zero] sharded optim supports loading local state dict (#1170)
    * sharded optim supports loading local state dict
    * polish code
    * add unit test
ver217 | 561e90493f | 2022-06-24 17:25:57 +08:00 | [zero] zero optim supports loading local state dict (#1171)
    * zero optim supports loading local state dict
    * polish code
    * add unit test
Frank Lee | 65ee6dcc20 | 2022-06-08 23:14:18 +08:00 | [test] ignore 8 gpu test (#1080)
    * [test] ignore 8 gpu test
    * polish code
    * polish workflow
    * polish workflow
HELSON | e5ea3fdeef | 2022-04-24 13:08:48 +08:00 | [gemini] add GeminiMemoryManger (#832)
    * refactor StatefulTensor, tensor utilities
    * add unitest for GeminiMemoryManager
Jiarui Fang | e761ad2cd7 | 2022-04-19 14:40:02 +08:00 | Revert "[zero] add ZeroTensorShardStrategy (#793)" (#806)
HELSON | 88759e289e | 2022-04-19 14:32:45 +08:00 | [zero] add ZeroTensorShardStrategy (#793)
Jiarui Fang | 4d9332b4c5 | 2022-04-19 10:13:08 +08:00 | [refactor] moving memtracer to gemini (#801)
HELSON | 4c4388c46e | 2022-04-18 13:57:03 +08:00 | [hotfix] fix memory leak in zero (#781)
Frank Lee | 5a1a095b92 | 2022-04-15 00:33:04 +08:00 | [test] refactored with the new rerun decorator (#763)
    * [test] refactored with the new rerun decorator
    * polish test case
Jiarui Fang | 10ef8afdd2 | 2022-04-14 16:40:26 +08:00 | [gemini] init genimi individual directory (#754)
ver217 | dcca614eee | 2022-04-14 15:50:09 +08:00 | [hotfix] fix test_stateful_tensor_mgr (#762)
ver217 | a93a7d7364 | 2022-04-14 14:56:46 +08:00 | [hotfix] fix reuse_fp16_shard of sharded model (#756)
    * fix reuse_fp16_shard
    * disable test stm
    * polish code
HELSON | 84c6700b2a | 2022-04-14 12:01:12 +08:00 | [zero] refactor memstats_collector (#746)
ver217 | e396bb71f2 | 2022-04-13 15:00:48 +08:00 | [zero] add tensor placement policies (#743)
    * add tensor placement policies
    * polish comments
    * polish comments
    * update moe unit tests
HELSON | 22c4b88d56 | 2022-04-13 14:54:26 +08:00 | [zero] refactor ShardedParamV2 for convenience (#742)
Frank Lee | f4f42d4c3c | 2022-04-13 00:08:46 +08:00 | [bug] fixed DDP compatibility with torch 1.8 (#739)
Jiarui Fang | 53cb584808 | 2022-04-12 14:57:54 +08:00 | [utils] correct cpu memory used and capacity in the context of multi-process (#726)