jiaruifang
dec24561cf
show pytest parametrize
2022-03-11 15:50:28 +08:00
Jiarui Fang
11bddb6e55
[zero] update zero context init with the updated test utils ( #327 )
2022-03-11 15:50:28 +08:00
Frank Lee
6268446b81
[test] refactored testing components ( #324 )
2022-03-11 15:50:28 +08:00
HELSON
4f26fabe4f
fixed strings in profiler outputs ( #325 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
de0468c7a8
[zero] zero init context ( #321 )
...
* add zero init context
* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2
* polish code
2022-03-11 15:50:28 +08:00
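For orientation on this commit: below is a minimal sketch of the conversion step a zero init context performs, including the guard against repeated conversion that the commit body mentions. The real context hooks model construction; apart from ShardedParamV2, the names and the convert_fp16 flag are illustrative assumptions, not the repository's API.

```python
import torch.nn as nn

class ShardedParamV2:
    """Illustrative stand-in: wraps a parameter's payload for sharding."""
    def __init__(self, param: nn.Parameter):
        self.payload = param.data  # a real version keeps only this rank's shard

def convert_params(module: nn.Module, convert_fp16: bool = False) -> None:
    for param in module.parameters():
        # Guard: never convert the same param twice (shared weights,
        # nested contexts) -- the bug the commit body fixes.
        if hasattr(param, "_sharded"):
            continue
        if convert_fp16:  # stands in for one of the context's "more flags"
            param.data = param.data.half()
        param._sharded = ShardedParamV2(param)

model = nn.Linear(4, 4)
convert_params(model)
convert_params(model)  # second call is a no-op thanks to the guard
```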
1SAA
73bff11288
Added profiler communication operations
...
Fixed bug for learning rate scheduler
2022-03-11 15:50:28 +08:00
binmakeswell
d275b98b7d
add badge and contributor list
2022-03-11 15:50:28 +08:00
LuGY
a3269de5c9
[zero] cpu adam kernel ( #288 )
...
* Added CPU Adam
* finished the cpu adam
* updated the license
* deleted useless parameters, removed resnet
* modified the method of the cpu adam unit test
* deleted some useless code
* removed useless code
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-03-11 15:50:28 +08:00
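As a reference for what a fused CPU Adam kernel computes, here is the standard Adam update written in plain PyTorch; the kernel performs the same math in a single pass over CPU-resident fp32 parameters. This is a sketch of the algorithm, not the kernel's source.

```python
import torch

def adam_step(p, grad, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m.mul_(beta1).add_(grad, alpha=1 - beta1)            # first moment
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # second moment
    m_hat = m / (1 - beta1 ** step)                      # bias correction
    v_hat = v / (1 - beta2 ** step)
    p.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)

p, g = torch.randn(10), torch.randn(10)
m, v = torch.zeros_like(p), torch.zeros_like(p)
adam_step(p, g, m, v, step=1)
```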
Jiarui Fang
90d3aef62c
[zero] yet another improved sharded param ( #311 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
c9e7d9582d
[zero] polish shard strategy ( #310 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
* add shard strategy
* move shard and gather logic from sharded tensor to shard strategy
* polish code
2022-03-11 15:50:28 +08:00
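The commit body above describes moving shard/gather logic off the tensor and into a strategy object, so sharding schemes can be swapped without touching the tensor class. A single-process sketch of that separation, with illustrative names (the distributed all-gather is elided):

```python
import torch

class ShardedTensor:
    """Owns only a payload; how it is sharded is the strategy's concern."""
    def __init__(self, payload: torch.Tensor):
        self.payload = payload
        self.is_sharded = False

    def set_payload(self, t: torch.Tensor) -> None:
        self.payload = t

class NaiveShardStrategy:
    """Rank r keeps the r-th 1/world_size slice of each tensor."""
    def __init__(self, rank: int, world_size: int):
        self.rank, self.world_size = rank, world_size

    def shard(self, st: ShardedTensor) -> None:
        if st.is_sharded:
            return
        chunk = st.payload.flatten().chunk(self.world_size)[self.rank]
        st.set_payload(chunk.clone())
        st.is_sharded = True

    def gather(self, st: ShardedTensor) -> None:
        # A real strategy would all-gather shards from every rank here
        # and restore the original shape.
        st.is_sharded = False

strategy = NaiveShardStrategy(rank=0, world_size=4)
st = ShardedTensor(torch.randn(4, 4))
strategy.shard(st)  # payload is now a 4-element slice
```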
ver217
3092317b80
polish code
2022-03-11 15:50:28 +08:00
ver217
36f9a74ab2
fix sharded param hook and unit test
2022-03-11 15:50:28 +08:00
ver217
001ca624dd
impl shard optim v2 and add unit test
2022-03-11 15:50:28 +08:00
Jiarui Fang
74f77e314b
[zero] a shard strategy in granularity of tensor ( #307 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
80364c7686
[zero] sharded tensor ( #305 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
2022-03-11 15:50:28 +08:00
Jie Zhu
d344689274
[profiler] primary memory tracer
2022-03-11 15:50:28 +08:00
FrankLeeeee
dfc3fafe89
update unit testing CI rules
2022-03-11 15:50:28 +08:00
FrankLeeeee
bbbfe9b2c9
added compatibility CI and options for release CI
2022-03-11 15:50:28 +08:00
FrankLeeeee
115bcc0b41
added PyPI publication CI and removed formatting CI
2022-03-11 15:50:28 +08:00
ver217
b105371ace
rename sharded adam to sharded optim v2
2022-03-11 15:50:28 +08:00
ver217
70814dc22f
fix master params dtype
2022-03-11 15:50:28 +08:00
ver217
795210dd99
add fp32 master params in sharded adam
2022-03-11 15:50:28 +08:00
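This commit adds the classic mixed-precision pattern: the model holds fp16 params, the optimizer steps on fp32 master copies, and the result is copied back to fp16. A minimal sketch of the pattern, with plain SGD standing in for the sharded Adam:

```python
import torch

params_fp16 = [torch.randn(8, dtype=torch.float16, requires_grad=True)]
master_fp32 = [p.detach().float() for p in params_fp16]  # fp32 master copies
opt = torch.optim.SGD(master_fp32, lr=0.1)

# after backward(): move fp16 grads onto the fp32 masters
for p16, p32 in zip(params_fp16, master_fp32):
    p16.grad = torch.randn_like(p16)  # stand-in for a real gradient
    p32.grad = p16.grad.float()

opt.step()  # the update happens in fp32

# copy updated master weights back into the fp16 model params
with torch.no_grad():
    for p16, p32 in zip(params_fp16, master_fp32):
        p16.copy_(p32)
```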
ver217
a109225bc2
add sharded adam
2022-03-11 15:50:28 +08:00
Jiarui Fang
8f74fbd9c9
polish license ( #300 )
...
* init shard param from shape tuple
* add more unit tests for shard param
2022-03-11 15:50:28 +08:00
Jiarui Fang
e17e92c54d
Polish sharded parameter ( #297 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add more unit tests to sharded param
2022-03-11 15:50:28 +08:00
ver217
7aef75ca42
[zero] add sharded grad and refactor grad hooks for ShardedModel ( #287 )
2022-03-11 15:50:28 +08:00
Frank Lee
9afb5c8b2d
fixed typo in ShardParam ( #294 )
2022-03-11 15:50:28 +08:00
Frank Lee
27155b8513
added unit test for sharded optimizer ( #293 )
...
* added unit test for sharded optimizer
* refactor for elegance
2022-03-11 15:50:28 +08:00
Frank Lee
e17e54e32a
added buffer sync to naive amp model wrapper ( #291 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
8d653af408
add a common util for hooks registered on parameters ( #292 )
2022-03-11 15:50:28 +08:00
Jie Zhu
f867365aba
bug fix: pass hook_list to engine ( #273 )
...
* bug fix: pass hook_list to engine
* change parameter name
2022-03-11 15:50:28 +08:00
Jiarui Fang
5a560a060a
Feature/zero ( #279 )
...
* add zero1 (#209 )
* add zero1
* add test zero1
* update zero stage 1 develop (#212 )
* Implement naive zero3 (#240 )
* naive zero3 works well
* add zero3 param manager
* add TODOs in comments
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* fix bugs of hook and add unit tests (#252 )
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* polish code and add state dict hook
* fix bug
* update unit test
* refactor reconstructed zero code
* clip_grad support zero3 and add unit test
* add unit test for Zero3ParameterManager
* [WIP] initialize the shard param class
* [WIP] Yet another sharded model implementation (#274 )
* [WIP] initialize the shard param class
* [WIP] Yet another implementation of ShardedModel, using a better hook method.
* torch.concat -> torch.cat
* fix test_zero_level_1.py::test_zero_level_1 unit test
* remove deepspeed implementation and refactor for the reconstructed zero module
* polish zero dp unittests
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2022-03-11 15:50:28 +08:00
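For orientation on what this PR reconstructs: the core ZeRO idea is that each rank keeps optimizer state for only its 1/world_size slice of the flattened parameters, updates that slice, and the updated slices are then all-gathered. A single-process sketch of stage 1 (names and structure are ours, not the PR's):

```python
import torch

world_size, lr = 4, 0.1
flat = torch.randn(16)   # all parameters flattened together
grads = torch.randn(16)

shards = list(flat.chunk(world_size))    # rank r owns shards[r]
gshards = list(grads.chunk(world_size))

# in real ZeRO each rank runs only its own iteration of this loop,
# holding optimizer state (momentum etc.) for just its shard
for rank in range(world_size):
    shards[rank] = shards[rank] - lr * gshards[rank]

updated = torch.cat(shards)  # stands in for the all-gather step
```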
binmakeswell
08eccfe681
add community group and update issue template ( #271 )
2022-03-11 15:50:28 +08:00
Sze-qq
3312d716a0
update experimental visualization ( #253 )
2022-03-11 15:50:28 +08:00
binmakeswell
753035edd3
add Chinese README
2022-03-11 15:50:28 +08:00
1SAA
82023779bb
Added TPExpert for special situations
2022-03-11 15:50:28 +08:00
HELSON
36b8477228
Fixed parameter initialization in FFNExpert ( #251 )
2022-03-11 15:50:28 +08:00
アマデウス
e13293bb4c
fixed CI dataset directory; fixed import error of 2.5d accuracy ( #255 )
2022-03-11 15:50:28 +08:00
1SAA
219df6e685
Optimized MoE layer and fixed some bugs;
...
Decreased MoE tests;
Added FFNExperts and ViTMoE model
2022-03-11 15:50:28 +08:00
zbian
3dba070580
fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial
2022-03-11 15:50:28 +08:00
ver217
24f8583cc4
update setup info ( #233 )
2022-03-11 15:50:28 +08:00
github-actions
b9f8521f8c
Automated submodule synchronization
2022-02-15 11:35:37 +08:00
Frank Lee
f5ca88ec97
fixed apex import ( #227 )
2022-02-15 11:31:13 +08:00
Frank Lee
eb3fda4c28
updated readme and change log ( #224 )
2022-02-15 11:31:13 +08:00
ver217
578ea0583b
update setup and workflow ( #222 )
2022-02-15 11:31:13 +08:00
Frank Lee
3a1a9820b0
fixed mkdir conflict and align yapf config with flake ( #220 )
2022-02-15 11:31:13 +08:00
Frank Lee
65e72983dc
added flake8 config ( #219 )
2022-02-15 11:31:13 +08:00
アマデウス
9ee197d0e9
moved env variables to global variables; ( #215 )
...
added branch context;
added vocab parallel layers;
moved split_batch from load_batch to tensor parallel embedding layers;
updated gpt model;
updated unit test cases;
fixed few collective communicator bugs
2022-02-15 11:31:13 +08:00
Frank Lee
b82d60be02
updated github action for develop branch ( #214 )
2022-02-15 11:31:13 +08:00
BoxiangW
7d15ec7fe2
Update github actions ( #205 )
2022-02-04 15:04:55 +08:00