Commit Graph

56 Commits (001ca624dd4b96c68489827826b9730a074ac704)

Author SHA1 Message Date
ver217 001ca624dd impl shard optim v2 and add unit test 2022-03-11 15:50:28 +08:00
Jiarui Fang 74f77e314b [zero] a shard strategy in granularity of tensor (#307) 2022-03-11 15:50:28 +08:00
Jiarui Fang 80364c7686 [zero] sharded tensor (#305)
* init shard param from shape tuple

* add more unit tests for shard param

* add set_payload method for ShardedParam

* [zero] add sharded tensor class

* polish code
2022-03-11 15:50:28 +08:00
Jie Zhu d344689274 [profiler] primary memory tracer 2022-03-11 15:50:28 +08:00
ver217 b105371ace rename sharded adam to sharded optim v2 2022-03-11 15:50:28 +08:00
ver217 70814dc22f fix master params dtype 2022-03-11 15:50:28 +08:00
ver217 795210dd99 add fp32 master params in sharded adam 2022-03-11 15:50:28 +08:00
ver217 a109225bc2 add sharded adam 2022-03-11 15:50:28 +08:00
Jiarui Fang e17e92c54d Polish sharded parameter (#297)
* init shard param from shape tuple

* add more unit tests for shard param

* add more unit tests for sharded param
2022-03-11 15:50:28 +08:00
ver217 7aef75ca42 [zero] add sharded grad and refactor grad hooks for ShardedModel (#287) 2022-03-11 15:50:28 +08:00
Frank Lee 9afb5c8b2d fixed typo in ShardParam (#294) 2022-03-11 15:50:28 +08:00
Frank Lee e17e54e32a added buffer sync to naive amp model wrapper (#291) 2022-03-11 15:50:28 +08:00
Jiarui Fang 8d653af408 add a common util for hooks registered on parameter. (#292) 2022-03-11 15:50:28 +08:00
Jie Zhu f867365aba bug fix: pass hook_list to engine (#273)
* bug fix: pass hook_list to engine

* change parameter name
2022-03-11 15:50:28 +08:00
Jiarui Fang 5a560a060a Feature/zero (#279)
* add zero1 (#209)

* add zero1

* add test zero1

* update zero stage 1 develop (#212)

* Implement naive zero3 (#240)

* naive zero3 works well

* add zero3 param manager

* add TODOs in comments

* add gather full param ctx

* fix sub module streams

* add offload

* fix bugs of hook and add unit tests

* fix bugs of hook and add unit tests (#252)

* add gather full param ctx

* fix sub module streams

* add offload

* fix bugs of hook and add unit tests

* polish code and add state dict hook

* fix bug

* update unit test

* refactor reconstructed zero code

* clip_grad support zero3 and add unit test

* add unit test for Zero3ParameterManager

* [WIP] initialize the shard param class

* [WIP] Yet another sharded model implementation (#274)

* [WIP] initialize the shard param class

* [WIP] Yet another implementation of shardModel, using a better hook method.

* torch.concat -> torch.cat

* fix test_zero_level_1.py::test_zero_level_1 unit test

* remove deepspeed implementation and refactor for the reconstructed zero module

* polish zero dp unittests

Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2022-03-11 15:50:28 +08:00
1SAA 82023779bb Added TPExpert for a special situation 2022-03-11 15:50:28 +08:00
HELSON 36b8477228 Fixed parameter initialization in FFNExpert (#251) 2022-03-11 15:50:28 +08:00
アマデウス e13293bb4c fixed CI dataset directory; fixed import error of 2.5d accuracy (#255) 2022-03-11 15:50:28 +08:00
1SAA 219df6e685 Optimized MoE layer and fixed some bugs;
Decreased MoE tests;

Added FFNExperts and ViTMoE model
2022-03-11 15:50:28 +08:00
zbian 3dba070580 fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial 2022-03-11 15:50:28 +08:00
Frank Lee f5ca88ec97 fixed apex import (#227) 2022-02-15 11:31:13 +08:00
Frank Lee 3a1a9820b0 fixed mkdir conflict and aligned yapf config with flake (#220) 2022-02-15 11:31:13 +08:00
アマデウス 9ee197d0e9 moved env variables to global variables; (#215)
added branch context;
added vocab parallel layers;
moved split_batch from load_batch to tensor parallel embedding layers;
updated gpt model;
updated unit test cases;
fixed a few collective communicator bugs
2022-02-15 11:31:13 +08:00
Frank Lee 812357d63c fixed utils docstring and added example to readme (#200) 2022-02-03 11:37:17 +08:00
Frank Lee 765db512b5 fixed ddp bug on torch 1.8 (#194) 2022-01-28 15:14:04 +08:00
Jiarui Fang 569357fea0 add pytorch hooks (#179)
* add pytorch hooks
fix #175

* remove licenses in src code

* add gpu memory tracer

* replacing print with logger in ophooks.
2022-01-25 22:20:54 +08:00
ver217 708404d5f8 fix pipeline forward return tensors (#176) 2022-01-21 15:46:02 +08:00
HELSON 0f8c7f9804 Fixed docstring in colossalai (#171) 2022-01-21 10:44:30 +08:00
Frank Lee e2089c5c15 adapted for sequence parallel (#163) 2022-01-20 13:44:51 +08:00
puck_WCR 9473a1b9c8 AMP docstring/markdown update (#160) 2022-01-18 18:33:36 +08:00
Frank Lee f3802d6b06 fixed jit default setting (#154) 2022-01-18 13:37:20 +08:00
ver217 7bf1e98b97 pipeline last stage supports multi output (#151) 2022-01-17 15:57:47 +08:00
ver217 f68eddfb3d refactor kernel (#142) 2022-01-13 16:47:17 +08:00
BoxiangW 4a3d3446b0 Update layer integration documentation (#108)
Update the documentation of layer integration

Update _log_hook.py

Update _operation.py
2022-01-10 18:05:58 +08:00
ver217 9ef05ed1fc try import deepspeed when using zero (#130) 2022-01-07 17:24:57 +08:00
HELSON dceae85195 Added MoE parallel (#127) 2022-01-07 15:08:36 +08:00
ver217 293fb40c42 add scatter/gather optim for pipeline (#123) 2022-01-07 13:22:22 +08:00
Jiarui Fang 2c0c85d3d3 fix a bug in timer (#114) 2022-01-05 16:07:06 +08:00
ver217 7904baf6e1 fix layers/schedule for hybrid parallelization (#111) (#112) 2022-01-04 20:52:31 +08:00
ver217 a951bc6089 update default logger (#100) (#101) 2022-01-04 20:03:26 +08:00
ver217 96780e6ee4 Optimize pipeline schedule (#94)
* add pipeline shared module wrapper and update load batch

* added model parallel process group for amp and clip grad (#86)

* added model parallel process group for amp and clip grad

* update amp and clip with model parallel process group

* remove pipeline_prev/next group (#88)

* micro batch offload

* optimize pipeline gpu memory usage

* pipeline can receive tensor shape (#93)

* optimize pipeline gpu memory usage

* fix grad accumulation step counter

* rename classes and functions

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2021-12-30 15:56:46 +08:00
アマデウス 01a80cd86d Hotfix/Colossalai layers (#92)
* optimized 1d layer apis; reorganized nn.layer modules; fixed tests

* fixed 2.5d runtime issue

* reworked split batch, now called in trainer.schedule.load_batch

Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-29 23:32:10 +08:00
アマデウス 0fedef4f3c Layer integration (#83)
* integrated parallel layers for ease of building models

* integrated 2.5d layers

* cleaned codes and unit tests

* added log metric by step hook; updated imagenet benchmark; fixed some bugs

* reworked initialization; cleaned codes

Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-27 15:04:32 +08:00
shenggan 5c3843dc98 add colossalai kernel module (#55) 2021-12-21 12:19:52 +08:00
ver217 8f02a88db2 add interleaved pipeline, fix naive amp and update pipeline model initializer (#80) 2021-12-20 23:26:19 +08:00
Frank Lee 91c327cb44 fixed zero level 3 dtype bug (#76) 2021-12-20 17:00:53 +08:00
HELSON 632e622de8 overlap computation and communication in 2d operations (#75) 2021-12-16 16:05:15 +08:00
Frank Lee cd9c28e055 added CI for unit testing (#69) 2021-12-16 10:32:08 +08:00
Frank Lee 35813ed3c4 update examples and sphinx docs for the new api (#63) 2021-12-13 22:07:01 +08:00
ver217 7d3711058f fix zero3 fp16 and add zero3 model context (#62) 2021-12-10 17:48:50 +08:00