HELSON
7c079d9c33
[hotfix] fixed bugs in ShardStrategy and PcieProfiler ( #394 )
2022-03-11 18:12:46 +08:00
Frank Lee
1e4bf85cdb
fixed bug in activation checkpointing test ( #387 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
3af13a2c3e
[zero] polish ShardedOptimV2 unittest ( #385 )
* place params on cpu after zero init context
* polish code
* bucketized cpu gpu tensor transfer
* find a bug in sharded optim unittest
* add offload unittest for ShardedOptimV2.
* polish code and make it more robust
2022-03-11 15:50:28 +08:00
Jiang Zhuo
5a4a3b77d9
fix format ( #376 )
2022-03-11 15:50:28 +08:00
LuGY
de46450461
Added activation offload ( #331 )
* Added activation offload
* Fixed the import bug and used pytest
2022-03-11 15:50:28 +08:00
Jiarui Fang
272ebfb57d
[bug] shard param during initializing the ShardedModelV2 ( #381 )
2022-03-11 15:50:28 +08:00
HELSON
8c18eb0998
[profiler] Fixed bugs in CommProfiler and PcieProfiler ( #377 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
b5f43acee3
[zero] find missing code ( #378 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
6b6002962a
[zero] zero init context collects numel of model ( #375 )
2022-03-11 15:50:28 +08:00
HELSON
1ed7c24c02
Added PCIe profiler to detect data transmission ( #373 )
2022-03-11 15:50:28 +08:00
jiaruifang
d9217e1960
Revert "[zero] bucketized tensor cpu gpu copy ( #368 )"
This reverts commit bef05489b6.
2022-03-11 15:50:28 +08:00
RichardoLuo
8539898ec6
flake8 style change ( #363 )
2022-03-11 15:50:28 +08:00
Kai Wang (Victor Kai)
53bb3bcc0a
fix format ( #362 )
2022-03-11 15:50:28 +08:00
ziyu huang
a77d73f22b
fix format parallel_context.py ( #359 )
Co-authored-by: huangziyu <202476410arsmart@gmail.com>
2022-03-11 15:50:28 +08:00
Zangwei
c695369af0
fix format constants.py ( #358 )
2022-03-11 15:50:28 +08:00
Yuer867
4a0f8c2c50
fix format parallel_2p5d ( #357 )
2022-03-11 15:50:28 +08:00
Liang Bowen
7eb87f516d
flake8 style ( #352 )
2022-03-11 15:50:28 +08:00
Xu Kai
54ee8d1254
Fix/format colossalai/engine/paramhooks/ ( #350 )
2022-03-11 15:50:28 +08:00
Maruyama_Aya
e83970e3dc
fix format ColossalAI/colossalai/context/process_group_initializer
2022-03-11 15:50:28 +08:00
yuxuan-lou
3b88eb2259
Flake8 code restyle
2022-03-11 15:50:28 +08:00
xuqifan897
148207048e
Qifan formatted file ColossalAI/colossalai/nn/layer/parallel_1d/layers.py ( #342 )
2022-03-11 15:50:28 +08:00
Cautiousss
3a51d909af
fix format ( #332 )
Co-authored-by: 何晓昕 <cautious@r-205-106-25-172.comp.nus.edu.sg>
2022-03-11 15:50:28 +08:00
DouJS
cbb6436ff0
fix format for dir-[parallel_3d] ( #333 )
2022-03-11 15:50:28 +08:00
ExtremeViscent
eaac03ae1d
[format] fixed format for kernel/cuda_native code ( #335 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
00670c870e
[zero] bucketized tensor cpu gpu copy ( #368 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
44e4891f57
[zero] able to place params on cpu after zero init context ( #365 )
* place params on cpu after zero init context
* polish code
2022-03-11 15:50:28 +08:00
ver217
253e54d98a
fix grad shape
2022-03-11 15:50:28 +08:00
Jiarui Fang
ea2872073f
[zero] global model data memory tracer ( #360 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
cb34cd384d
[test] polish zero-related unittest ( #351 )
2022-03-11 15:50:28 +08:00
HELSON
534e0bb118
Fixed import bug for no-tensorboard environment ( #354 )
2022-03-11 15:50:28 +08:00
HELSON
c57e089824
[profile] added example for ProfilerContext ( #349 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
10e2826426
move async memory to an individual directory ( #345 )
2022-03-11 15:50:28 +08:00
HELSON
425bb0df3f
Added Profiler Context to manage all profilers ( #340 )
2022-03-11 15:50:28 +08:00
ver217
d0ae0f2215
[zero] update sharded optim v2 ( #334 )
2022-03-11 15:50:28 +08:00
jiaruifang
5663616921
polish code
2022-03-11 15:50:28 +08:00
jiaruifang
7977422aeb
add bert for unittest; sharded model is not able to pass the bert case
2022-03-11 15:50:28 +08:00
Frank Lee
3d5d64bd10
refactored grad scaler ( #338 )
2022-03-11 15:50:28 +08:00
Frank Lee
6a3188167c
set criterion as optional in colossalai initialize ( #336 )
2022-03-11 15:50:28 +08:00
Jie Zhu
3213554cc2
[profiler] add adaptive sampling to memory profiler ( #330 )
* fix merge conflict
modify unit test
remove unnecessary log info
reformat file
* remove unused module
* remove unnecessary sync function
* change doc string style from Google to Sphinx
2022-03-11 15:50:28 +08:00
ver217
1388671699
[zero] Update sharded model v2 using sharded param v2 ( #323 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
11bddb6e55
[zero] update zero context init with the updated test utils ( #327 )
2022-03-11 15:50:28 +08:00
HELSON
4f26fabe4f
fixed strings in profiler outputs ( #325 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
de0468c7a8
[zero] zero init context ( #321 )
* add zero init context
* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2
* polish code
2022-03-11 15:50:28 +08:00
1SAA
73bff11288
Added profiler for communication operations
Fixed bug for learning rate scheduler
2022-03-11 15:50:28 +08:00
LuGY
a3269de5c9
[zero] cpu adam kernel ( #288 )
* Added CPU Adam
* finished the cpu adam
* updated the license
* delete useless parameters, removed resnet
* modified the method of cpu adam unittest
* deleted some useless codes
* removed useless codes
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-03-11 15:50:28 +08:00
Jiarui Fang
90d3aef62c
[zero] yet another improved sharded param ( #311 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
c9e7d9582d
[zero] polish shard strategy ( #310 )
* init shard param from shape tuple
* add more unittests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
* add shard strategy
* move shard and gather logic to shard strategy from shard tensor.
* polish code
2022-03-11 15:50:28 +08:00
ver217
3092317b80
polish code
2022-03-11 15:50:28 +08:00
ver217
36f9a74ab2
fix sharded param hook and unit test
2022-03-11 15:50:28 +08:00
ver217
001ca624dd
impl shard optim v2 and add unit test
2022-03-11 15:50:28 +08:00
Jiarui Fang
74f77e314b
[zero] a shard strategy in granularity of tensor ( #307 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
80364c7686
[zero] sharded tensor ( #305 )
* init shard param from shape tuple
* add more unittests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
2022-03-11 15:50:28 +08:00
Jie Zhu
d344689274
[profiler] primary memory tracer
2022-03-11 15:50:28 +08:00
ver217
b105371ace
rename sharded adam to sharded optim v2
2022-03-11 15:50:28 +08:00
ver217
70814dc22f
fix master params dtype
2022-03-11 15:50:28 +08:00
ver217
795210dd99
add fp32 master params in sharded adam
2022-03-11 15:50:28 +08:00
ver217
a109225bc2
add sharded adam
2022-03-11 15:50:28 +08:00
Jiarui Fang
e17e92c54d
Polish sharded parameter ( #297 )
* init shard param from shape tuple
* add more unittests for shard param
* add more unittests to sharded param
2022-03-11 15:50:28 +08:00
ver217
7aef75ca42
[zero] add sharded grad and refactor grad hooks for ShardedModel ( #287 )
2022-03-11 15:50:28 +08:00
Frank Lee
9afb5c8b2d
fixed typo in ShardParam ( #294 )
2022-03-11 15:50:28 +08:00
Frank Lee
e17e54e32a
added buffer sync to naive amp model wrapper ( #291 )
2022-03-11 15:50:28 +08:00
Jiarui Fang
8d653af408
add a common util for hooks registered on parameter. ( #292 )
2022-03-11 15:50:28 +08:00
Jie Zhu
f867365aba
bug fix: pass hook_list to engine ( #273 )
* bug fix: pass hook_list to engine
* change parameter name
2022-03-11 15:50:28 +08:00
Jiarui Fang
5a560a060a
Feature/zero ( #279 )
* add zero1 (#209 )
* add zero1
* add test zero1
* update zero stage 1 develop (#212 )
* Implement naive zero3 (#240 )
* naive zero3 works well
* add zero3 param manager
* add TODOs in comments
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* fix bugs of hook and add unit tests (#252 )
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* polish code and add state dict hook
* fix bug
* update unit test
* refactor reconstructed zero code
* clip_grad support zero3 and add unit test
* add unit test for Zero3ParameterManager
* [WIP] initialize the shard param class
* [WIP] Yet another sharded model implementation (#274 )
* [WIP] initialize the shard param class
* [WIP] Yet another implementation of ShardedModel, using a better hook method.
* torch.concat -> torch.cat
* fix test_zero_level_1.py::test_zero_level_1 unittest
* remove deepspeed implementation and refactor for the reconstructed zero module
* polish zero dp unittests
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2022-03-11 15:50:28 +08:00
1SAA
82023779bb
Added TPExpert for special situations
2022-03-11 15:50:28 +08:00
HELSON
36b8477228
Fixed parameter initialization in FFNExpert ( #251 )
2022-03-11 15:50:28 +08:00
アマデウス
e13293bb4c
fixed CI dataset directory; fixed import error of 2.5d accuracy ( #255 )
2022-03-11 15:50:28 +08:00
1SAA
219df6e685
Optimized MoE layer and fixed some bugs;
Reduced MoE tests;
Added FFNExperts and ViTMoE model
2022-03-11 15:50:28 +08:00
zbian
3dba070580
fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial
2022-03-11 15:50:28 +08:00
Frank Lee
f5ca88ec97
fixed apex import ( #227 )
2022-02-15 11:31:13 +08:00
Frank Lee
3a1a9820b0
fixed mkdir conflict and aligned yapf config with flake8 ( #220 )
2022-02-15 11:31:13 +08:00
アマデウス
9ee197d0e9
moved env variables to global variables; ( #215 )
added branch context;
added vocab parallel layers;
moved split_batch from load_batch to tensor parallel embedding layers;
updated gpt model;
updated unit test cases;
fixed few collective communicator bugs
2022-02-15 11:31:13 +08:00
Frank Lee
812357d63c
fixed utils docstring and added example to readme ( #200 )
2022-02-03 11:37:17 +08:00
Frank Lee
765db512b5
fixed ddp bug on torch 1.8 ( #194 )
2022-01-28 15:14:04 +08:00
Jiarui Fang
569357fea0
add pytorch hooks ( #179 )
* add pytorch hooks
fix #175
* remove licenses in src code
* add gpu memory tracer
* replacing print with logger in ophooks.
2022-01-25 22:20:54 +08:00
ver217
708404d5f8
fix pipeline forward return tensors ( #176 )
2022-01-21 15:46:02 +08:00
HELSON
0f8c7f9804
Fixed docstring in colossalai ( #171 )
2022-01-21 10:44:30 +08:00
Frank Lee
e2089c5c15
adapted for sequence parallel ( #163 )
2022-01-20 13:44:51 +08:00
puck_WCR
9473a1b9c8
AMP docstring/markdown update ( #160 )
2022-01-18 18:33:36 +08:00
Frank Lee
f3802d6b06
fixed jit default setting ( #154 )
2022-01-18 13:37:20 +08:00
ver217
7bf1e98b97
pipeline last stage supports multi output ( #151 )
2022-01-17 15:57:47 +08:00
ver217
f68eddfb3d
refactor kernel ( #142 )
2022-01-13 16:47:17 +08:00
BoxiangW
4a3d3446b0
Update layer integration documentation ( #108 )
Update the documentation of layer integration
Update _log_hook.py
Update _operation.py
2022-01-10 18:05:58 +08:00
ver217
9ef05ed1fc
try import deepspeed when using zero ( #130 )
2022-01-07 17:24:57 +08:00
HELSON
dceae85195
Added MoE parallel ( #127 )
2022-01-07 15:08:36 +08:00
ver217
293fb40c42
add scatter/gather optim for pipeline ( #123 )
2022-01-07 13:22:22 +08:00
Jiarui Fang
2c0c85d3d3
fix a bug in timer ( #114 )
2022-01-05 16:07:06 +08:00
ver217
7904baf6e1
fix layers/schedule for hybrid parallelization ( #111 ) ( #112 )
2022-01-04 20:52:31 +08:00
ver217
a951bc6089
update default logger ( #100 ) ( #101 )
2022-01-04 20:03:26 +08:00
ver217
96780e6ee4
Optimize pipeline schedule ( #94 )
* add pipeline shared module wrapper and update load batch
* added model parallel process group for amp and clip grad (#86 )
* added model parallel process group for amp and clip grad
* update amp and clip with model parallel process group
* remove pipeline_prev/next group (#88 )
* micro batch offload
* optimize pipeline gpu memory usage
* pipeline can receive tensor shape (#93 )
* optimize pipeline gpu memory usage
* fix grad accumulation step counter
* rename classes and functions
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2021-12-30 15:56:46 +08:00
アマデウス
01a80cd86d
Hotfix/Colossalai layers ( #92 )
* optimized 1d layer apis; reorganized nn.layer modules; fixed tests
* fixed 2.5d runtime issue
* reworked split batch, now called in trainer.schedule.load_batch
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-29 23:32:10 +08:00
アマデウス
0fedef4f3c
Layer integration ( #83 )
* integrated parallel layers for ease of building models
* integrated 2.5d layers
* cleaned codes and unit tests
* added log metric by step hook; updated imagenet benchmark; fixed some bugs
* reworked initialization; cleaned codes
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-27 15:04:32 +08:00
shenggan
5c3843dc98
add colossalai kernel module ( #55 )
2021-12-21 12:19:52 +08:00
ver217
8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer ( #80 )
2021-12-20 23:26:19 +08:00
Frank Lee
91c327cb44
fixed zero level 3 dtype bug ( #76 )
2021-12-20 17:00:53 +08:00
HELSON
632e622de8
overlap computation and communication in 2d operations ( #75 )
2021-12-16 16:05:15 +08:00
Frank Lee
cd9c28e055
added CI for unit testing ( #69 )
2021-12-16 10:32:08 +08:00
Frank Lee
35813ed3c4
update examples and sphinx docs for the new api ( #63 )
2021-12-13 22:07:01 +08:00
ver217
7d3711058f
fix zero3 fp16 and add zero3 model context ( #62 )
2021-12-10 17:48:50 +08:00
Frank Lee
9a0466534c
update markdown docs (english) ( #60 )
2021-12-10 14:37:33 +08:00