Commit Graph

1618 Commits (50e5602c2d6c8e25ad544cbecc38649e5257e7b8)

Author SHA1 Message Date
LuGY 105c5301c3
[zero] added hybrid adam, removed loss scale in adam (#527)
* [zero] added hybrid adam, removed loss scale of adam

* remove useless code
2022-03-25 18:03:54 +08:00
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522) 2022-03-25 18:03:32 +08:00
Frank Lee 3601b2bad0
[test] fixed rerun_on_exception and adapted test cases (#487) 2022-03-25 17:25:12 +08:00
Jiarui Fang 4d322b79da
[refactor] remove old zero code (#517) 2022-03-25 14:54:39 +08:00
LuGY 6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497) 2022-03-25 14:15:53 +08:00
Jiarui Fang 920c5889a7
[zero] add colo move inline (#521) 2022-03-25 14:02:55 +08:00
ver217 7be397ca9c
[log] polish disable_existing_loggers (#519) 2022-03-25 12:30:55 +08:00
Jiarui Fang 0bebda6ea5
[zero] fix init device bug in zero init context unittest (#516) 2022-03-25 12:24:18 +08:00
Jiarui Fang 7ef3507ace
[zero] show model data cuda memory usage after zero context init. (#515) 2022-03-25 11:23:35 +08:00
ver217 a2e61d61d4
[zero] zero init ctx enable rm_torch_payload_on_the_fly (#512)
* enable rm_torch_payload_on_the_fly

* polish docstr
2022-03-24 23:44:00 +08:00
Jiarui Fang 81145208d1
[install] run without rich (#513) 2022-03-24 17:39:50 +08:00
Jiarui Fang bca0c49a9d
[zero] use colo model data api in optimv2 (#511) 2022-03-24 17:19:34 +08:00
Jiarui Fang 9330be0f3c
[memory] set cuda mem frac (#506) 2022-03-24 16:57:13 +08:00
Jiarui Fang 0035b7be07
[memory] add model data tensor moving api (#503) 2022-03-24 14:29:41 +08:00
Jiarui Fang a445e118cf
[polish] polish singleton and global context (#500) 2022-03-23 18:03:39 +08:00
ver217 9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard (#495)
* sharded model supports reuse fp16 shard

* rename variable

* polish code

* polish code

* polish code
2022-03-23 14:59:59 +08:00
HELSON f24b5ed201
[MOE] remove old MoE legacy (#493) 2022-03-22 17:37:16 +08:00
ver217 c4c02424f3
[zero] sharded model manages ophooks individually (#492) 2022-03-22 17:33:20 +08:00
HELSON c9023d4078
[MOE] support PR-MOE (#488) 2022-03-22 16:48:22 +08:00
ver217 a9ecb4b244
[zero] polish sharded optimizer v2 (#490) 2022-03-22 15:53:48 +08:00
ver217 62b0a8d644
[zero] sharded optim support hybrid cpu adam (#486)
* sharded optim support hybrid cpu adam

* update unit test

* polish docstring
2022-03-22 14:56:59 +08:00
Jiarui Fang b334822163
[zero] polish sharded param name (#484)
* [zero] polish sharded param name

* polish code

* polish

* polish code

* polish

* polish

* polish
2022-03-22 14:36:16 +08:00
HELSON d7ea63992b
[MOE] add FP32LinearGate for MOE in NaiveAMP context (#480) 2022-03-22 10:50:20 +08:00
Jiarui Fang 65c0f380c2
[format] polish name format for MOE (#481) 2022-03-21 23:19:47 +08:00
ver217 8d3250d74b
[zero] ZeRO supports pipeline parallel (#477) 2022-03-21 16:55:37 +08:00
Frank Lee 83a847d058
[test] added rerun on exception for testing (#475)
* [test] added rerun on exception function

* polish code
2022-03-21 15:51:57 +08:00
HELSON 7544347145
[MOE] add unittest for MOE experts layout, gradient handler and kernel (#469) 2022-03-21 13:35:04 +08:00
ver217 3cb3fc275e
zero init ctx receives a dp process group (#471) 2022-03-21 11:18:55 +08:00
HELSON aff9d354f7
[MOE] polish moe_env (#467) 2022-03-19 15:36:25 +08:00
HELSON bccbc15861
[MOE] changed parallelmode to dist process group (#460) 2022-03-19 13:46:29 +08:00
ver217 fc8e6db005
[doc] Update docstring for ZeRO (#459)
* polish sharded model docstr

* polish sharded optim docstr

* polish zero docstr

* polish shard strategy docstr
2022-03-18 16:48:20 +08:00
HELSON 84fd7c1d4d
add moe context, moe utilities and refactor gradient handler (#455) 2022-03-18 16:38:32 +08:00
ver217 a241f61b34
[zero] Update initialize for ZeRO (#458)
* polish code

* shard strategy receive pg in shard() / gather()

* update zero engine

* polish code
2022-03-18 16:18:31 +08:00
ver217 642846d6f9
update sharded optim and fix zero init ctx (#457) 2022-03-18 15:44:47 +08:00
Jiarui Fang e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" (#456)
* Revert "polish code"

This reverts commit 8cf7ff08cf.

* Revert "rename variables"

This reverts commit e99af94ab8.

* Revert "remove surplus imports"

This reverts commit 46add4a5c5.

* Revert "update sharded optim and fix zero init ctx"

This reverts commit 57567ee768.
2022-03-18 15:22:43 +08:00
ver217 e99af94ab8 rename variables 2022-03-18 14:25:25 +08:00
ver217 57567ee768 update sharded optim and fix zero init ctx 2022-03-18 14:25:25 +08:00
Jiarui Fang 0fcfb1e00d
[test] make zero engine test really work (#447) 2022-03-17 17:24:25 +08:00
Jiarui Fang 237d08e7ee
[zero] hybrid cpu adam (#445) 2022-03-17 15:05:41 +08:00
Frank Lee b72b8445c6
optimized context test time consumption (#446) 2022-03-17 14:40:52 +08:00
Jiarui Fang 496cbb0760
[hotfix] fix initialize bug with zero (#442) 2022-03-17 13:16:22 +08:00
Jiarui Fang 640a6cd304
[refactor] refactor the initialize method for the new zero design (#431) 2022-03-16 19:29:37 +08:00
Frank Lee bffd85bf34
added testing module (#435) 2022-03-16 17:20:05 +08:00
HELSON dbdc9a7783
added Multiply Jitter and capacity factor eval for MOE (#434) 2022-03-16 16:47:44 +08:00
Frank Lee b03b3ae99c
fixed mem monitor device (#433)
fixed mem monitor device
2022-03-16 15:25:02 +08:00
Frank Lee 14a7094243
fixed fp16 optimizer none grad bug (#432) 2022-03-16 14:35:46 +08:00
ver217 fce9432f08 sync before creating empty grad 2022-03-16 14:24:09 +08:00
ver217 ea6905a898 free param.grad 2022-03-16 14:24:09 +08:00
ver217 9506a8beb2 use double buffer to handle grad 2022-03-16 14:24:09 +08:00
Jiarui Fang 54229cd33e
[log] better logging display with rich (#426)
* better logger using rich

* remove deepspeed in zero requirements
2022-03-16 09:51:15 +08:00
HELSON 3f70a2b12f
removed noisy function during evaluation of MoE router (#419) 2022-03-15 12:06:09 +08:00
Jiarui Fang adebb3e041
[zero] cuda margin space for OS (#418) 2022-03-15 12:02:19 +08:00
Jiarui Fang 56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER (#417) 2022-03-15 11:29:46 +08:00
Jiarui Fang 23ba3fc450
[zero] refactor ShardedOptimV2 init method (#416) 2022-03-15 10:45:55 +08:00
Frank Lee e79ea44247
[fp16] refactored fp16 optimizer (#392) 2022-03-15 10:05:38 +08:00
Jiarui Fang 21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system (#395) 2022-03-14 22:05:30 +08:00
Jiarui Fang 370f567e7d
[zero] new interface for ShardedOptimv2 (#406) 2022-03-14 20:48:41 +08:00
LuGY a9c27be42e
Added tensor detector (#393)
* Added tensor detector

* Added the - states

* Allowed changing include_cpu when calling detect()
2022-03-14 18:01:46 +08:00
1SAA 907ac4a2dc fixed error when no collective communication in CommProfiler 2022-03-14 17:21:00 +08:00
Frank Lee 2fe68b359a
Merge pull request #403 from ver217/feature/shard-strategy
[zero] Add bucket tensor shard strategy
2022-03-14 16:29:28 +08:00
HELSON dfd0363f68
polished output format for communication profiler and pcie profiler (#404)
fixed typing error
2022-03-14 16:07:45 +08:00
ver217 63469c0f91 polish code 2022-03-14 15:48:55 +08:00
ver217 88804aee49 add bucket tensor shard strategy 2022-03-14 14:48:32 +08:00
HELSON 7c079d9c33
[hotfix] fixed bugs in ShardStrategy and PcieProfiler (#394) 2022-03-11 18:12:46 +08:00
Frank Lee 1e4bf85cdb fixed bug in activation checkpointing test (#387) 2022-03-11 15:50:28 +08:00
Jiarui Fang 3af13a2c3e [zero] polish ShardedOptimV2 unittest (#385)
* place params on cpu after zero init context

* polish code

* bucketized cpu gpu tensor transfer

* find a bug in sharded optim unittest

* add offload unittest for ShardedOptimV2.

* polish code and make it more robust
2022-03-11 15:50:28 +08:00
Jiang Zhuo 5a4a3b77d9 fix format (#376) 2022-03-11 15:50:28 +08:00
LuGY de46450461 Added activation offload (#331)
* Added activation offload

* Fixed the import bug, used the pytest
2022-03-11 15:50:28 +08:00
Jiarui Fang 272ebfb57d [bug] shard param during initializing the ShardedModelV2 (#381) 2022-03-11 15:50:28 +08:00
HELSON 8c18eb0998 [profiler] Fixed bugs in CommProfiler and PcieProfiler (#377) 2022-03-11 15:50:28 +08:00
Jiarui Fang b5f43acee3 [zero] find miss code (#378) 2022-03-11 15:50:28 +08:00
Jiarui Fang 6b6002962a [zero] zero init context collect numel of model (#375) 2022-03-11 15:50:28 +08:00
HELSON 1ed7c24c02 Added PCIE profiler to detect data transmission (#373) 2022-03-11 15:50:28 +08:00
jiaruifang d9217e1960 Revert "[zero] bucketized tensor cpu gpu copy (#368)"
This reverts commit bef05489b6.
2022-03-11 15:50:28 +08:00
RichardoLuo 8539898ec6 flake8 style change (#363) 2022-03-11 15:50:28 +08:00
Kai Wang (Victor Kai) 53bb3bcc0a fix format (#362) 2022-03-11 15:50:28 +08:00
ziyu huang a77d73f22b fix format parallel_context.py (#359)
Co-authored-by: huangziyu <202476410arsmart@gmail.com>
2022-03-11 15:50:28 +08:00
Zangwei c695369af0 fix format constants.py (#358) 2022-03-11 15:50:28 +08:00
Yuer867 4a0f8c2c50 fix format parallel_2p5d (#357) 2022-03-11 15:50:28 +08:00
Liang Bowen 7eb87f516d flake8 style (#352) 2022-03-11 15:50:28 +08:00
Xu Kai 54ee8d1254 Fix/format colossalai/engine/paramhooks/ (#350) 2022-03-11 15:50:28 +08:00
Maruyama_Aya e83970e3dc fix format ColossalAI\colossalai\context\process_group_initializer 2022-03-11 15:50:28 +08:00
yuxuan-lou 3b88eb2259 Flake8 code restyle 2022-03-11 15:50:28 +08:00
xuqifan897 148207048e Qifan formatted file ColossalAI\colossalai\nn\layer\parallel_1d\layers.py (#342) 2022-03-11 15:50:28 +08:00
Cautiousss 3a51d909af fix format (#332)
Co-authored-by: 何晓昕 <cautious@r-205-106-25-172.comp.nus.edu.sg>
2022-03-11 15:50:28 +08:00
DouJS cbb6436ff0 fix format for dir-[parallel_3d] (#333) 2022-03-11 15:50:28 +08:00
ExtremeViscent eaac03ae1d [format] format fixed for kernel\cuda_native codes (#335) 2022-03-11 15:50:28 +08:00
Jiarui Fang 00670c870e [zero] bucketized tensor cpu gpu copy (#368) 2022-03-11 15:50:28 +08:00
Jiarui Fang 44e4891f57 [zero] able to place params on cpu after zero init context (#365)
* place params on cpu after zero init context

* polish code
2022-03-11 15:50:28 +08:00
ver217 253e54d98a fix grad shape 2022-03-11 15:50:28 +08:00
Jiarui Fang ea2872073f [zero] global model data memory tracer (#360) 2022-03-11 15:50:28 +08:00
Jiarui Fang cb34cd384d [test] polish zero-related unittest (#351) 2022-03-11 15:50:28 +08:00
HELSON 534e0bb118 Fixed import bug for no-tensorboard environment (#354) 2022-03-11 15:50:28 +08:00
HELSON c57e089824 [profile] added example for ProfilerContext (#349) 2022-03-11 15:50:28 +08:00
Jiarui Fang 10e2826426 move async memory to an individual directory (#345) 2022-03-11 15:50:28 +08:00
HELSON 425bb0df3f Added Profiler Context to manage all profilers (#340) 2022-03-11 15:50:28 +08:00
ver217 d0ae0f2215 [zero] update sharded optim v2 (#334) 2022-03-11 15:50:28 +08:00
jiaruifang 5663616921 polish code 2022-03-11 15:50:28 +08:00
jiaruifang 7977422aeb add bert for unittest; sharded model is not able to pass the bert case 2022-03-11 15:50:28 +08:00
Frank Lee 3d5d64bd10 refactored grad scaler (#338) 2022-03-11 15:50:28 +08:00
Frank Lee 6a3188167c set criterion as optional in colossalai initialize (#336) 2022-03-11 15:50:28 +08:00
Jie Zhu 3213554cc2 [profiler] add adaptive sampling to memory profiler (#330)
* fix merge conflict

modify unit test

remove unnecessary log info

reformat file

* remove unused module

* remove unnecessary sync function

* change doc string style from Google to Sphinx
2022-03-11 15:50:28 +08:00
ver217 1388671699 [zero] Update sharded model v2 using sharded param v2 (#323) 2022-03-11 15:50:28 +08:00
Jiarui Fang 11bddb6e55 [zero] update zero context init with the updated test utils (#327) 2022-03-11 15:50:28 +08:00
HELSON 4f26fabe4f fixed strings in profiler outputs (#325) 2022-03-11 15:50:28 +08:00
Jiarui Fang de0468c7a8 [zero] zero init context (#321)
* add zero init context

* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2

* polish code
2022-03-11 15:50:28 +08:00
1SAA 73bff11288 Added profiler communication operations
Fixed bug for learning rate scheduler
2022-03-11 15:50:28 +08:00
LuGY a3269de5c9 [zero] cpu adam kernel (#288)
* Added CPU Adam

* finished the cpu adam

* updated the license

* delete useless parameters, removed resnet

* modified the method of cpu adam unittest

* deleted some useless codes

* removed useless codes

Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-03-11 15:50:28 +08:00
Jiarui Fang 90d3aef62c [zero] yet an improved sharded param (#311) 2022-03-11 15:50:28 +08:00
Jiarui Fang c9e7d9582d [zero] polish shard strategy (#310)
* init shard param from shape tuple

* add more unittests for shard param

* add set_payload method for ShardedParam

* [zero] add sharded tensor class

* polish code

* add shard strategy

* move shard and gather logic to shard strategy from shard tensor.

* polish code
2022-03-11 15:50:28 +08:00
ver217 3092317b80 polish code 2022-03-11 15:50:28 +08:00
ver217 36f9a74ab2 fix sharded param hook and unit test 2022-03-11 15:50:28 +08:00
ver217 001ca624dd impl shard optim v2 and add unit test 2022-03-11 15:50:28 +08:00
Jiarui Fang 74f77e314b [zero] a shard strategy in granularity of tensor (#307) 2022-03-11 15:50:28 +08:00
Jiarui Fang 80364c7686 [zero] sharded tensor (#305)
* init shard param from shape tuple

* add more unittests for shard param

* add set_payload method for ShardedParam

* [zero] add sharded tensor class

* polish code
2022-03-11 15:50:28 +08:00
Jie Zhu d344689274 [profiler] primary memory tracer 2022-03-11 15:50:28 +08:00
ver217 b105371ace rename shared adam to sharded optim v2 2022-03-11 15:50:28 +08:00
ver217 70814dc22f fix master params dtype 2022-03-11 15:50:28 +08:00
ver217 795210dd99 add fp32 master params in sharded adam 2022-03-11 15:50:28 +08:00
ver217 a109225bc2 add sharded adam 2022-03-11 15:50:28 +08:00
Jiarui Fang e17e92c54d Polish sharded parameter (#297)
* init shard param from shape tuple

* add more unittests for shard param

* add more unittests to sharded param
2022-03-11 15:50:28 +08:00
ver217 7aef75ca42 [zero] add sharded grad and refactor grad hooks for ShardedModel (#287) 2022-03-11 15:50:28 +08:00
Frank Lee 9afb5c8b2d fixed typo in ShardParam (#294) 2022-03-11 15:50:28 +08:00
Frank Lee e17e54e32a added buffer sync to naive amp model wrapper (#291) 2022-03-11 15:50:28 +08:00
Jiarui Fang 8d653af408 add a common util for hooks registered on parameter. (#292) 2022-03-11 15:50:28 +08:00
Jie Zhu f867365aba bug fix: pass hook_list to engine (#273)
* bug fix: pass hook_list to engine

* change parameter name
2022-03-11 15:50:28 +08:00
Jiarui Fang 5a560a060a Feature/zero (#279)
* add zero1 (#209)

* add zero1

* add test zero1

* update zero stage 1 develop (#212)

* Implement naive zero3 (#240)

* naive zero3 works well

* add zero3 param manager

* add TODOs in comments

* add gather full param ctx

* fix sub module streams

* add offload

* fix bugs of hook and add unit tests

* fix bugs of hook and add unit tests (#252)

* add gather full param ctx

* fix sub module streams

* add offload

* fix bugs of hook and add unit tests

* polish code and add state dict hook

* fix bug

* update unit test

* refactor reconstructed zero code

* clip_grad support zero3 and add unit test

* add unit test for Zero3ParameterManager

* [WIP] initialize the shard param class

* [WIP] Yet another sharded model implementation (#274)

* [WIP] initialize the shard param class

* [WIP] Yet another implementation of shardModel. Using a better hook method.

* torch.concat -> torch.cat

* fix test_zero_level_1.py::test_zero_level_1 unittest

* remove deepspeed implementation and refactor for the reconstructed zero module

* polish zero dp unittests

Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2022-03-11 15:50:28 +08:00
1SAA 82023779bb Added TPExpert for special situation 2022-03-11 15:50:28 +08:00
HELSON 36b8477228 Fixed parameter initialization in FFNExpert (#251) 2022-03-11 15:50:28 +08:00
アマデウス e13293bb4c fixed CI dataset directory; fixed import error of 2.5d accuracy (#255) 2022-03-11 15:50:28 +08:00
1SAA 219df6e685 Optimized MoE layer and fixed some bugs;
Decreased moe tests;

Added FFNExperts and ViTMoE model
2022-03-11 15:50:28 +08:00
zbian 3dba070580 fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial 2022-03-11 15:50:28 +08:00
Frank Lee f5ca88ec97 fixed apex import (#227) 2022-02-15 11:31:13 +08:00
Frank Lee 3a1a9820b0 fixed mkdir conflict and align yapf config with flake (#220) 2022-02-15 11:31:13 +08:00
アマデウス 9ee197d0e9 moved env variables to global variables; (#215)
added branch context;
added vocab parallel layers;
moved split_batch from load_batch to tensor parallel embedding layers;
updated gpt model;
updated unit test cases;
fixed few collective communicator bugs
2022-02-15 11:31:13 +08:00
Frank Lee 812357d63c
fixed utils docstring and add example to readme (#200) 2022-02-03 11:37:17 +08:00
Frank Lee 765db512b5
fixed ddp bug on torch 1.8 (#194) 2022-01-28 15:14:04 +08:00
Jiarui Fang 569357fea0
add pytorch hooks (#179)
* add pytorch hooks
fix #175

* remove licenses in src code

* add gpu memory tracer

* replacing print with logger in ophooks.
2022-01-25 22:20:54 +08:00
ver217 708404d5f8
fix pipeline forward return tensors (#176) 2022-01-21 15:46:02 +08:00
HELSON 0f8c7f9804
Fixed docstring in colossalai (#171) 2022-01-21 10:44:30 +08:00
Frank Lee e2089c5c15
adapted for sequence parallel (#163) 2022-01-20 13:44:51 +08:00
puck_WCR 9473a1b9c8
AMP docstring/markdown update (#160) 2022-01-18 18:33:36 +08:00
Frank Lee f3802d6b06
fixed jit default setting (#154) 2022-01-18 13:37:20 +08:00
ver217 7bf1e98b97
pipeline last stage supports multi output (#151) 2022-01-17 15:57:47 +08:00
ver217 f68eddfb3d
refactor kernel (#142) 2022-01-13 16:47:17 +08:00
BoxiangW 4a3d3446b0
Update layer integration documentation (#108)
Update the documentation of layer integration

Update _log_hook.py

Update _operation.py
2022-01-10 18:05:58 +08:00
ver217 9ef05ed1fc
try import deepspeed when using zero (#130) 2022-01-07 17:24:57 +08:00
HELSON dceae85195
Added MoE parallel (#127) 2022-01-07 15:08:36 +08:00
ver217 293fb40c42
add scatter/gather optim for pipeline (#123) 2022-01-07 13:22:22 +08:00
Jiarui Fang 2c0c85d3d3
fix a bug in timer (#114) 2022-01-05 16:07:06 +08:00
ver217 7904baf6e1
fix layers/schedule for hybrid parallelization (#111) (#112) 2022-01-04 20:52:31 +08:00
ver217 a951bc6089
update default logger (#100) (#101) 2022-01-04 20:03:26 +08:00
ver217 96780e6ee4
Optimize pipeline schedule (#94)
* add pipeline shared module wrapper and update load batch

* added model parallel process group for amp and clip grad (#86)

* added model parallel process group for amp and clip grad

* update amp and clip with model parallel process group

* remove pipeline_prev/next group (#88)

* micro batch offload

* optimize pipeline gpu memory usage

* pipeline can receive tensor shape (#93)

* optimize pipeline gpu memory usage

* fix grad accumulation step counter

* rename classes and functions

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2021-12-30 15:56:46 +08:00
アマデウス 01a80cd86d
Hotfix/Colossalai layers (#92)
* optimized 1d layer apis; reorganized nn.layer modules; fixed tests

* fixed 2.5d runtime issue

* reworked split batch, now called in trainer.schedule.load_batch

Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-29 23:32:10 +08:00
アマデウス 0fedef4f3c
Layer integration (#83)
* integrated parallel layers for ease of building models

* integrated 2.5d layers

* cleaned codes and unit tests

* added log metric by step hook; updated imagenet benchmark; fixed some bugs

* reworked initialization; cleaned codes

Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-27 15:04:32 +08:00
shenggan 5c3843dc98
add colossalai kernel module (#55) 2021-12-21 12:19:52 +08:00
ver217 8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer (#80) 2021-12-20 23:26:19 +08:00
Frank Lee 91c327cb44
fixed zero level 3 dtype bug (#76) 2021-12-20 17:00:53 +08:00
HELSON 632e622de8
overlap computation and communication in 2d operations (#75) 2021-12-16 16:05:15 +08:00
Frank Lee cd9c28e055
added CI for unit testing (#69) 2021-12-16 10:32:08 +08:00
Frank Lee 35813ed3c4
update examples and sphinx docs for the new api (#63) 2021-12-13 22:07:01 +08:00
ver217 7d3711058f
fix zero3 fp16 and add zero3 model context (#62) 2021-12-10 17:48:50 +08:00
Frank Lee 9a0466534c
update markdown docs (english) (#60) 2021-12-10 14:37:33 +08:00
Frank Lee da01c234e1
Develop/experiments (#59)
* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

* Split conv2d, class token, positional embedding in 2d, Fix random number in ddp
Fix convergence in cifar10, Imagenet1000

* Integrate 1d tensor parallel in Colossal-AI (#39)

* fixed 1D and 2D convergence (#38)

* optimized 2D operations

* fixed 1D ViT convergence problem

* Feature/ddp (#49)

* remove redundancy func in setup (#19) (#20)

* use env to control the language of doc (#24) (#25)

* Support TP-compatible Torch AMP and Update trainer API (#27)

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)

* add explanation for ViT example (#35) (#36)

* support torch ddp

* fix loss accumulation

* add log for ddp

* change seed

* modify timing hook

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* Feature/pipeline (#40)

* remove redundancy func in setup (#19) (#20)

* use env to control the language of doc (#24) (#25)

* Support TP-compatible Torch AMP and Update trainer API (#27)

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)

* add explanation for ViT example (#35) (#36)

* optimize communication of pipeline parallel

* fix grad clip for pipeline

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* optimized 3d layer to fix slow computation; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers (#51)

* Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset

* update api for better usability (#58)

update api for better usability

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-09 15:08:29 +08:00
ver217 dbe62c67b8
add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29) 2021-11-18 23:45:09 +08:00
Frank Lee 3defa32aee
Support TP-compatible Torch AMP and Update trainer API (#27)
* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
2021-11-18 19:45:06 +08:00
ver217 3c7604ba30 update documentation 2021-10-29 09:29:20 +08:00
zbian 404ecbdcc6 Migrated project 2021-10-28 18:21:23 +02:00