Commit Graph

27 Commits (f660152c73511ea5d7d63be2b382a34cdc3ae56f)

Author SHA1 Message Date
YuliangLiu0306 e27645376d
[hotfix] different overflow statuses lead to communication getting stuck. (#1175)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [hotfix] fix some bugs caused by the refactored schedule.

* [hotfix] different overflow statuses lead to communication getting stuck.
2022-06-27 09:53:57 +08:00
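
The hang named in #1175 is a classic mixed-precision pitfall: if ranks disagree on whether the current step overflowed, some skip the optimizer step (and its collectives) while others wait on them. Below is a minimal sketch of the usual remedy, not the repository's actual fix; `found_overflow` is a hypothetical helper, assuming a PyTorch process group.

```python
import torch
import torch.distributed as dist

def found_overflow(local_overflow: bool) -> bool:
    # Encode the local flag as a tensor so it can take part in a collective.
    # (Move `flag` to the right device first when using the NCCL backend.)
    flag = torch.tensor([1.0 if local_overflow else 0.0])
    if dist.is_available() and dist.is_initialized():
        # MAX reduction: if any rank saw an overflow, every rank sees 1.0,
        # so all ranks skip the optimizer step together instead of hanging.
        dist.all_reduce(flag, op=dist.ReduceOp.MAX)
    return flag.item() > 0.0
```
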
Frank Lee 72bd7c696b
[amp] included dict for type casting of model output (#1102) 2022-06-13 14:18:04 +08:00
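
#1102 concerns converting fp16 model outputs back to fp32 when the output is a dict rather than a bare tensor. A hedged sketch of dict-aware casting follows; it is illustrative only, not the amp module's actual code.

```python
import torch

def convert_output_to_fp32(output):
    """Recursively cast floating-point tensors in a model output to fp32."""
    if isinstance(output, torch.Tensor):
        return output.float() if output.is_floating_point() else output
    if isinstance(output, dict):
        return {k: convert_output_to_fp32(v) for k, v in output.items()}
    if isinstance(output, (list, tuple)):
        return type(output)(convert_output_to_fp32(v) for v in output)
    return output
```
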
Frank Lee 9fdebadd69
[doc] improved docstring in the amp module (#857) 2022-04-25 13:42:17 +08:00
HELSON 4c4388c46e
[hotfix] fix memory leak in zero (#781) 2022-04-18 13:57:03 +08:00
Frank Lee a4e91bc87f
[bug] fixed grad scaler compatibility with torch 1.8 (#735) 2022-04-12 16:04:21 +08:00
Jiarui Fang 4d90a7b513
[refactor] zero directory (#724) 2022-04-11 23:13:02 +08:00
Kai Wang (Victor Kai) b0f708dfc1 fix format (#570) 2022-04-06 11:40:59 +08:00
ver217 c5b488edf8
polish amp docstring (#616) 2022-04-01 16:09:39 +08:00
Liang Bowen 2c45efc398
html refactor (#555) 2022-03-31 11:36:56 +08:00
Liang Bowen ec5086c49c Refactored docstring to google style 2022-03-29 17:17:47 +08:00
Jiarui Fang 496cbb0760
[hotfix] fix initialize bug with zero (#442) 2022-03-17 13:16:22 +08:00
Frank Lee 14a7094243
fixed fp16 optimizer none grad bug (#432) 2022-03-16 14:35:46 +08:00
Frank Lee e79ea44247
[fp16] refactored fp16 optimizer (#392) 2022-03-15 10:05:38 +08:00
Kai Wang (Victor Kai) 53bb3bcc0a fix format (#362) 2022-03-11 15:50:28 +08:00
Frank Lee 3d5d64bd10 refactored grad scaler (#338) 2022-03-11 15:50:28 +08:00
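
For context on the component touched by the refactor in #338 and the torch 1.8 compatibility fix in #735: a dynamic grad scaler grows the loss scale after a run of overflow-free steps and backs it off on overflow. A minimal sketch with hypothetical names and defaults, not the refactored class itself:

```python
class DynamicGradScaler:
    """Grow the loss scale after a clean run of steps; back off on overflow."""

    def __init__(self, init_scale=2.0**16, growth_factor=2.0,
                 backoff_factor=0.5, growth_interval=1000):
        self.scale = init_scale
        self.growth_factor = growth_factor
        self.backoff_factor = backoff_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, overflow: bool) -> None:
        if overflow:
            self.scale *= self.backoff_factor   # shrink the scale, start over
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps % self.growth_interval == 0:
                self.scale *= self.growth_factor  # grow after a clean run
```
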
Frank Lee 6a3188167c set criterion as optional in colossalai initialize (#336) 2022-03-11 15:50:28 +08:00
Frank Lee e17e54e32a added buffer sync to naive amp model wrapper (#291) 2022-03-11 15:50:28 +08:00
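
"Buffer sync" in #291 refers to non-parameter state such as BatchNorm running statistics, which gradient all-reduce never touches and which can therefore drift across data-parallel ranks. A sketch of the usual broadcast-from-rank-0 approach (assumed, not the wrapper's actual code):

```python
import torch.distributed as dist
import torch.nn as nn

def sync_buffers(module: nn.Module, src_rank: int = 0) -> None:
    # Parameters stay in sync via gradient all-reduce; buffers do not,
    # so broadcast them from one rank to keep every replica identical.
    for buf in module.buffers():
        dist.broadcast(buf, src=src_rank)
```
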
Frank Lee f5ca88ec97 fixed apex import (#227) 2022-02-15 11:31:13 +08:00
アマデウス 9ee197d0e9 moved env variables to global variables; (#215)
added branch context;
added vocab parallel layers;
moved split_batch from load_batch to tensor parallel embedding layers;
updated gpt model;
updated unit test cases;
fixed few collective communicator bugs
2022-02-15 11:31:13 +08:00
HELSON 0f8c7f9804
Fixed docstring in colossalai (#171) 2022-01-21 10:44:30 +08:00
Frank Lee e2089c5c15
adapted for sequence parallel (#163) 2022-01-20 13:44:51 +08:00
puck_WCR 9473a1b9c8
AMP docstring/markdown update (#160) 2022-01-18 18:33:36 +08:00
ver217 96780e6ee4
Optimize pipeline schedule (#94)
* add pipeline shared module wrapper and update load batch

* added model parallel process group for amp and clip grad (#86)

* added model parallel process group for amp and clip grad

* update amp and clip with model parallel process group

* remove pipeline_prev/next group (#88)

* micro batch offload

* optimize pipeline gpu memory usage

* pipeline can receive tensor shape (#93)

* optimize pipeline gpu memory usage

* fix grad accumulation step counter

* rename classes and functions

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2021-12-30 15:56:46 +08:00
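
Among the pipeline changes above, "pipeline can receive tensor shape" (#93) removes the need for a statically configured activation shape: the sender transmits shape metadata first so the receiver can allocate a matching buffer. A hedged point-to-point sketch, with illustrative function names rather than the schedule's real API:

```python
import torch
import torch.distributed as dist

def send_with_shape(t: torch.Tensor, dst: int) -> None:
    shape = torch.tensor(t.shape, dtype=torch.long)
    dist.send(torch.tensor([shape.numel()], dtype=torch.long), dst)  # ndim first
    dist.send(shape, dst)                                            # then the shape
    dist.send(t.contiguous(), dst)                                   # then the payload

def recv_with_shape(src: int, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    ndim = torch.empty(1, dtype=torch.long)
    dist.recv(ndim, src=src)
    shape = torch.empty(int(ndim.item()), dtype=torch.long)
    dist.recv(shape, src=src)
    out = torch.empty(*shape.tolist(), dtype=dtype)
    dist.recv(out, src=src)
    return out
```
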
ver217 8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer (#80) 2021-12-20 23:26:19 +08:00
Frank Lee 91c327cb44
fixed zero level 3 dtype bug (#76) 2021-12-20 17:00:53 +08:00
Frank Lee 35813ed3c4
update examples and sphinx docs for the new api (#63) 2021-12-13 22:07:01 +08:00
Frank Lee da01c234e1
Develop/experiments (#59)
* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

* Split conv2d, class token, positional embedding in 2d, Fix random number in ddp
Fix convergence in cifar10, Imagenet1000

* Integrate 1d tensor parallel in Colossal-AI (#39)

* fixed 1D and 2D convergence (#38)

* optimized 2D operations

* fixed 1D ViT convergence problem

* Feature/ddp (#49)

* remove redundancy func in setup (#19) (#20)

* use env to control the language of doc (#24) (#25)

* Support TP-compatible Torch AMP and Update trainer API (#27)

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)

* add explanation for ViT example (#35) (#36)

* support torch ddp

* fix loss accumulation

* add log for ddp

* change seed

* modify timing hook

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* Feature/pipeline (#40)

* remove redundancy func in setup (#19) (#20)

* use env to control the language of doc (#24) (#25)

* Support TP-compatible Torch AMP and Update trainer API (#27)

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)

* add explanation for ViT example (#35) (#36)

* optimize communication of pipeline parallel

* fix grad clip for pipeline

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* optimized 3d layer to fix slow computation; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers (#51)

* Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset

* update api for better usability (#58)

update api for better usability

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-09 15:08:29 +08:00
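
Finally, "Add gradient accumulation" in the merge above names a standard technique: scale each micro-batch loss by 1/accum_steps and only step the optimizer every accum_steps backward passes. A generic sketch under those assumptions, not the trainer's actual loop:

```python
def train_with_accumulation(model, optimizer, criterion, loader, accum_steps=4):
    model.train()
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(loader):
        # Scale the loss so the accumulated gradient matches a large batch.
        loss = criterion(model(inputs), targets) / accum_steps
        loss.backward()  # gradients accumulate in param.grad across calls
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```
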