Commit Graph

102 Commits (a9d1cadc49bd0a37208d8d7f321f16fd37c41471)

Author SHA1 Message Date
Frank Lee b8804aa60c
[doc] added readme for documentation (#2935) 2023-02-28 14:04:52 +08:00
Frank Lee 9e3b8b7aff
[doc] removed read-the-docs (#2932) 2023-02-28 11:28:24 +08:00
Frank Lee 77b88a3849
[workflow] added auto doc test on PR (#2929)
* [workflow] added auto doc test on PR

* [workflow] added doc test workflow

* polish code

* polish code

* polish code

* polish code

* polish code

* polish code

* polish code
2023-02-28 11:10:38 +08:00
binmakeswell 0afb55fc5b
[doc] add os scope, update tutorial install and tips (#2914) 2023-02-27 14:59:27 +08:00
YuliangLiu0306 cf6409dd40
Hotfix/auto parallel zh doc (#2820)
* [hotfix] fix autoparallel zh docs

* polish

* polish
2023-02-19 15:57:14 +08:00
YuliangLiu0306 2059fdd6b0
[hotfix] add copyright for solver and device mesh (#2803)
* [hotfix] add copyright for solver and device mesh

* add readme

* add alpa license

* polish
2023-02-18 21:14:38 +08:00
Frank Lee e376954305
[doc] add opt service doc (#2747) 2023-02-16 15:45:26 +08:00
Frank Lee 5479fdd5b8
[doc] updated documentation version list (#2730) 2023-02-15 17:39:50 +08:00
Frank Lee 2045d45ab7
[doc] updated documentation version list (#2715) 2023-02-15 11:24:18 +08:00
Frank Lee 0966008839
[doc] fixed the sidebar item key (#2672) 2023-02-13 10:45:16 +08:00
Frank Lee 6d60634433
[doc] added documentation sidebar translation (#2670) 2023-02-13 10:10:12 +08:00
Frank Lee 81ea66d25d
[release] v0.2.3 (#2669)
* [release] v0.2.3

* polish code
2023-02-13 09:51:25 +08:00
YuliangLiu0306 8de85051b3
[Docs] layout conversion management (#2665) 2023-02-10 18:38:32 +08:00
Frank Lee b673e5f78b
[release] v0.2.2 (#2661) 2023-02-10 11:01:24 +08:00
Frank Lee cd4f02bed8
[doc] fixed compatibility with docusaurus (#2657) 2023-02-09 17:06:29 +08:00
Frank Lee a4ae43f071
[doc] added docusaurus-based version control (#2656) 2023-02-09 16:38:49 +08:00
Frank Lee 85b2303b55
[doc] migrate the markdown files (#2652) 2023-02-09 14:21:38 +08:00
Frank Lee d3480396f8
[doc] updated the sphinx theme (#2635) 2023-02-08 13:48:08 +08:00
binmakeswell a01278e810
Update requirements.txt 2022-11-18 18:57:18 +08:00
Jiarui Fang cc0ed7cf33
[Gemini] ZeROHookV2 -> GeminiZeROHook (#1972) 2022-11-17 14:43:49 +08:00
Ziyue Jiang 63f250bbd4
fix file name (#1759)
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-10-25 16:48:48 +08:00
ver217 d068af81a3
[doc] update rst and docstring (#1351)
* update rst

* add zero docstr

* fix docstr

* remove fx.tracer.meta_patch

* fix docstr

* fix docstr

* update fx rst

* fix fx docstr

* remove useless rst
2022-07-21 15:54:53 +08:00
Jiarui Fang 4165eabb1e
[hotfix] remove potential circular import (#1307)
* make it faster

* [hotfix] remove circular import
2022-07-14 13:44:26 +08:00
Jiarui Fang 4d9332b4c5
[refactor] moving memtracer to gemini (#801) 2022-04-19 10:13:08 +08:00
ver217 f69507dd22
update rst (#615) 2022-04-01 15:46:38 +08:00
Liang Bowen 2c45efc398
html refactor (#555) 2022-03-31 11:36:56 +08:00
LuGY c44d797072
[docs] updated docs of hybrid adam and cpu adam (#552) 2022-03-30 18:14:59 +08:00
ver217 ffca99d187
[doc] update apidoc (#530) 2022-03-25 18:29:43 +08:00
ver217 9caa8b6481
docs get correct release version (#489) 2022-03-22 14:24:41 +08:00
ver217 7e30068a22
[doc] update rst (#470)
* update rst

* remove empty rst
2022-03-21 10:52:45 +08:00
binmakeswell ce7b2c9ae3
update README and images path (#384) 2022-03-11 15:50:28 +08:00
binmakeswell 08eccfe681
add community group and update issue template (#271) 2022-03-11 15:50:28 +08:00
Sze-qq 3312d716a0
update experimental visualization (#253) 2022-03-11 15:50:28 +08:00
binmakeswell 753035edd3
add Chinese README 2022-03-11 15:50:28 +08:00
WANG-CR 6fb550acdb
update logo 2022-01-21 12:31:07 +08:00
ver217 1949d3a889
update doc requirements and rtd conf (#165) 2022-01-19 19:46:43 +08:00
Frank Lee be85a0f366
removed tutorial markdown and refreshed rst files for consistency 2022-01-19 17:01:37 +08:00
binmakeswell 17ce8569a8
add logo at homepage, add forum in issue template (#161) 2022-01-19 14:29:31 +08:00
puck_WCR 9473a1b9c8
AMP docstring/markdown update (#160) 2022-01-18 18:33:36 +08:00
ver217 96780e6ee4
Optimize pipeline schedule (#94)
* add pipeline shared module wrapper and update load batch

* added model parallel process group for amp and clip grad (#86)

* added model parallel process group for amp and clip grad

* update amp and clip with model parallel process group

* remove pipeline_prev/next group (#88)

* micro batch offload

* optimize pipeline gpu memory usage

* pipeline can receive tensor shape (#93)

* optimize pipeline gpu memory usage

* fix grad accumulation step counter

* rename classes and functions

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2021-12-30 15:56:46 +08:00
ver217 8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer (#80) 2021-12-20 23:26:19 +08:00
Frank Lee 35813ed3c4
update examples and sphinx docs for the new api (#63) 2021-12-13 22:07:01 +08:00
ver217 7d3711058f
fix zero3 fp16 and add zero3 model context (#62) 2021-12-10 17:48:50 +08:00
Frank Lee 9a0466534c
update markdown docs (english) (#60) 2021-12-10 14:37:33 +08:00
Frank Lee da01c234e1
Develop/experiments (#59)
* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

* Split conv2d, class token, positional embedding in 2d, Fix random number in ddp
Fix convergence in cifar10, Imagenet1000

* Integrate 1d tensor parallel in Colossal-AI (#39)

* fixed 1D and 2D convergence (#38)

* optimized 2D operations

* fixed 1D ViT convergence problem

* Feature/ddp (#49)

* remove redundancy func in setup (#19) (#20)

* use env to control the language of doc (#24) (#25)

* Support TP-compatible Torch AMP and Update trainer API (#27)

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)

* add explanation for ViT example (#35) (#36)

* support torch ddp

* fix loss accumulation

* add log for ddp

* change seed

* modify timing hook

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* Feature/pipeline (#40)

* remove redundancy func in setup (#19) (#20)

* use env to control the language of doc (#24) (#25)

* Support TP-compatible Torch AMP and Update trainer API (#27)

* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>

* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)

* add explanation for ViT example (#35) (#36)

* optimize communication of pipeline parallel

* fix grad clip for pipeline

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>

* optimized 3d layer to fix slow computation; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers (#51)

* Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset

* update api for better usability (#58)

update api for better usability

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2021-12-09 15:08:29 +08:00
Frank Lee 3defa32aee
Support TP-compatible Torch AMP and Update trainer API (#27)
* Add gradient accumulation, fix lr scheduler

* fix FP16 optimizer and adapted torch amp with tensor parallel (#18)

* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes

* fixed trainer

* Revert "fixed trainer"

This reverts commit 2e0b0b7699.

* improved consistency between trainer, engine and schedule (#23)

Co-authored-by: 1SAA <c2h214748@gmail.com>

Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
2021-11-18 19:45:06 +08:00
ver217 2b05de4c64
use env to control the language of doc (#24) (#25) 2021-11-15 16:53:56 +08:00
binmakeswell 05e7069a5b
fixed some typos in the documents, added blog link and paper author information in README 2021-11-03 17:18:43 +08:00
Fan Cui 18ba66e012
added Chinese documents and fixed some typos in English documents 2021-11-02 23:28:44 +08:00
ver217 50982c0b7d
reordered parallelization methods in the parallelization documentation 2021-11-01 14:31:55 +08:00
ver217 3c7604ba30
update documentation 2021-10-29 09:29:20 +08:00
zbian 404ecbdcc6
Migrated project 2021-10-28 18:21:23 +02:00