binmakeswell
34e909256c
[release] grok-1 inference benchmark ( #5500 )
...
* [release] grok-1 inference benchmark
2024-03-25 14:42:51 +08:00
Wenhao Chen
bb0a668fee
[hotfix] set return_outputs=False in examples and polish code ( #5404 )
...
* fix: simplify merge_batch
* fix: use return_outputs=False to eliminate extra memory consumption
* feat: add return_outputs warning
* style: remove `return_outputs=False` as it is the default value
2024-03-25 12:31:09 +08:00
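The `return_outputs=False` fix above is about memory: collecting every batch's outputs keeps them all alive for the entire run. A minimal sketch of the pattern, with an illustrative loop and names that are assumptions, not the actual ColossalAI API:

```python
# Hypothetical sketch of why return_outputs=False saves memory in a
# training loop. The function and its signature are illustrative only.

def run_epoch(batches, forward, return_outputs=False):
    """Run one epoch; only retain per-batch outputs if explicitly requested."""
    outputs = [] if return_outputs else None
    total_loss = 0.0
    for batch in batches:
        loss, out = forward(batch)
        total_loss += loss
        if return_outputs:
            # Appending every output keeps all of them alive until the
            # epoch ends -- the extra memory consumption the fix avoids.
            outputs.append(out)
    return total_loss, outputs

# Toy forward: loss is half the input, "output" is ten times the input.
loss, outs = run_epoch([2.0, 4.0], lambda b: (b / 2, b * 10))
print(loss, outs)  # 3.0 None
loss, outs = run_epoch([2.0, 4.0], lambda b: (b / 2, b * 10), return_outputs=True)
print(loss, outs)  # 3.0 [20.0, 40.0]
```

Making `False` the default (as the last bullet notes) means callers pay the retention cost only when they opt in.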
binmakeswell
6df844b8c4
[release] grok-1 314b inference ( #5490 )
...
* [release] grok-1 inference
2024-03-22 15:48:12 +08:00
binmakeswell
d158fc0e64
[doc] update open-sora demo ( #5479 )
...
* [doc] update open-sora demo
2024-03-20 16:08:41 +08:00
binmakeswell
bd998ced03
[doc] release Open-Sora 1.0 with model weights ( #5468 )
...
* [doc] release Open-Sora 1.0 with model weights
2024-03-18 18:31:18 +08:00
digger yu
70cce5cbed
[doc] update some translations with README-zh-Hans.md ( #5382 )
2024-03-05 21:45:55 +08:00
Hongxin Liu
070df689e6
[devops] fix extension building ( #5427 )
2024-03-05 15:35:54 +08:00
binmakeswell
822241a99c
[doc] sora release ( #5425 )
...
* [doc] sora release
2024-03-05 12:08:58 +08:00
binmakeswell
a1c6cdb189
[doc] fix blog link
2024-02-29 15:01:43 +08:00
Frank Lee
705a62a565
[doc] updated installation command ( #5389 )
2024-02-19 16:54:03 +08:00
yixiaoer
69e3ad01ed
[doc] Fix typo ( #5361 )
2024-02-19 16:53:28 +08:00
Frank Lee
8823cc4831
Merge pull request #5310 from hpcaitech/feature/npu
...
Feature/npu
2024-01-29 13:49:39 +08:00
digger yu
bce9499ed3
fix some typo ( #5307 )
2024-01-25 13:56:27 +08:00
ver217
148469348a
Merge branch 'main' into sync/npu
2024-01-18 12:05:21 +08:00
Hongxin Liu
d202cc28c0
[npu] change device to accelerator api ( #5239 )
...
* update accelerator
* fix timer
* fix amp
* update
* fix
* update bug
* add error raise
* fix autocast
* fix set device
* remove doc accelerator
* update doc
* update doc
* update doc
* use nullcontext
* update cpu
* update null context
* change time limit for example
* update
* [npu] polish accelerator code
---------
Co-authored-by: Xuanlei Zhao <xuanlei.zhao@gmail.com>
Co-authored-by: zxl <43881818+oahzxl@users.noreply.github.com>
2024-01-09 10:20:05 +08:00
binmakeswell
7bc6969ce6
[doc] SwiftInfer release ( #5236 )
...
* [doc] SwiftInfer release
2024-01-08 09:55:12 +08:00
binmakeswell
b9b32b15e6
[doc] add Colossal-LLaMA-2-13B ( #5234 )
...
* [doc] add Colossal-LLaMA-2-13B
2024-01-07 20:53:12 +08:00
flybird11111
681d9b12ef
[doc] update pytorch version in documents. ( #5177 )
...
* fix
* test ci
* fix ci
* update pytorch version in documents
2023-12-15 18:16:48 +08:00
binmakeswell
177c79f2d1
[doc] add moe news ( #5128 )
...
* [doc] add moe news
2023-11-28 17:44:06 +08:00
Wenhao Chen
7172459e74
[shardformer]: support GPT-J, Falcon, Mistral and add interleaved pipeline for BERT ( #5088 )
...
* [shardformer] implement policy for all GPT-J models and test
* [shardformer] support interleaved pipeline parallel for bert finetune
* [shardformer] shardformer support falcon (#4883 )
* [shardformer]: fix interleaved pipeline for bert model (#5048 )
* [hotfix]: disable seq parallel for gptj and falcon, and polish code (#5093 )
* Add Mistral support for Shardformer (#5103 )
* [shardformer] add tests to mistral (#5105 )
---------
Co-authored-by: Pengtai Xu <henryxu880@gmail.com>
Co-authored-by: ppt0011 <143150326+ppt0011@users.noreply.github.com>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: eric8607242 <e0928021388@gmail.com>
2023-11-28 16:54:42 +08:00
digger yu
d5661f0f25
[nfc] fix typo change directoty to directory ( #5111 )
2023-11-27 18:25:53 +08:00
digger yu
2bdf76f1f2
fix typo change lazy_iniy to lazy_init ( #5099 )
2023-11-24 19:15:59 +08:00
digger yu
0d482302a1
[nfc] fix typo and author name ( #5089 )
2023-11-22 10:39:01 +08:00
digger yu
fd3567e089
[nfc] fix typo in docs/ ( #4972 )
2023-11-21 22:06:20 +08:00
ppt0011
335cb105e2
[doc] add supported feature diagram for hybrid parallel plugin ( #4996 )
2023-10-31 19:56:42 +08:00
digger yu
11009103be
[nfc] fix some typo with colossalai/ docs/ etc. ( #4920 )
2023-10-18 15:44:04 +08:00
Baizhou Zhang
21ba89cab6
[gemini] support gradient accumulation ( #4869 )
...
* add test
* fix no_sync bug in low level zero plugin
* fix test
* add argument for grad accum
* add grad accum in backward hook for gemini
* finish implementation, rewrite tests
* fix test
* skip stuck model in low level zero test
* update doc
* optimize communication & fix gradient checkpoint
* modify doc
* cleaning codes
* update cpu adam fp16 case
2023-10-17 14:07:21 +08:00
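The gradient accumulation commits above add the standard pattern of stepping the optimizer once per several micro-batches. A generic sketch of that pattern, with hypothetical names and a toy SGD rule rather than the Gemini plugin's actual hooks:

```python
# Illustrative sketch of gradient accumulation. All names are
# hypothetical; this shows the generic pattern, not ColossalAI's code.

def sgd_step(w, grad, lr=0.1):
    """One plain SGD update on a scalar weight."""
    return w - lr * grad

def train(w, micro_batch_grads, accum_steps):
    """Step once every `accum_steps` micro-batches, averaging the
    accumulated gradients (same math as one larger batch)."""
    acc = 0.0
    for i, g in enumerate(micro_batch_grads, 1):
        acc += g                                 # accumulate, don't step yet
        if i % accum_steps == 0:
            w = sgd_step(w, acc / accum_steps)   # step with averaged gradient
            acc = 0.0                            # reset the accumulator
    return w

# Four micro-batch gradients, stepping every 2 -> two optimizer steps,
# with averaged gradients 0.3 and 0.7; final weight is approximately 0.9.
w = train(1.0, [0.2, 0.4, 0.6, 0.8], accum_steps=2)
print(w)
```

The backward-hook mention in the bullets suggests the accumulation happens inside the framework's backward pass rather than in user code, but the arithmetic is the same.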
flybird11111
6a21f96a87
[doc] update advanced tutorials, training gpt with hybrid parallelism ( #4866 )
...
* [doc] update advanced tutorials, training gpt with hybrid parallelism
* update vit tutorials
* update en/train_vit_with_hybrid_parallel.py
* fix
* resolve comments
* fix
2023-10-10 08:18:55 +00:00
Zhongkai Zhao
db40e086c8
[test] modify model supporting part of low_level_zero plugin (including corresponding docs)
2023-10-05 15:10:31 +08:00
binmakeswell
822051d888
[doc] update slack link ( #4823 )
2023-09-27 17:37:39 +08:00
Hongxin Liu
da15fdb9ca
[doc] add lazy init docs ( #4808 )
2023-09-27 10:24:04 +08:00
Baizhou Zhang
64a08b2dc3
[checkpointio] support unsharded checkpointIO for hybrid parallel ( #4774 )
...
* support unsharded saving/loading for model
* support optimizer unsharded saving
* update doc
* support unsharded loading for optimizer
* small fix
2023-09-26 10:58:03 +08:00
Baizhou Zhang
a2db75546d
[doc] polish shardformer doc ( #4779 )
...
* fix example format in docstring
* polish shardformer doc
2023-09-26 10:57:47 +08:00
binmakeswell
d512a4d38d
[doc] add llama2 domain-specific solution news ( #4789 )
...
* [doc] add llama2 domain-specific solution news
2023-09-25 10:44:15 +08:00
Baizhou Zhang
493a5efeab
[doc] add shardformer doc to sidebar ( #4768 )
2023-09-21 14:53:16 +08:00
Hongxin Liu
66f3926019
[doc] clean up outdated docs ( #4765 )
...
* [doc] clean up outdated docs
* [doc] fix linking
* [doc] fix linking
2023-09-21 11:36:20 +08:00
Pengtai Xu
4d7537ba25
[doc] put native colossalai plugins first in description section
2023-09-20 09:24:10 +08:00
Pengtai Xu
e10d9f087e
[doc] add model examples for each plugin
2023-09-19 18:01:23 +08:00
Pengtai Xu
a04337bfc3
[doc] put individual plugin explanation in front
2023-09-19 16:27:37 +08:00
Pengtai Xu
10513f203c
[doc] explain suitable use case for each plugin
2023-09-19 15:50:14 +08:00
Hongxin Liu
b5f9e37c70
[legacy] clean up legacy code ( #4743 )
...
* [legacy] remove outdated codes of pipeline (#4692 )
* [legacy] remove cli of benchmark and update optim (#4690 )
* [legacy] remove cli of benchmark and update optim
* [doc] fix cli doc test
* [legacy] fix engine clip grad norm
* [legacy] remove outdated colo tensor (#4694 )
* [legacy] remove outdated colo tensor
* [test] fix test import
* [legacy] move outdated zero to legacy (#4696 )
* [legacy] clean up utils (#4700 )
* [legacy] clean up utils
* [example] update examples
* [legacy] clean up amp
* [legacy] fix amp module
* [legacy] clean up gpc (#4742 )
* [legacy] clean up context
* [legacy] clean core, constants and global vars
* [legacy] refactor initialize
* [example] fix examples ci
* [legacy] fix tests
* [example] fix gpt example
* [example] fix examples ci
* [devops] fix ci installation
* [example] fix examples ci
2023-09-18 16:31:06 +08:00
Baizhou Zhang
d151dcab74
[doc] explanation of loading large pretrained models ( #4741 )
2023-09-15 21:04:07 +08:00
Baizhou Zhang
451c3465fb
[doc] polish shardformer doc ( #4735 )
...
* arrange position of chapters
* fix typos in seq parallel doc
2023-09-15 17:39:10 +08:00
Bin Jia
6a03c933a0
[shardformer] update seq parallel document ( #4730 )
...
* update doc of seq parallel
* fix typo
2023-09-15 16:09:32 +08:00
flybird11111
46162632e5
[shardformer] update pipeline parallel document ( #4725 )
...
* [shardformer] update pipeline parallel document
2023-09-15 14:32:04 +08:00
Baizhou Zhang
50e5602c2d
[doc] add shardformer support matrix/update tensor parallel documents ( #4728 )
...
* add compatibility matrix for shardformer doc
* update tp doc
2023-09-15 13:52:30 +08:00
github-actions[bot]
8c2dda7410
[format] applied code formatting on changed files in pull request 4726 ( #4727 )
...
Co-authored-by: github-actions <github-actions@github.com>
2023-09-15 13:17:32 +08:00
Baizhou Zhang
f911d5b09d
[doc] Add user document for Shardformer ( #4702 )
...
* create shardformer doc files
* add docstring for seq-parallel
* update ShardConfig docstring
* add links to llama example
* add outdated message
* finish introduction & supporting information
* finish 'how shardformer works'
* finish shardformer.md English doc
* fix doctest fail
* add Chinese document
2023-09-15 10:56:39 +08:00
binmakeswell
ce97790ed7
[doc] fix llama2 code link ( #4726 )
...
* [doc] fix llama2 code link
2023-09-14 23:19:25 +08:00
Baizhou Zhang
1d454733c4
[doc] Update booster user documents. ( #4669 )
...
* update booster_api.md
* update booster_checkpoint.md
* update booster_plugins.md
* move transformers importing inside function
* fix Dict typing
* fix autodoc bug
* small fix
2023-09-12 10:47:23 +08:00