flybird11111
0a25e16e46
[shardformer]gather llama logits (#5398)
* gather llama logits
* fix
9 months ago
QinLuo
bf34c6fef6
[fsdp] impl save/load shard model/optimizer (#5357)
9 months ago
Stephan Kölker
5d380a1a21
[hotfix] Fix wrong import in meta_registry (#5392)
9 months ago
Hongxin Liu
7303801854
[llama] fix training and inference scripts (#5384)
* [llama] refactor inference example to fit sft
* [llama] fix training script to fit gemini
* [llama] fix inference script
9 months ago
Frank Lee
efef43b53c
Merge pull request #5372 from hpcaitech/exp/mixtral
10 months ago
Frank Lee
4c03347fc7
Merge pull request #5377 from hpcaitech/example/llama-npu
[llama] support npu for Colossal-LLaMA-2
10 months ago
ver217
06db94fbc9
[moe] fix tests
10 months ago
Hongxin Liu
da39d21b71
[moe] support mixtral (#5309)
* [moe] add mixtral block for single expert
* [moe] mixtral block fwd support uneven ep
* [moe] mixtral block bwd support uneven ep
* [moe] add mixtral moe layer
* [moe] simplify replace
* [moe] support save sharded mixtral
* [moe] support load sharded mixtral
* [moe] support save sharded optim
* [moe] integrate moe manager into plug
* [moe] fix optimizer load
* [moe] fix mixtral layer
10 months ago
Hongxin Liu
c904d2ae99
[moe] update capacity computing (#5253)
* [moe] top2 allow uneven input
* [moe] update capacity computing
* [moe] remove debug info
* [moe] update capacity computing
* [moe] update capacity computing
10 months ago
Xuanlei Zhao
7d8e0338a4
[moe] init mixtral impl
10 months ago
Hongxin Liu
c53ddda88f
[lr-scheduler] fix load state dict and add test (#5369)
10 months ago
Hongxin Liu
eb4f2d90f9
[llama] polish training script and fix optim ckpt (#5368)
10 months ago
Hongxin Liu
6c0fa7b9a8
[llama] fix dataloader for hybrid parallel (#5358)
* [plugin] refactor prepare dataloader
* [plugin] update train script
10 months ago
Hongxin Liu
2dd01e3a14
[gemini] fix param op hook when output is tuple (#5355)
* [gemini] fix param op hook when output is tuple
* [gemini] fix param op hook
10 months ago
Wenhao Chen
1c790c0877
[fix] remove unnecessary dp_size assert (#5351)
* fix: remove unnecessary assert
* test: add more 3d plugin tests
* fix: add warning
10 months ago
Hongxin Liu
ffffc32dc7
[checkpointio] fix gemini and hybrid parallel optim checkpoint (#5347)
* [checkpointio] fix hybrid parallel optim checkpoint
* [extension] fix cuda extension
* [checkpointio] fix gemini optimizer checkpoint
* polish code
10 months ago
digger yu
71321a07cf
fix typo change dosen't to doesn't (#5308)
10 months ago
flybird11111
388179f966
[tests] fix t5 test. (#5322)
* [ci] fix shardformer tests. (#5255)
* fix ci
fix
* revert: revert p2p
* feat: add enable_metadata_cache option
* revert: enable t5 tests
---------
Co-authored-by: Wenhao Chen <cwher@outlook.com>
* fix t5 test
---------
Co-authored-by: Wenhao Chen <cwher@outlook.com>
10 months ago
FrankLeeeee
087d0cb1fc
[accelerator] fixed npu api
10 months ago
Frank Lee
8823cc4831
Merge pull request #5310 from hpcaitech/feature/npu
Feature/npu
10 months ago
Frank Lee
7cfed5f076
[feat] refactored extension module (#5298)
* [feat] refactored extension module
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
10 months ago
digger yu
bce9499ed3
fix some typo (#5307)
10 months ago
ver217
148469348a
Merge branch 'main' into sync/npu
11 months ago
flybird11111
46e091651b
[shardformer] hybridparallelplugin support gradients accumulation. (#5246)
* support gradients acc
fix
fix
fix
fix
fix
fix
fix
fix
fix
fix
fix
fix
fix
* fix
fix
* fix
fix
fix
11 months ago
Wenhao Chen
ef4f0ee854
[hotfix]: add pp sanity check and fix mbs arg (#5268)
* fix: fix misleading mbs arg
* feat: add pp sanity check
* fix: fix 1f1b sanity check
11 months ago
binmakeswell
c174c4fc5f
[doc] fix doc typo (#5256)
* [doc] fix annotation display
* [doc] fix llama2 doc
11 months ago
flybird11111
e830ef917d
[ci] fix shardformer tests. (#5255)
* fix ci
fix
* revert: revert p2p
* feat: add enable_metadata_cache option
* revert: enable t5 tests
---------
Co-authored-by: Wenhao Chen <cwher@outlook.com>
11 months ago
Frank Lee
9102d655ab
[hotfix] removed unused flag (#5242)
11 months ago
Hongxin Liu
d202cc28c0
[npu] change device to accelerator api (#5239)
* update accelerator
* fix timer
* fix amp
* update
* fix
* update bug
* add error raise
* fix autocast
* fix set device
* remove doc accelerator
* update doc
* update doc
* update doc
* use nullcontext
* update cpu
* update null context
* change time limit for example
* udpate
* update
* update
* update
* [npu] polish accelerator code
---------
Co-authored-by: Xuanlei Zhao <xuanlei.zhao@gmail.com>
Co-authored-by: zxl <43881818+oahzxl@users.noreply.github.com>
11 months ago
Elsa Granger
d565df3821
[pipeline] A more general _communicate in p2p (#5062)
* A more general _communicate
* feat: finish tree_flatten version p2p
* fix: update p2p api calls
---------
Co-authored-by: Wenhao Chen <cwher@outlook.com>
11 months ago
Xuanlei Zhao
dd2c28a323
[npu] use extension for op builder (#5172)
* update extension
* update cpu adam
* update is
* add doc for cpu adam
* update kernel
* update commit
* update flash
* update memory efficient
* update flash attn
* update flash attention loader
* update api
* fix
* update doc
* update example time limit
* reverse change
* fix doc
* remove useless kernel
* fix
* not use warning
* update
* update
11 months ago
digger yu
b0b53a171c
[nfc] fix typo colossalai/shardformer/ (#5133)
11 months ago
flybird11111
451e9142b8
fix flash attn (#5209)
11 months ago
flybird11111
365671be10
fix-test (#5210)
fix-test
fix-test
11 months ago
Wenhao Chen
d799a3088f
[pipeline]: add p2p fallback order and fix interleaved pp deadlock (#5214)
* fix: add fallback order option and update 1f1b
* fix: fix deadlock comm in interleaved pp
* test: modify p2p test
11 months ago
Wenhao Chen
3c0d82b19b
[pipeline]: support arbitrary batch size in forward_only mode (#5201)
* fix: remove drop last in val & test dataloader
* feat: add run_forward_only, support arbitrary bs
* chore: modify ci script
11 months ago
flybird11111
02d2328a04
support linear accumulation fusion (#5199)
support linear accumulation fusion
support linear accumulation fusion
fix
11 months ago
Wenhao Chen
4fa689fca1
[pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp (#5134)
* test: add more p2p tests
* fix: remove send_forward_recv_forward as p2p op list need to use the same group
* fix: make send and receive atomic
* feat: update P2PComm fn
* feat: add metadata cache in 1f1b
* feat: add metadata cache in interleaved pp
* feat: modify is_xx_stage fn
* revert: add _broadcast_object_list
* feat: add interleaved pp in llama policy
* feat: set NCCL_BUFFSIZE in HybridParallelPlugin
11 months ago
flybird11111
79718fae04
[shardformer] llama support DistCrossEntropy (#5176)
* fix
aaa
fix
fix
fix
* fix
* fix
* test ci
* fix ci
fix
* llama support dist-cross
fix
fix
fix
fix
fix
fix
fix
fix
* fix
* fix
* fix
fix
* test ci
* test ci
* fix
* [Colossal-Llama-2] Add finetuning Colossal-Llama-2 example (#4878)
* Add finetuning Colossal-Llama-2 example
* Add finetuning Colossal-Llama-2 example 2
* Add finetuning Colossal-Llama-2 example and support NEFTuning
* Add inference example and refine neftune
* Modify readme file
* update the imports
---------
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
* llama support dist-cross
fix
fix
fix
fix
fix
fix
fix
fix
* fix
* fix
* fix
fix
* test ci
* test ci
* fix
* fix ci
* fix ci
---------
Co-authored-by: Yuanchen <70520919+chengeharrison@users.noreply.github.com>
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
12 months ago
flybird11111
21aa5de00b
[gemini] hotfix NaN loss while using Gemini + tensor_parallel (#5150)
* fix
aaa
fix
fix
fix
* fix
* fix
* test ci
* fix ci
fix
12 months ago
flybird11111
3dbbf83f1c
fix (#5158)
fix
12 months ago
flybird11111
2a2ec49aa7
[plugin]fix 3d checkpoint load when booster boost without optimizer. (#5135)
* fix 3d checkpoint load when booster boost without optimizer
fix 3d checkpoint load when booster boost without optimizer
* test ci
* revert ci
* fix
fix
1 year ago
Xuanlei Zhao
d6df19bae7
[npu] support triangle attention for llama (#5130)
* update fused attn
* update spda
* tri attn
* update triangle
* import
* fix
* fix
1 year ago
Frank Lee
f4e72c9992
[accelerator] init the accelerator module (#5129)
* [accelerator] init the accelerator module
* polish code
* polish code
* polish code
* polish code
1 year ago
github-actions[bot]
d10ee42f68
[format] applied code formatting on changed files in pull request 5088 (#5127)
Co-authored-by: github-actions <github-actions@github.com>
1 year ago
Wenhao Chen
7172459e74
[shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088)
* [shardformer] implement policy for all GPT-J models and test
* [shardformer] support interleaved pipeline parallel for bert finetune
* [shardformer] shardformer support falcon (#4883)
* [shardformer]: fix interleaved pipeline for bert model (#5048)
* [hotfix]: disable seq parallel for gptj and falcon, and polish code (#5093)
* Add Mistral support for Shardformer (#5103)
* [shardformer] add tests to mistral (#5105)
---------
Co-authored-by: Pengtai Xu <henryxu880@gmail.com>
Co-authored-by: ppt0011 <143150326+ppt0011@users.noreply.github.com>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: eric8607242 <e0928021388@gmail.com>
1 year ago
アマデウス
126cf180bc
[hotfix] fixed memory usage of shardformer module replacement (#5122)
1 year ago
Xuanlei Zhao
68fcaa2225
remove duplicate import (#5100)
1 year ago
Xuanlei Zhao
3acbf6d496
[npu] add npu support for hybrid plugin and llama (#5090)
* llama 3d
* update
* fix autocast
1 year ago
flybird11111
aae496631c
[shardformer]fix flash attention, when mask is casual, just don't unpad it (#5084)
* fix flash attn
* fix
fix
1 year ago