Hongxin Liu
eaea88cf9e
[release] update version ( #5864 )
2024-06-28 10:49:55 +08:00
Runyu Lu
3c7cda0c9a
[Inference] Lazy Init Support ( #5785 )
* lazy init support
* lazy init llama support
* lazy init support for baichuan
* align rpc
* add note for baichuan
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-06-27 18:02:15 +08:00
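Context for the lazy-init commits above: lazy initialization builds the model on meta tensors, so no real weight storage is allocated until materialization, which is what makes large Llama/Baichuan checkpoints cheap to construct on every rank. A minimal sketch of the pattern, assuming ColossalAI's `colossalai.lazy.LazyInitContext` API and a default `LlamaConfig` purely for illustration:

```python
from colossalai.lazy import LazyInitContext
from transformers import LlamaConfig, LlamaForCausalLM

# Parameters created inside the context record shape and dtype only;
# no real storage is allocated yet.
with LazyInitContext():
    model = LlamaForCausalLM(LlamaConfig())

# Materialize weights only once they are actually needed, e.g. after
# the model has been sharded across ranks.
LazyInitContext.materialize(model)
```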
Guangyao Zhang
d9d5e7ea1f
[shardformer] Support the T5ForTokenClassification model ( #5816 )
* t5 token, pytest still failing
* Resolve T5 Pytest Failure
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix typos
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-06-27 16:40:38 +08:00
Hongxin Liu
5dfbcd7746
[zero] use bucket during allgather ( #5860 )
* [zero] use bucket during allgather
* [zero] rename api
2024-06-27 16:34:44 +08:00
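For intuition on the bucketed all-gather above: launching one collective per parameter spends kernel-launch and NCCL overhead on small messages, so the shards are packed into a flat bucket and gathered in one call. A hedged sketch of the general technique, not ColossalAI's actual implementation (`bucketed_all_gather` is an illustrative name):

```python
import torch
import torch.distributed as dist

def bucketed_all_gather(shards):
    """Gather many small shards with one collective instead of one per tensor."""
    world_size = dist.get_world_size()
    # Pack all local shards into a single flat bucket.
    flat = torch.cat([s.reshape(-1) for s in shards])
    gathered = [torch.empty_like(flat) for _ in range(world_size)]
    dist.all_gather(gathered, flat)  # one launch for the whole bucket
    # Unpack each rank's bucket back into per-shard views.
    sizes = [s.numel() for s in shards]
    return [bucket.split(sizes) for bucket in gathered]
```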
botbw
8e718a1421
[gemini] fixes for benchmarking ( #5847 )
* [gemini] fix missing return
* [gemini] fix missing arg pass
* [gemini] use gather tensor instead of list
* [test] enable flash attention for benchmark by default
* [test] enable flash attention for benchmark by default
---------
Co-authored-by: genghaozhe <939857490@qq.com>
2024-06-26 15:52:09 +08:00
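The "gather tensor instead of list" bullet plausibly corresponds to the difference between `torch.distributed.all_gather` (a Python list of per-rank outputs) and `all_gather_into_tensor` (one contiguous buffer); treat that mapping as an assumption. A short comparison:

```python
import torch
import torch.distributed as dist

shard = torch.randn(1024, device="cuda")
world_size = dist.get_world_size()

# List-based gather: allocates one output tensor per rank.
outs = [torch.empty_like(shard) for _ in range(world_size)]
dist.all_gather(outs, shard)

# Tensor-based gather: a single contiguous output, no Python-list bookkeeping.
out = torch.empty(world_size * shard.numel(), device="cuda")
dist.all_gather_into_tensor(out, shard)
```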
Edenzzzz
2a25a2aff7
[Feature] optimize PP overlap ( #5735 )
* update to fully overlap, still debugging
* improve interface
* fixed deadlock bug
* debug NaN loss
* (experimental) use one comm group for send_fw_recv_fw to fix NaN
* cleaned up interfaces; use one batch p2p for all
* clean up; removed the double p2p batch case
* p2p test passed
* improve overlap: send fwd before backward
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* tentatively use 2 p2p batches
* remove two p2p batches
* fix typos
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* remove pp.sh
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: root <root@notebook-c55824c0-7742-45e8-9591-c855bb77ad29-0.notebook-c55824c0-7742-45e8-9591-c855bb77ad29.colossal-ai.svc.cluster.local>
2024-06-26 14:48:02 +08:00
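On the "one batch p2p for all" bullets above: separate isend/irecv pairs between pipeline stages can deadlock or serialize depending on rank ordering, so the sends and receives are issued as one batched operation. A hedged sketch using torch.distributed primitives; the function and argument names are illustrative, not ColossalAI's interface:

```python
import torch.distributed as dist

def exchange_p2p(fwd_output, bwd_grad_buf, next_rank, prev_rank):
    # Batching send and recv into one call lets the backend schedule them
    # together, avoiding the ordering hazards of separate isend/irecv.
    ops = [
        dist.P2POp(dist.isend, fwd_output, next_rank),
        dist.P2POp(dist.irecv, bwd_grad_buf, prev_rank),
    ]
    for req in dist.batch_isend_irecv(ops):
        req.wait()
```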
binmakeswell
4ccaaaab63
[doc] add GPU cloud playground ( #5851 )
* [doc] add GPU cloud playground
* [doc] add GPU cloud playground
* [doc] add GPU cloud playground
* [doc] add GPU cloud playground
* [doc] add GPU cloud playground
* [doc] add GPU cloud playground
* [doc] add GPU cloud playground
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-06-25 11:03:16 +08:00
binmakeswell
7266f82d03
[doc] fix open sora model weight link ( #5848 )
* [doc] fix open sora model weight link
* [doc] fix open sora model weight link
2024-06-21 22:48:34 +08:00
binmakeswell
8f445729a4
[doc] opensora v1.2 news ( #5846 )
* [doc] opensora v1.2 news
* [doc] opensora v1.2 news
2024-06-21 14:20:45 +08:00
botbw
8a5c86439a
[gemini] fix missing return ( #5845 )
2024-06-21 11:38:40 +08:00
Hongxin Liu
bd3e34fef6
[release] update version ( #5833 )
2024-06-20 13:33:24 +08:00
Yuanheng Zhao
7b249c76e5
[Fix] Fix spec-dec Glide LlamaModel for compatibility with transformers ( #5837 )
* fix glide llama model
* revise
2024-06-19 15:37:53 +08:00
Guangyao Zhang
fd1dc417d8
[shardformer] Change atol in test command-r weight-check to pass pytest ( #5835 )
2024-06-19 13:59:22 +08:00
Guangyao Zhang
2014cce870
[devops] Remove building on PR when edited to avoid skip issue ( #5836 )
2024-06-19 13:58:05 +08:00
Kai Lv
0adca5b688
[launch] Support IPv4 host initialization in launch ( #5822 )
2024-06-18 19:18:29 +08:00
Guangyao Zhang
639394b0d4
Merge pull request #5818 from GuangyaoZhang/command-r
[shardformer] Support the Command-R model
2024-06-18 19:01:21 +08:00
Edenzzzz
7f9ec599be
[misc] Add dist optim to doc sidebar ( #5806 )
* add to sidebar
* fix chinese
2024-06-18 13:52:47 +08:00
GuangyaoZhang
4adbc36913
Merge branch 'command-r' of github.com:GuangyaoZhang/ColossalAI into command-r
2024-06-18 03:33:02 +00:00
GuangyaoZhang
d84d68601a
change 'xxx if xxx else None' to 'xxx or None'
2024-06-18 03:32:42 +00:00
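The two forms in the commit above are equivalent here: both expressions evaluate to the value when it is truthy and to None otherwise, so the rewrite is behavior-preserving and shorter. For example (the variable name is illustrative):

```python
mask = mask if mask else None  # before
mask = mask or None            # after: same result, no repetition
```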
pre-commit-ci[bot]
996c65077e
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-06-18 03:32:30 +00:00
GuangyaoZhang
a83a2336e8
rebase master llama change
2024-06-18 02:56:47 +00:00
GuangyaoZhang
20c0b06ff5
Merge branch 'command-r' of github.com:GuangyaoZhang/ColossalAI into command-r
2024-06-18 02:37:14 +00:00
GuangyaoZhang
363cde6957
merge model and attention forward
2024-06-18 02:32:41 +00:00
GuangyaoZhang
7a2b08646f
Remove CohereLayerNorm and use existing layernorm
2024-06-18 02:32:41 +00:00
GuangyaoZhang
fe2e74c03a
fix precommit
2024-06-18 02:31:33 +00:00
GuangyaoZhang
98da648a4a
Fix CodeFactor check
2024-06-18 02:31:33 +00:00
GuangyaoZhang
f656d61778
change command
2024-06-18 02:31:33 +00:00
GuangyaoZhang
0b81163bc0
Copy llama to command
2024-06-18 02:31:33 +00:00
Edenzzzz
8795bb2e80
Support 4d parallel + flash attention ( #5789 )
* support tp + sp + pp
* remove comments
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-06-17 17:40:47 +08:00
GuangyaoZhang
3c7302ad0e
merge model and attention forward
2024-06-17 08:50:05 +00:00
GuangyaoZhang
8c3f524660
Remove CohereLayerNorm and use existing layernorm
2024-06-14 09:14:01 +00:00
GuangyaoZhang
c9025ebd7c
Merge branch 'command-r' of github.com:GuangyaoZhang/ColossalAI into command-r
2024-06-14 08:10:31 +00:00
GuangyaoZhang
9a290ab013
fix precommit
2024-06-14 08:09:24 +00:00
pre-commit-ci[bot]
2a7fa2e7d0
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-06-14 08:05:07 +00:00
GuangyaoZhang
1016bb3257
Fix CodeFactor check
2024-06-14 08:04:29 +00:00
GuangyaoZhang
94fbde6055
change command
2024-06-14 07:55:13 +00:00
GuangyaoZhang
431b7bcf8f
Copy llama to command
2024-06-14 03:07:01 +00:00
flybird11111
2ddf624a86
[shardformer] upgrade transformers to 4.39.3 ( #5815 )
* [shardformer] upgrade transformers for gpt2/gptj/whisper (#5807 )
* [shardformer] fix modeling of gpt2 and gptj
* [shardformer] fix whisper modeling
* [misc] update requirements
---------
Co-authored-by: ver217 <lhx0217@gmail.com>
* [shardformer] upgrade transformers for mistral (#5808 )
* upgrade transformers for mistral
* fix
* fix
* [shardformer] upgrade transformers for llama (#5809 )
* update transformers
fix
* fix
* fix
* [inference] upgrade transformers (#5810 )
* update transformers
fix
* fix
* fix
* fix
* fix
* [gemini] update transformers for gemini (#5814 )
---------
Co-authored-by: ver217 <lhx0217@gmail.com>
2024-06-14 10:59:33 +08:00
botbw
3bcbba9262
[gemini] quick fix on possible async operation ( #5803 )
* [gemini] quick fix on possible async operation
* [gemini] quick fix on possible async operation
2024-06-13 10:35:17 +08:00
Haze188
d9dddf574f
[Gemini] Use async stream to prefetch and h2d data moving ( #5781 )
* use async stream to prefetch and h2d data moving
* Remove redundant code
2024-06-12 15:48:52 +08:00
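For context on the prefetch commit above: issuing host-to-device copies on a dedicated CUDA stream lets them overlap with compute on the default stream, and consumers must synchronize before reading the result. A minimal sketch of the pattern, not Gemini's actual code:

```python
import torch

prefetch_stream = torch.cuda.Stream()

def prefetch_h2d(cpu_tensor):
    # Pinned host memory is required for a truly asynchronous H2D copy.
    pinned = cpu_tensor.pin_memory()
    with torch.cuda.stream(prefetch_stream):
        return pinned.to("cuda", non_blocking=True)

gpu_tensor = prefetch_h2d(torch.randn(4096, 4096))
# Work on the default stream must wait until the prefetch has landed.
torch.cuda.current_stream().wait_stream(prefetch_stream)
```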
Li Xingjian
8554585a5f
[Inference] Fix flash-attn import and add model test ( #5794 )
* Fix torch int32 dtype
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Fix flash-attn import
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Add generalized model test
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Remove exposed path to model
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Add default value for use_flash_attn
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Rename model test
Signed-off-by: char-1ee <xingjianli59@gmail.com>
---------
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-12 14:13:50 +08:00
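A common shape for the flash-attn import fix and the `use_flash_attn` default described above is a guarded import with a capability flag; a sketch assuming the flash_attn package, with the flag and function names chosen for illustration:

```python
try:
    from flash_attn import flash_attn_func  # optional dependency
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False

def attention(q, k, v, use_flash_attn=False):
    # Fall back gracefully when flash-attn is unavailable.
    if use_flash_attn and HAS_FLASH_ATTN:
        return flash_attn_func(q, k, v)
    import torch.nn.functional as F
    return F.scaled_dot_product_attention(q, k, v)
```

Note that `flash_attn_func` expects (batch, seqlen, nheads, headdim) while `scaled_dot_product_attention` expects (batch, nheads, seqlen, headdim), so a real fallback would also transpose.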
Guangyao Zhang
aac941ef78
[test] fix qwen2 pytest distLarge ( #5797 )
2024-06-12 12:13:51 +08:00
Hongxin Liu
aa125bcc91
[shardformer] fix modeling of bloom and falcon ( #5796 )
2024-06-11 17:43:50 +08:00
Hongxin Liu
587bbf4c6d
[test] fix chatglm test kit ( #5793 )
2024-06-11 16:54:31 +08:00
YeAnbang
74f4a29734
Merge pull request #5759 from hpcaitech/colossalchat_upgrade
[ColossalChat] Colossalchat upgrade
2024-06-11 12:49:53 +08:00
Runyu Lu
c0948aff97
[Inference] refactor baichuan ( #5791 )
* refactor baichuan
* remove unused code and add TODO for lazyinit
2024-06-11 10:52:01 +08:00
YeAnbang
84eab13078
update sft training script
2024-06-11 02:44:20 +00:00
Li Xingjian
77a219a082
Merge pull request #5771 from char-1ee/refactor/modeling
[Inference] Refactor modeling attention layer by abstracting attention backends
2024-06-10 11:52:22 +08:00
char-1ee
b303976a27
Fix test import
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-10 02:03:30 +00:00
YeAnbang
2abdede1d7
fix readme
2024-06-10 01:08:42 +00:00