GuangyaoZhang
3c7302ad0e
merge model and attention forward
5 months ago
GuangyaoZhang
8c3f524660
Remove CohereLayerNorm and use existing layernorm
6 months ago
GuangyaoZhang
c9025ebd7c
Merge branch 'command-r' of github.com:GuangyaoZhang/ColossalAI into command-r
6 months ago
GuangyaoZhang
9a290ab013
fix precommit
6 months ago
pre-commit-ci[bot]
2a7fa2e7d0
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
6 months ago
GuangyaoZhang
1016bb3257
Fix Code Factor check
6 months ago
GuangyaoZhang
94fbde6055
change command
6 months ago
GuangyaoZhang
431b7bcf8f
Copy llama to command
6 months ago
flybird11111
2ddf624a86
[shardformer] upgrade transformers to 4.39.3 (#5815)
* [shardformer]upgrade transformers for gpt2/gptj/whisper (#5807)
* [shardformer] fix modeling of gpt2 and gptj
* [shardformer] fix whisper modeling
* [misc] update requirements
---------
Co-authored-by: ver217 <lhx0217@gmail.com>
* [shardformer]upgrade transformers for mistral (#5808)
* upgrade transformers for mistral
* fix
* fix
* [shardformer]upgrade transformers for llama (#5809)
* update transformers
* fix
* fix
* fix
* [inference] upgrade transformers (#5810)
* update transformers
* fix
* fix
* fix
* fix
* [gemini] update transformers for gemini (#5814)
---------
Co-authored-by: ver217 <lhx0217@gmail.com>
6 months ago
botbw
3bcbba9262
[gemini] quick fix on possible async operation (#5803)
* [gemini] quick fix on possible async operation
* [gemini] quick fix on possible async operation
6 months ago
Haze188
d9dddf574f
[Gemini] Use async stream to prefetch and h2d data moving (#5781)
* use async stream to prefetch and h2d data moving
* Remove redundant code
6 months ago
Li Xingjian
8554585a5f
[Inference] Fix flash-attn import and add model test (#5794)
* Fix torch int32 dtype
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Fix flash-attn import
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Add generalized model test
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Remove exposed path to model
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Add default value for use_flash_attn
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Rename model test
Signed-off-by: char-1ee <xingjianli59@gmail.com>
---------
Signed-off-by: char-1ee <xingjianli59@gmail.com>
6 months ago
Guangyao Zhang
aac941ef78
[test] fix qwen2 pytest distLarge (#5797)
6 months ago
Hongxin Liu
aa125bcc91
[shardformer] fix modeling of bloom and falcon (#5796)
6 months ago
Hongxin Liu
587bbf4c6d
[test] fix chatglm test kit (#5793)
6 months ago
YeAnbang
74f4a29734
Merge pull request #5759 from hpcaitech/colossalchat_upgrade
[ColossalChat] Colossalchat upgrade
6 months ago
Runyu Lu
c0948aff97
[Inference] refactor baichuan (#5791)
* refactor baichuan
* remove unused code and add TODO for lazyinit
6 months ago
YeAnbang
84eab13078
update sft training script
6 months ago
Li Xingjian
77a219a082
Merge pull request #5771 from char-1ee/refactor/modeling
[Inference] Refactor modeling attention layer by abstracting attention backends
6 months ago
char-1ee
b303976a27
Fix test import
Signed-off-by: char-1ee <xingjianli59@gmail.com>
6 months ago
YeAnbang
2abdede1d7
fix readme
6 months ago
char-1ee
f5981e808e
Remove flash attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
6 months ago
YeAnbang
77db21610a
replace the customized dataloader setup with the built-in one
6 months ago
YeAnbang
0d7ff10ea5
replace the customized dataloader setup with the built-in one
6 months ago
char-1ee
ceba662d22
Clean up
Signed-off-by: char-1ee <xingjianli59@gmail.com>
6 months ago
char-1ee
5f398fc000
Pass inference model shard configs for module init
Signed-off-by: char-1ee <xingjianli59@gmail.com>
6 months ago
char-1ee
eec77e5702
Fix tests and naming
Signed-off-by: char-1ee <xingjianli59@gmail.com>
6 months ago
char-1ee
04386d9eff
Refactor modeling by adding attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
6 months ago
YeAnbang
790e1362a6
merge
6 months ago
YeAnbang
ac1520cb8f
remove baichuan from template test due to transformer version conflict
6 months ago
YeAnbang
e16ccc272a
update ci
6 months ago
YeAnbang
45195ac53d
remove local data path
6 months ago
YeAnbang
bf57b13dda
remove models that require huggingface auth from ci
6 months ago
YeAnbang
0bbac158ed
fix datasets version
6 months ago
YeAnbang
62eb28b929
remove duplicated test
6 months ago
YeAnbang
b8b5cacf38
fix transformers version
6 months ago
pre-commit-ci[bot]
1b880ce095
[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
6 months ago
YeAnbang
b1031f7244
fix ci
6 months ago
YeAnbang
7ae87b3159
fix training script
6 months ago
YeAnbang
0b4a33548c
update ci tests; most ci test cases passed, tp failed in generation for ppo, sp is buggy
6 months ago
YeAnbang
7e65b71815
run pre-commit
6 months ago
YeAnbang
929e1e3da4
upgrade ppo dpo rm script
6 months ago
YeAnbang
7a7e86987d
upgrade colossal-chat: support tp_group > 1, add sp for sft
6 months ago
Hongxin Liu
73e88a5553
[shardformer] fix import (#5788)
6 months ago
Hongxin Liu
5ead00ffc5
[misc] update requirements (#5787)
6 months ago
flybird11111
a1e39f4c0d
[install] fix setup (#5786)
* fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
6 months ago
Hongxin Liu
b9d646fe9e
[misc] fix dist logger (#5782)
6 months ago
Charles Coulombe
c46e09715c
Allow building CUDA extension without a device. (#5535)
Added FORCE_CUDA environment variable support to enable building extensions where a GPU device is not present but the CUDA libraries are.
6 months ago
botbw
3f7e3131d9
[gemini] optimize reduce scatter d2h copy (#5760)
* [gemini] optimize reduce scatter d2h copy
* [fix] fix missing reduce variable
* [refactor] remove legacy async reduce scatter code
* [gemini] missing sync
* Revert "[refactor] remove legacy async reduce scatter code"
This reverts commit 58ad76d466.
* [gemini] further optimize with async all reduce
* [fix] pass flag from manager to chunk
6 months ago
duanjunwen
10a19e22c6
[hotfix] fix testcase in test_fx/test_tracer (#5779)
* [fix] branch for fix testcase;
* [fix] fix test_analyzer & test_auto_parallel;
* [fix] remove local change about moe;
* [fix] rm local change moe;
* [fix] fix test_deepfm_model & test_dlrf_model;
* [fix] fix test_hf_albert & test_hf_gpt;
6 months ago