Commit Graph

3504 Commits (9688e19b322510bd64956a75bd6227850817cc69)

Author SHA1 Message Date
Hongxin Liu aa125bcc91
[shardformer] fix modeling of bloom and falcon (#5796) 2024-06-11 17:43:50 +08:00
Hongxin Liu 587bbf4c6d
[test] fix chatglm test kit (#5793) 2024-06-11 16:54:31 +08:00
YeAnbang 74f4a29734
Merge pull request #5759 from hpcaitech/colossalchat_upgrade
[ColossalChat] Colossalchat upgrade
2024-06-11 12:49:53 +08:00
Runyu Lu c0948aff97
[Inference]refactor baichuan (#5791)
* refactor baichuan

* remove unused code and add TODO for lazyinit
2024-06-11 10:52:01 +08:00
YeAnbang 84eab13078 update sft training script 2024-06-11 02:44:20 +00:00
Li Xingjian 77a219a082
Merge pull request #5771 from char-1ee/refactor/modeling
[Inference] Refactor modeling attention layer by abstracting attention backends
2024-06-10 11:52:22 +08:00
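The merge above abstracts the modeling code's attention computation behind swappable backends. A minimal sketch of that pattern follows; the class and function names (AttentionBackend, get_attention_backend, ...) are assumptions for illustration, not the actual ColossalAI API.

```python
# Illustrative attention-backend abstraction; names are hypothetical and
# not taken from the ColossalAI codebase.
from abc import ABC, abstractmethod

import torch
import torch.nn.functional as F


class AttentionBackend(ABC):
    """Common interface so modeling code never calls a kernel directly."""

    @abstractmethod
    def attn(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        ...


class TorchSDPABackend(AttentionBackend):
    def attn(self, q, k, v):
        # Path built on PyTorch's fused scaled-dot-product attention.
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)


class NaiveBackend(AttentionBackend):
    def attn(self, q, k, v):
        # Reference implementation, useful for debugging on CPU.
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        mask = torch.triu(
            torch.ones(scores.shape[-2:], dtype=torch.bool, device=scores.device),
            diagonal=1,
        )
        scores = scores.masked_fill(mask, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v


def get_attention_backend(use_cuda: bool) -> AttentionBackend:
    # Model init picks a backend once; the rest of the code sees only the interface.
    return TorchSDPABackend() if use_cuda else NaiveBackend()
```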
char-1ee b303976a27 Fix test import
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-10 02:03:30 +00:00
YeAnbang 2abdede1d7 fix readme 2024-06-10 01:08:42 +00:00
char-1ee f5981e808e Remove flash attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 10:02:19 +00:00
YeAnbang 77db21610a replace the customized dataloader setup with the built-in one 2024-06-07 09:44:25 +00:00
YeAnbang 0d7ff10ea5 replace the customized dataloader setup with the built-in one 2024-06-07 09:43:42 +00:00
char-1ee ceba662d22 Clean up
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 09:09:29 +00:00
char-1ee 5f398fc000 Pass inference model shard configs for module init
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 08:33:52 +00:00
char-1ee eec77e5702 Fix tests and naming
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 08:33:47 +00:00
char-1ee 04386d9eff Refactor modeling by adding attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
2024-06-07 08:33:47 +00:00
YeAnbang 790e1362a6 merge 2024-06-07 07:01:32 +00:00
YeAnbang ac1520cb8f remove baichuan from template test due to transformer version conflict 2024-06-07 07:01:32 +00:00
YeAnbang e16ccc272a update ci 2024-06-07 07:01:32 +00:00
YeAnbang 45195ac53d remove local data path 2024-06-07 07:01:31 +00:00
YeAnbang bf57b13dda remove models that require huggingface auth from ci 2024-06-07 07:01:31 +00:00
YeAnbang 0bbac158ed fix datasets version 2024-06-07 07:01:31 +00:00
YeAnbang 62eb28b929 remove duplicated test 2024-06-07 07:01:31 +00:00
YeAnbang b8b5cacf38 fix transformers version 2024-06-07 07:01:31 +00:00
pre-commit-ci[bot] 1b880ce095 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-06-07 07:01:31 +00:00
YeAnbang b1031f7244 fix ci 2024-06-07 07:01:31 +00:00
YeAnbang 7ae87b3159 fix training script 2024-06-07 07:01:31 +00:00
YeAnbang 0b4a33548c update ci tests, most ci test cases passed, tp failed in generation for ppo, sp is buggy 2024-06-07 07:01:31 +00:00
YeAnbang 7e65b71815 run pre-commit 2024-06-07 07:01:30 +00:00
YeAnbang 929e1e3da4 upgrade ppo dpo rm script 2024-06-07 07:01:30 +00:00
YeAnbang 7a7e86987d upgrade colossal-chat support tp_group>1, add sp for sft 2024-06-07 07:01:30 +00:00
Hongxin Liu 73e88a5553
[shardformer] fix import (#5788) 2024-06-06 19:09:50 +08:00
Hongxin Liu 5ead00ffc5
[misc] update requirements (#5787) 2024-06-06 15:55:34 +08:00
flybird11111 a1e39f4c0d
[install]fix setup (#5786)
* fix

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-06-06 11:47:48 +08:00
Hongxin Liu b9d646fe9e
[misc] fix dist logger (#5782) 2024-06-05 15:04:22 +08:00
Charles Coulombe c46e09715c
Allow building cuda extension without a device. (#5535)
Added FORCE_CUDA environment variable support to enable building extensions where a GPU device is not present but CUDA libraries are.
2024-06-05 14:26:30 +08:00
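The FORCE_CUDA convention mentioned in the commit above is typically handled in setup.py roughly as follows. This is a minimal sketch of the common torch.utils.cpp_extension pattern, not the repository's actual setup code; the extension name and source paths are made up.

```python
# Sketch of the FORCE_CUDA convention for building CUDA extensions on machines
# without a visible GPU (e.g. docker build hosts). Extension name and file
# paths below are hypothetical.
import os

import torch
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension

# Build CUDA kernels if a GPU is visible, or if the user forces it because the
# CUDA toolkit/libraries are installed even though no device is present.
build_cuda = torch.cuda.is_available() or os.environ.get("FORCE_CUDA", "0") == "1"

ext_modules = [
    CUDAExtension("my_ext._C", ["csrc/kernel.cu"])
    if build_cuda
    else CppExtension("my_ext._C", ["csrc/fallback.cpp"])
]

setup(
    name="my_ext",
    ext_modules=ext_modules,
    cmdclass={"build_ext": BuildExtension},
)
```

With this pattern, running `FORCE_CUDA=1 pip install .` builds the CUDA kernels even when no GPU is mounted at build time.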
botbw 3f7e3131d9
[gemini] optimize reduce scatter d2h copy (#5760)
* [gemini] optimize reduce scatter d2h copy

* [fix] fix missing reduce variable

* [refactor] remove legacy async reduce scatter code

* [gemini] missing sync

* Revert "[refactor] remove legacy async reduce scatter code"

This reverts commit 58ad76d466.

* [gemini] further optimize with async all reduce

* [fix] pass flag from manager to chunk
2024-06-05 14:23:13 +08:00
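The commit above uses a standard overlap technique: launch the collective asynchronously and hide the device-to-host (D2H) copy of the previously reduced chunk behind it. A hedged sketch of the general pattern in plain PyTorch is below; it assumes an initialized process group, and the function name and whole-tensor chunks are simplifications (the real Gemini code works on chunk shards via reduce-scatter).

```python
# Overlap an async collective with a pinned-memory D2H copy of the previously
# reduced chunk. Assumes torch.distributed is already initialized.
import torch
import torch.distributed as dist


def reduce_and_offload(chunks: list[torch.Tensor]) -> list[torch.Tensor]:
    host_bufs = [
        torch.empty(c.shape, dtype=c.dtype, device="cpu", pin_memory=True)
        for c in chunks
    ]
    pending = None  # (index, work handle) of the in-flight collective

    for i, chunk in enumerate(chunks):
        # Kick off this chunk's reduction without blocking.
        work = dist.all_reduce(chunk, op=dist.ReduceOp.SUM, async_op=True)
        if pending is not None:
            j, prev_work = pending
            prev_work.wait()  # chunk j is now fully reduced
            # Pinned memory lets this D2H copy overlap with chunk i's reduction.
            host_bufs[j].copy_(chunks[j], non_blocking=True)
        pending = (i, work)

    if pending is not None:
        j, prev_work = pending
        prev_work.wait()
        host_bufs[j].copy_(chunks[j], non_blocking=True)
    torch.cuda.synchronize()  # host buffers are safe to read after this point
    return host_bufs
```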
duanjunwen 10a19e22c6
[hotfix] fix testcase in test_fx/test_tracer (#5779)
* [fix] branch for fix testcase;

* [fix] fix test_analyzer & test_auto_parallel;

* [fix] remove local change about moe;

* [fix] rm local change moe;

* [fix] fix test_deepfm_model & test_dlrf_model;

* [fix] fix test_hf_albert & test_hf_gpt;
2024-06-05 11:29:32 +08:00
botbw 80c3c8789b
[Test/CI] remove test cases to reduce CI duration (#5753)
* [test] smaller gpt2 test case

* [test] reduce test cases: tests/test_zero/test_gemini/test_zeroddp_state_dict.py

* [test] reduce test cases: tests/test_zero/test_gemini/test_grad_accum.py

* [test] reduce test cases tests/test_zero/test_gemini/test_optim.py

* Revert "[test] smaller gpt2 test case"

Some tests might depend on the size of model (num of chunks)

This reverts commit df705a5210.

* [test] reduce test cases: tests/test_checkpoint_io/test_gemini_checkpoint_io.py

* [CI] smaller test model for the two modified cases

* [CI] hardcode gpt model for tests/test_zero/test_gemini/test_search.py since we need a fixed answer there
2024-06-05 11:29:04 +08:00
Edenzzzz 79f7a7b211
[misc] Accelerate CI for zero and dist optim (#5758)
* remove fp16 from lamb

* remove d2h copy in checking states

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-06-05 11:25:19 +08:00
flybird11111 50b4c8e8cf
[hotfix] fix llama flash attention forward (#5777) 2024-06-05 10:56:47 +08:00
yuehuayingxueluo b45000f839
[Inference]Add Streaming LLM (#5745)
* Add Streaming LLM

* add some parameters to llama_generation.py

* verify streamingllm config

* add test_streamingllm.py

* modified according to the opinions of review

* add Citation

* change _block_tables tolist
2024-06-05 10:51:19 +08:00
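For context, the StreamingLLM technique added in the commit above keeps a small prefix of "attention sink" tokens plus a sliding window of recent tokens in the KV cache and evicts everything in between. A minimal, framework-agnostic sketch of that eviction rule follows; it is not the ColossalAI implementation, and the parameter names and flat cache layout are assumptions.

```python
# Illustrative KV-cache eviction rule for StreamingLLM-style decoding: always
# keep the first `start_token_size` tokens (attention sinks) and the most
# recent `window_size` tokens, evicting the middle of the cache.
import torch


def evict_kv_cache(
    k_cache: torch.Tensor,
    v_cache: torch.Tensor,
    start_token_size: int = 4,
    window_size: int = 512,
) -> tuple[torch.Tensor, torch.Tensor]:
    seq_len = k_cache.size(-2)  # assumed cache layout: [..., seq_len, head_dim]
    if seq_len <= start_token_size + window_size:
        return k_cache, v_cache  # nothing to evict yet

    keep = torch.cat([
        torch.arange(start_token_size),                # attention-sink tokens
        torch.arange(seq_len - window_size, seq_len),  # most recent tokens
    ]).to(k_cache.device)
    return k_cache.index_select(-2, keep), v_cache.index_select(-2, keep)
```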
Hongxin Liu ee6fd38373
[devops] fix docker ci (#5780) 2024-06-04 17:47:39 +08:00
Hongxin Liu 32f4187806
[misc] update dockerfile (#5776)
* [misc] update dockerfile

* [misc] update dockerfile
2024-06-04 16:15:41 +08:00
Haze188 e22b82755d
[CI/tests] simplify some test case to reduce testing time (#5755)
* [ci/tests] simplify some test case to reduce testing time

* [ci/tests] continue to remove test case to reduce ci time cost

* restore some test config

* [ci/tests] continue to reduce ci time cost
2024-06-04 13:57:54 +08:00
Yuanheng Zhao 406443200f
[Hotfix] Add missing init file in inference.executor (#5774) 2024-06-03 22:29:39 +08:00
duanjunwen 1b76564e16
[test] Fix/fix testcase (#5770)
* [fix] branch for fix testcase;

* [fix] fix test_analyzer & test_auto_parallel;

* [fix] remove local change about moe;

* [fix] rm local change moe;
2024-06-03 15:26:01 +08:00
flybird11111 3f2be80530
fix (#5765) 2024-06-03 11:25:18 +08:00
Hongxin Liu 68359ed1e1
[release] update version (#5752)
* [release] update version

* [devops] update compatibility test

* [devops] update compatibility test

* [devops] update compatibility test

* [devops] update compatibility test

* [test] fix ddp plugin test

* [test] fix gptj and rpc test

* [devops] fix cuda ext compatibility

* [inference] fix flash decoding test

* [inference] fix flash decoding test
2024-05-31 19:40:26 +08:00
Yuanheng Zhao 677cbfacf8
[Fix/Example] Fix Llama Inference Loading Data Type (#5763)
* [fix/example] fix llama inference loading dtype

* revise loading dtype of benchmark llama3
2024-05-30 13:48:46 +08:00
botbw 023ea13cb5
Merge pull request #5749 from hpcaitech/prefetch
[Gemini] Prefetch next chunk before each op
2024-05-29 15:35:54 +08:00