Commit Graph

3531 Commits (fe24789eb178236ad77112824a7d6081ed50dabc)

Author SHA1 Message Date
pre-commit-ci[bot] 1b880ce095 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2024-06-07 07:01:31 +00:00
YeAnbang b1031f7244 fix ci 2024-06-07 07:01:31 +00:00
YeAnbang 7ae87b3159 fix training script 2024-06-07 07:01:31 +00:00
YeAnbang 0b4a33548c update ci tests, most ci test cases passed, tp failed in generation for ppo, sp is buggy 2024-06-07 07:01:31 +00:00
YeAnbang 7e65b71815 run pre-commit 2024-06-07 07:01:30 +00:00
YeAnbang 929e1e3da4 upgrade ppo dpo rm script 2024-06-07 07:01:30 +00:00
YeAnbang 7a7e86987d upgrade colossal-chat support tp_group>1, add sp for sft 2024-06-07 07:01:30 +00:00
Hongxin Liu 73e88a5553
[shardformer] fix import (#5788) 2024-06-06 19:09:50 +08:00
Hongxin Liu 5ead00ffc5
[misc] update requirements (#5787) 2024-06-06 15:55:34 +08:00
flybird11111 a1e39f4c0d
[install]fix setup (#5786)
* fix

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-06-06 11:47:48 +08:00
Hongxin Liu b9d646fe9e
[misc] fix dist logger (#5782) 2024-06-05 15:04:22 +08:00
Charles Coulombe c46e09715c
Allow building cuda extension without a device. (#5535)
Added FORCE_CUDA environment variable support to enable building extensions on hosts where no GPU device is present but the CUDA libraries are.
2024-06-05 14:26:30 +08:00
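The FORCE_CUDA behavior described in that commit follows a common pattern in PyTorch-style setup scripts. The sketch below illustrates the pattern only; the function name is hypothetical and this is not ColossalAI's actual setup code:

```python
import os


def should_build_cuda_ext(device_visible: bool) -> bool:
    # Common FORCE_CUDA pattern: compile CUDA extensions when a GPU is
    # visible, or when the user forces it, e.g. on a GPU-less CI host
    # that still has the CUDA toolkit libraries installed.
    return device_visible or os.environ.get("FORCE_CUDA", "0") == "1"
```

Typical usage on a build host without a GPU would then be something like `FORCE_CUDA=1 pip install .`.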
botbw 3f7e3131d9
[gemini] optimize reduce scatter d2h copy (#5760)
* [gemini] optimize reduce scatter d2h copy

* [fix] fix missing reduce variable

* [refactor] remove legacy async reduce scatter code

* [gemini] missing sync

* Revert "[refactor] remove legacy async reduce scatter code"

This reverts commit 58ad76d466.

* [gemini] further optimize with async all reduce

* [fix] pass flag from manager to chunk
2024-06-05 14:23:13 +08:00
duanjunwen 10a19e22c6
[hotfix] fix testcase in test_fx/test_tracer (#5779)
* [fix] branch for fix testcase;

* [fix] fix test_analyzer & test_auto_parallel;

* [fix] remove local change about moe;

* [fix] rm local change moe;

* [fix] fix test_deepfm_model & test_dlrf_model;

* [fix] fix test_hf_albert & test_hf_gpt;
2024-06-05 11:29:32 +08:00
botbw 80c3c8789b
[Test/CI] remove test cases to reduce CI duration (#5753)
* [test] smaller gpt2 test case

* [test] reduce test cases: tests/test_zero/test_gemini/test_zeroddp_state_dict.py

* [test] reduce test cases: tests/test_zero/test_gemini/test_grad_accum.py

* [test] reduce test cases tests/test_zero/test_gemini/test_optim.py

* Revert "[test] smaller gpt2 test case"

Some tests might depend on the size of model (num of chunks)

This reverts commit df705a5210.

* [test] reduce test cases: tests/test_checkpoint_io/test_gemini_checkpoint_io.py

* [CI] smaller test model for the two modified cases

* [CI] hardcode gpt model for tests/test_zero/test_gemini/test_search.py since we need a fixed answer there
2024-06-05 11:29:04 +08:00
Edenzzzz 79f7a7b211
[misc] Accelerate CI for zero and dist optim (#5758)
* remove fp16 from lamb

* remove d2h copy in checking states

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-06-05 11:25:19 +08:00
flybird11111 50b4c8e8cf
[hotfix] fix llama flash attention forward (#5777) 2024-06-05 10:56:47 +08:00
yuehuayingxueluo b45000f839
[Inference]Add Streaming LLM (#5745)
* Add Streaming LLM

* add some parameters to llama_generation.py

* verify streamingllm config

* add test_streamingllm.py

* modified according to review comments

* add Citation

* change _block_tables tolist
2024-06-05 10:51:19 +08:00
Hongxin Liu ee6fd38373
[devops] fix docker ci (#5780) 2024-06-04 17:47:39 +08:00
Hongxin Liu 32f4187806
[misc] update dockerfile (#5776)
* [misc] update dockerfile

* [misc] update dockerfile
2024-06-04 16:15:41 +08:00
Haze188 e22b82755d
[CI/tests] simplify some test case to reduce testing time (#5755)
* [ci/tests] simplify some test case to reduce testing time

* [ci/tests] continue to remove test case to reduce ci time cost

* restore some test config

* [ci/tests] continue to reduce ci time cost
2024-06-04 13:57:54 +08:00
Yuanheng Zhao 406443200f
[Hotfix] Add missing init file in inference.executor (#5774) 2024-06-03 22:29:39 +08:00
duanjunwen 1b76564e16
[test] Fix/fix testcase (#5770)
* [fix] branch for fix testcase;

* [fix] fix test_analyzer & test_auto_parallel;

* [fix] remove local change about moe;

* [fix] rm local change moe;
2024-06-03 15:26:01 +08:00
flybird11111 3f2be80530
fix (#5765) 2024-06-03 11:25:18 +08:00
Hongxin Liu 68359ed1e1
[release] update version (#5752)
* [release] update version

* [devops] update compatibility test

* [devops] update compatibility test

* [devops] update compatibility test

* [devops] update compatibility test

* [test] fix ddp plugin test

* [test] fix gptj and rpc test

* [devops] fix cuda ext compatibility

* [inference] fix flash decoding test

* [inference] fix flash decoding test
2024-05-31 19:40:26 +08:00
Yuanheng Zhao 677cbfacf8
[Fix/Example] Fix Llama Inference Loading Data Type (#5763)
* [fix/example] fix llama inference loading dtype

* revise loading dtype of benchmark llama3
2024-05-30 13:48:46 +08:00
botbw 023ea13cb5
Merge pull request #5749 from hpcaitech/prefetch
[Gemini] Prefetch next chunk before each op
2024-05-29 15:35:54 +08:00
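The idea merged above (fetching the next chunk to device before the op that needs it runs) can be illustrated with a generic double-buffering sketch. The function and callback names here are hypothetical, and the sketch assumes each chunk appears once in the compute list; it is not Gemini's actual API:

```python
from collections import deque


def run_with_prefetch(compute_list, fetch, run, prefetch_num=1):
    """Overlap chunk fetches with computation (double-buffering sketch).

    compute_list: ordered chunk ids, one per op, each appearing once.
    fetch(chunk): start moving a chunk to device.
    run(chunk):   execute the op that consumes the chunk.
    """
    pending = deque()  # chunks whose fetch has been issued
    for i, chunk in enumerate(compute_list):
        if not pending:
            # cold start: request the current chunk itself
            fetch(chunk)
            pending.append(chunk)
        # issue up to prefetch_num fetches for upcoming chunks
        for nxt in compute_list[i + 1 : i + 1 + prefetch_num]:
            if nxt not in pending:
                fetch(nxt)
                pending.append(nxt)
        # current chunk is at the front of the queue; consume it
        assert pending.popleft() == chunk
        run(chunk)
```

With `prefetch_num=1` the fetch of chunk *n+1* is issued before the op on chunk *n* runs, so the transfer can overlap that op's computation.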
hxwang 154720ba6e [chore] refactor profiler utils 2024-05-28 12:41:42 +00:00
hxwang 8547562884 [chore] remove unnecessary assert since compute list might not be recorded 2024-05-28 05:16:02 +00:00
hxwang e5e3320948 [bug] continue fix 2024-05-28 02:41:23 +00:00
hxwang 936dd96dbb [bug] workaround for idx fix 2024-05-28 02:33:12 +00:00
botbw e0dde8fda5
Merge pull request #5754 from Hz188/prefetch
[Gemini]Prefetch benchmark
2024-05-27 14:59:21 +08:00
botbw 157b4cc357
Merge branch 'prefetch' into prefetch 2024-05-27 14:58:57 +08:00
genghaozhe 87665d7922 correct argument help message 2024-05-27 06:03:53 +00:00
Haze188 4d097def96
[Gemini] add some code for reduce-scatter overlap, chunk prefetch in llama benchmark. (#5751)
* [bugs] fix args.profile=False DummyProfiler error

* add args.prefetch_num for benchmark
2024-05-25 23:00:13 +08:00
genghaozhe b9269d962d add args.prefetch_num for benchmark 2024-05-25 14:55:50 +00:00
genghaozhe fba04e857b [bugs] fix args.profile=False DummyProfiler error 2024-05-25 14:55:09 +00:00
Yuanheng Zhao b96c6390f4
[inference] Fix running time of test_continuous_batching (#5750) 2024-05-24 19:34:15 +08:00
Edenzzzz 5f8c0a0ac3
[Feature] auto-cast optimizers to distributed version (#5746)
* auto-cast optimizers to distributed

* fix galore casting

* logger

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-05-24 17:24:16 +08:00
hxwang ca674549e0 [chore] remove unnecessary test & changes 2024-05-24 06:09:36 +00:00
hxwang ff507b755e Merge branch 'main' of github.com:hpcaitech/ColossalAI into prefetch 2024-05-24 04:05:07 +00:00
hxwang 63c057cd8e [example] add profile util for llama 2024-05-24 03:59:36 +00:00
botbw 2fc85abf43
[gemini] async grad chunk reduce (all-reduce&reduce-scatter) (#5713)
* [gemini] async grad chunk reduce (all-reduce&reduce-scatter)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [gemini] add test

* [gemini] rename func

* [gemini] update llama benchmark

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [gemini] use tensor counter

* [gemini] change default config in GeminiPlugin and GeminiDDP

* [chore] typo

* [gemini] fix sync issue & add test cases

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-05-24 10:31:16 +08:00
Jianghai 85946d4236
[Inference]Fix readme and example for API server (#5742)
* fix chatapi readme and example

* updating doc

* add an api and change the doc

* remove

* add credits and del 'API' heading

* readme

* readme
2024-05-24 10:03:05 +08:00
hxwang 15d21a077a Merge remote-tracking branch 'origin/main' into prefetch 2024-05-23 15:49:33 +00:00
binmakeswell 4647ec28c8
[inference] release (#5747)
* [inference] release

* [inference] release

* [inference] release

* [inference] release

* [inference] release

* [inference] release

* [inference] release
2024-05-23 17:44:06 +08:00
Yuanheng Zhao df6747603f
[Colossal-Inference] (v0.1.0) Merge pull request #5739 from hpcaitech/feature/colossal-infer
[Inference] Merge feature/colossal-infer
2024-05-22 14:31:09 +08:00
Yuanheng Zhao 498f42c45b
[NFC] fix requirements (#5744) 2024-05-22 12:08:49 +08:00
Yuanheng Zhao bd38fe6b91
[NFC] Fix code factors on inference triton kernels (#5743) 2024-05-21 22:12:15 +08:00
Yuanheng Zhao c2c8c9cf17
[ci] Temporary fix for build on pr (#5741)
* temporary fix for CI

* timeout to 90
2024-05-21 18:20:57 +08:00