YeAnbang
82aecd6374
add SimPO
5 months ago
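The SimPO support added here follows the reference-free, length-normalized preference objective from the SimPO paper. Below is a minimal sketch of that loss, assuming per-sequence summed log-probabilities and response lengths as inputs; the function name, argument names, and default hyperparameters are illustrative, not ColossalChat's actual trainer API.

```python
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps: torch.Tensor, rejected_logps: torch.Tensor,
               chosen_len: torch.Tensor, rejected_len: torch.Tensor,
               beta: float = 2.0, gamma: float = 1.0) -> torch.Tensor:
    """Reference-free SimPO loss: length-normalized log-probability margin
    pushed past a target reward margin gamma. Sketch only; shapes and
    hyperparameter names in the actual trainer may differ."""
    chosen_reward = beta * chosen_logps / chosen_len      # (beta / |y_w|) * log p(y_w|x)
    rejected_reward = beta * rejected_logps / rejected_len  # (beta / |y_l|) * log p(y_l|x)
    return -F.logsigmoid(chosen_reward - rejected_reward - gamma).mean()
```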
YeAnbang
84eab13078
update sft training script
5 months ago
YeAnbang
2abdede1d7
fix readme
5 months ago
YeAnbang
77db21610a
replace the customized dataloader setup with the built-in one
6 months ago
YeAnbang
0d7ff10ea5
replace the customized dataloader setup with the built-in one
6 months ago
YeAnbang
790e1362a6
merge
6 months ago
YeAnbang
ac1520cb8f
remove baichuan from template test due to transformers version conflict
6 months ago
YeAnbang
e16ccc272a
update ci
6 months ago
YeAnbang
45195ac53d
remove local data path
6 months ago
YeAnbang
bf57b13dda
remove models that require huggingface auth from ci
6 months ago
YeAnbang
0bbac158ed
fix datasets version
6 months ago
YeAnbang
62eb28b929
remove duplicated test
6 months ago
YeAnbang
b8b5cacf38
fix transformers version
6 months ago
pre-commit-ci[bot]
1b880ce095
[pre-commit.ci] auto fixes from pre-commit.com hooks
...
for more information, see https://pre-commit.ci
6 months ago
YeAnbang
b1031f7244
fix ci
6 months ago
YeAnbang
7ae87b3159
fix training script
6 months ago
YeAnbang
0b4a33548c
update ci tests, most ci test cases passed, tp failed in generation for ppo, sp is buggy
6 months ago
YeAnbang
7e65b71815
run pre-commit
6 months ago
YeAnbang
929e1e3da4
upgrade ppo dpo rm script
6 months ago
YeAnbang
7a7e86987d
upgrade colossal-chat to support tp_group>1, add sp for sft
6 months ago
Hongxin Liu
73e88a5553
[shardformer] fix import ( #5788 )
6 months ago
Hongxin Liu
5ead00ffc5
[misc] update requirements ( #5787 )
6 months ago
flybird11111
a1e39f4c0d
[install]fix setup ( #5786 )
...
* fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
6 months ago
Hongxin Liu
b9d646fe9e
[misc] fix dist logger ( #5782 )
6 months ago
Charles Coulombe
c46e09715c
Allow building cuda extension without a device. ( #5535 )
...
Added FORCE_CUDA environment variable support to enable building extensions when a GPU device is not present but the CUDA libraries are.
6 months ago
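A minimal sketch of the usual FORCE_CUDA pattern this commit describes; the helper name and exact check are illustrative, not necessarily how ColossalAI's extension builder implements it.

```python
import os
import torch

def should_build_cuda_ext() -> bool:
    """Decide whether to compile the CUDA extension.

    Normally a visible GPU is required, but setting FORCE_CUDA=1 lets the
    extension build on machines that only have the CUDA toolkit installed
    (e.g. docker build hosts without a GPU). Illustrative helper only.
    """
    force_cuda = os.environ.get("FORCE_CUDA", "0") == "1"
    return torch.cuda.is_available() or force_cuda
```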
botbw
3f7e3131d9
[gemini] optimize reduce scatter d2h copy ( #5760 )
...
* [gemini] optimize reduce scatter d2h copy
* [fix] fix missing reduce variable
* [refactor] remove legacy async reduce scatter code
* [gemini] missing sync
* Revert "[refactor] remove legacy async reduce scatter code"
This reverts commit 58ad76d466
.
* [gemini] further optimize with async all reduce
* [fix] pass flag from manager to chunk
6 months ago
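The overlap idea behind the d2h-copy optimization can be sketched as follows, assuming a hypothetical helper that stages a reduced gradient chunk into pinned host memory on a side stream so the transfer runs concurrently with later kernels; Gemini's actual chunk bookkeeping and synchronization flags are not shown.

```python
import torch

def async_d2h_copy(device_tensor: torch.Tensor,
                   copy_stream: torch.cuda.Stream) -> torch.Tensor:
    """Copy a reduced gradient chunk to pinned host memory on a side stream.

    Pinned memory is what allows copy_ with non_blocking=True to be a real
    asynchronous DMA transfer. The caller must synchronize copy_stream
    before reading the returned tensor. Sketch of the overlap idea only.
    """
    host_tensor = torch.empty(device_tensor.shape, dtype=device_tensor.dtype,
                              device="cpu", pin_memory=True)
    with torch.cuda.stream(copy_stream):
        host_tensor.copy_(device_tensor, non_blocking=True)
    return host_tensor
```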
duanjunwen
10a19e22c6
[hotfix] fix testcase in test_fx/test_tracer ( #5779 )
...
* [fix] branch for fix testcase;
* [fix] fix test_analyzer & test_auto_parallel;
* [fix] remove local change about moe;
* [fix] rm local change moe;
* [fix] fix test_deepfm_model & test_dlrf_model;
* [fix] fix test_hf_albert & test_hf_gpt;
6 months ago
botbw
80c3c8789b
[Test/CI] remove test cases to reduce CI duration ( #5753 )
...
* [test] smaller gpt2 test case
* [test] reduce test cases: tests/test_zero/test_gemini/test_zeroddp_state_dict.py
* [test] reduce test cases: tests/test_zero/test_gemini/test_grad_accum.py
* [test] reduce test cases tests/test_zero/test_gemini/test_optim.py
* Revert "[test] smaller gpt2 test case"
Some tests might depend on the size of model (num of chunks)
This reverts commit df705a5210
.
* [test] reduce test cases: tests/test_checkpoint_io/test_gemini_checkpoint_io.py
* [CI] smaller test model for the two modified cases
* [CI] hardcode gpt model for tests/test_zero/test_gemini/test_search.py since we need a fixed answer there
6 months ago
Edenzzzz
79f7a7b211
[misc] Accelerate CI for zero and dist optim ( #5758 )
...
* remove fp16 from lamb
* remove d2h copy in checking states
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
6 months ago
flybird11111
50b4c8e8cf
[hotfix] fix llama flash attention forward ( #5777 )
6 months ago
yuehuayingxueluo
b45000f839
[Inference]Add Streaming LLM ( #5745 )
...
* Add Streaming LLM
* add some parameters to llama_generation.py
* verify streamingllm config
* add test_streamingllm.py
* modified according to review comments
* add Citation
* change _block_tables tolist
6 months ago
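StreamingLLM-style inference keeps a few initial "attention sink" tokens plus a sliding window of recent tokens in the KV cache so generation can run on arbitrarily long streams. A sketch of that eviction policy follows, using a hypothetical helper and parameter names; the actual streamingllm config knobs and block-table handling added in this PR may differ.

```python
from typing import List

def streamingllm_keep_indices(seq_len: int, num_sink_tokens: int = 4,
                              window_size: int = 1024) -> List[int]:
    """Return the KV-cache positions retained by a StreamingLLM-style policy:
    the first few 'attention sink' tokens plus the most recent window.
    Hypothetical helper; real implementations evict at block granularity."""
    if seq_len <= num_sink_tokens + window_size:
        return list(range(seq_len))
    sinks = list(range(num_sink_tokens))
    recent = list(range(seq_len - window_size, seq_len))
    return sinks + recent
```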
Hongxin Liu
ee6fd38373
[devops] fix docker ci ( #5780 )
6 months ago
Hongxin Liu
32f4187806
[misc] update dockerfile ( #5776 )
...
* [misc] update dockerfile
* [misc] update dockerfile
6 months ago
Haze188
e22b82755d
[CI/tests] simplify some test cases to reduce testing time ( #5755 )
...
* [ci/tests] simplify some test cases to reduce testing time
* [ci/tests] continue to remove test case to reduce ci time cost
* restore some test config
* [ci/tests] continue to reduce ci time cost
6 months ago
Yuanheng Zhao
406443200f
[Hotfix] Add missing init file in inference.executor ( #5774 )
6 months ago
duanjunwen
1b76564e16
[test] Fix/fix testcase ( #5770 )
...
* [fix] branch for fix testcase;
* [fix] fix test_analyzer & test_auto_parallel;
* [fix] remove local change about moe;
* [fix] rm local change moe;
6 months ago
flybird11111
3f2be80530
fix ( #5765 )
6 months ago
Hongxin Liu
68359ed1e1
[release] update version ( #5752 )
...
* [release] update version
* [devops] update compatibility test
* [devops] update compatibility test
* [devops] update compatibility test
* [devops] update compatibility test
* [test] fix ddp plugin test
* [test] fix gptj and rpc test
* [devops] fix cuda ext compatibility
* [inference] fix flash decoding test
* [inference] fix flash decoding test
6 months ago
Yuanheng Zhao
677cbfacf8
[Fix/Example] Fix Llama Inference Loading Data Type ( #5763 )
...
* [fix/example] fix llama inference loading dtype
* revise loading dtype of benchmark llama3
6 months ago
botbw
023ea13cb5
Merge pull request #5749 from hpcaitech/prefetch
...
[Gemini] Prefetch next chunk before each op
6 months ago
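The prefetch feature merged here uploads the parameter chunks needed by upcoming ops ahead of time so the H2D copy overlaps with the current op's compute. A rough sketch under assumed names (chunk_manager, compute_list, and prefetch_num are stand-ins, not Gemini's exact internals):

```python
import torch

def prefetch_next_chunks(chunk_manager, compute_list, step: int,
                         prefetch_num: int = 1) -> None:
    """Issue ahead-of-time uploads for the chunks used by the next ops.

    Running the accesses on a side stream lets the H2D copies overlap with
    the compute of the current op. chunk_manager and compute_list are
    hypothetical stand-ins for Gemini's internal bookkeeping.
    """
    prefetch_stream = torch.cuda.Stream()
    with torch.cuda.stream(prefetch_stream):
        for chunk in compute_list[step + 1 : step + 1 + prefetch_num]:
            chunk_manager.access_chunk(chunk)  # assumed to move the chunk to GPU
```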
hxwang
154720ba6e
[chore] refactor profiler utils
6 months ago
hxwang
8547562884
[chore] remove unnecessary assert since compute list might not be recorded
6 months ago
hxwang
e5e3320948
[bug] continue fix
6 months ago
hxwang
936dd96dbb
[bug] workaround for idx fix
6 months ago
botbw
e0dde8fda5
Merge pull request #5754 from Hz188/prefetch
...
[Gemini]Prefetch benchmark
6 months ago
botbw
157b4cc357
Merge branch 'prefetch' into prefetch
6 months ago
genghaozhe
87665d7922
correct argument help message
6 months ago
Haze188
4d097def96
[Gemini] add some code for reduce-scatter overlap, chunk prefetch in llama benchmark. ( #5751 )
...
* [bugs] fix args.profile=False DummyProfiler error
* add args.prefetch_num for benchmark
6 months ago
genghaozhe
b9269d962d
add args.prefetch_num for benchmark
6 months ago
genghaozhe
fba04e857b
[bugs] fix args.profile=False DummyProfiler error
6 months ago