Commit Graph

190 Commits (576a2f7b10711bcdb43b86da6a5afaa98f4ad867)

Author SHA1 Message Date
Yuanchen 239cd92eff
Support mtbench (#5025)
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
2023-11-09 13:41:50 +08:00
Yuanchen abe071b663
fix ColossalEval (#4992)
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
2023-10-31 10:30:03 +08:00
github-actions[bot] a41cf88e9b
[format] applied code formatting on changed files in pull request 4908 (#4918)
Co-authored-by: github-actions <github-actions@github.com>
2023-10-17 10:48:24 +08:00
Zian(Andy) Zheng 7768afbad0 Update flash_attention_patch.py
To be compatible with a recent change in the Transformers library, where a new argument 'padding_mask' was added to the forward function of the attention layer.
https://github.com/huggingface/transformers/pull/25598
2023-10-16 14:00:45 +08:00
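For context, a minimal sketch of the compatibility pattern this commit describes, assuming a Llama-style attention signature (the names below are illustrative assumptions, not the actual patch):

```python
from typing import Optional, Tuple

import torch

# Hypothetical patched forward: it accepts the `padding_mask` keyword that
# newer Transformers versions pass to attention layers (see PR #25598), so
# calls from upstream code do not fail with an unexpected-argument error.
def patched_attention_forward(
    self,
    hidden_states: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_value: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,
    output_attentions: bool = False,
    use_cache: bool = False,
    padding_mask: Optional[torch.Tensor] = None,  # added by Transformers PR #25598
    **kwargs,  # absorb any future upstream keyword additions
):
    # A flash-attention implementation would go here; in this sketch
    # `padding_mask` is accepted only for signature compatibility.
    ...
```

Accepting the new keyword (plus a catch-all **kwargs) keeps a monkey patch working across Transformers releases that extend the forward signature.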
Camille Zhong 652adc2215 Update README.md 2023-10-10 23:19:34 +08:00
Camille Zhong afe10a85fd Update README.md 2023-10-10 23:19:34 +08:00
Camille Zhong 3043d5d676 Update modelscope link in README.md
add modelscope link
2023-10-10 23:19:34 +08:00
Tong Li ed06731e00
update Colossal (#4832) 2023-09-28 16:05:05 +08:00
binmakeswell 822051d888
[doc] update slack link (#4823) 2023-09-27 17:37:39 +08:00
Yuanchen 1fa8c5e09f
Update Qwen-7B results (#4821)
Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
2023-09-27 17:33:54 +08:00
flybird11111 be400a0936
[chat] fix gemini strategy (#4698)
* [chat] fix gemini strategy

* update llama2 example

* [fix] fix gemini strategy

* fix

* Update train_prompts.py
2023-09-27 13:15:32 +08:00
Chandler-Bing b6cf0aca55
[hotfix] change llama2 Colossal-LLaMA-2 script filename (#4800)
change filename:
pretraining.py -> train.py
There is no file named pretraining.py; the previous reference was a typo.
2023-09-26 11:44:27 +08:00
Tong Li 8cbce6184d update 2023-09-26 11:36:53 +08:00
Tong Li bd014673b0 update readme 2023-09-26 10:58:05 +08:00
binmakeswell d512a4d38d
[doc] add llama2 domain-specific solution news (#4789)
* [doc] add llama2 domain-specific solution news
2023-09-25 10:44:15 +08:00
Yuanchen ce777853ae
[feature] ColossalEval: Evaluation Pipeline for LLMs (#4786)
* Add ColossalEval

* Delete evaluate in Chat

---------

Co-authored-by: Xu Yuanchen <yuanchen.xu00@gmail.com>
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
2023-09-24 23:14:11 +08:00
Tong Li 74aa7d964a
initial commit: add colossal llama 2 (#4784) 2023-09-24 23:12:26 +08:00
Wenhao Chen 901ab1eedd
[chat]: add lora merge weights config (#4766)
* feat: modify lora merge weights fn

* feat: add lora merge weights config
2023-09-21 16:23:59 +08:00
Wenhao Chen 7b9b86441f
[chat]: update rm, add wandb and fix bugs (#4471)
* feat: modify forward fn of critic and reward model

* feat: modify calc_action_log_probs

* to: add wandb in sft and rm trainer

* feat: update train_sft

* feat: update train_rm

* style: modify type annotation and add warning

* feat: pass tokenizer to ppo trainer

* to: modify trainer base and maker base

* feat: add wandb in ppo trainer

* feat: pass tokenizer to generate

* test: update generate fn tests

* test: update train tests

* fix: remove action_mask

* feat: remove unused code

* fix: fix wrong ignore_index

* fix: fix mock tokenizer

* chore: update requirements

* revert: modify make_experience

* fix: fix inference

* fix: add padding side

* style: modify _on_learn_batch_end

* test: use mock tokenizer

* fix: use bf16 to avoid overflow

* fix: fix workflow

* [chat] fix gemini strategy

* [chat] fix

* sync: update colossalai strategy

* fix: fix args and model dtype

* fix: fix checkpoint test

* fix: fix requirements

* fix: fix missing import and wrong arg

* fix: temporarily skip gemini test in stage 3

* style: apply pre-commit

* fix: temporarily skip gemini test in stage 1&2

---------

Co-authored-by: Mingyan Jiang <1829166702@qq.com>
2023-09-20 15:53:58 +08:00
Hongxin Liu 079bf3cb26
[misc] update pre-commit and run all files (#4752)
* [misc] update pre-commit

* [misc] run pre-commit

* [misc] remove useless configuration files

* [misc] ignore cuda for clang-format
2023-09-19 14:20:26 +08:00
digger yu e4fc57c3de
Fix some syntax errors in the documentation and code under applications/ (#4127)
Co-authored-by: flybird11111 <1829166702@qq.com>
2023-09-15 14:18:22 +08:00
Hongxin Liu a39a5c66fe
Merge branch 'main' into feature/shardformer 2023-09-04 23:43:13 +08:00
Ying Liu c648dc093f fix colossalai version in coati examples 2023-08-30 11:14:19 +08:00
yingliu-hpc 1467e3b41b
[coati] add chatglm model (#4539)
* update configuration of chatglm and add support in coati

* add unit test & update chatglm default config & fix bos index issue

* remove chatglm due to oom

* add dataset pkg to requirements-test

* fix parameter issue in test_models

* add ref in tokenize & remove unnecessary parts

* separate source & target tokenization in chatglm

* add unit test to chatglm

* fix test dataset issue

* update truncation of chatglm

* fix ColossalAI version

* fix ColossalAI version in test
2023-08-29 17:58:51 +08:00
Michelle 285fe7ba71
[chat] update config and prompt (#4139)
* update config and prompt

* update config

---------

Co-authored-by: Qianran Ma <qianranm@luchentech.com>
2023-08-21 14:30:25 +08:00
Hongxin Liu 26e29d58f0
[devops] add large-scale distributed test marker (#4452)
* [test] remove cpu marker

* [test] remove gpu marker

* [test] update pytest markers

* [ci] update unit test ci
2023-08-16 18:56:52 +08:00
Wenhao Chen 6d41c3f2aa
[doc] update Coati README (#4405)
* style: apply formatter

* fix: add outdated warnings

* docs: add dataset format and polish

* docs: polish README

* fix: fix json format

* fix: fix typos

* revert: revert 7b example
2023-08-14 15:26:27 +08:00
Wenhao Chen da4f7b855f
[chat] fix bugs and add unit tests (#4213)
* style: rename replay buffer

Experience replay is typically used for off-policy algorithms;
using this name in PPO may be misleading.

* fix: fix wrong zero2 default arg

* test: update experience tests

* style: rename zero_pad fn

* fix: defer init in CycledDataLoader

* test: add benchmark test

* style: rename internal fn of generation

* style: rename internal fn of lora

* fix: remove unused loss fn

* fix: remove unused utils fn

* refactor: remove generate_with_actor fn

* fix: fix type annotation

* test: add models tests

* fix: skip llama due to long execution time

* style: modify dataset

* style: apply formatter

* perf: update reward dataset

* fix: fix wrong IGNORE_INDEX in sft dataset

* fix: remove DataCollatorForSupervisedDataset

* test: add dataset tests

* style: apply formatter

* style: rename test_ci to test_train

* feat: add llama in inference

* test: add inference tests

* test: change test scripts directory

* fix: update ci

* fix: fix typo

* fix: skip llama due to oom

* fix: fix file mod

* style: apply formatter

* refactor: remove duplicated llama_gptq

* style: apply formatter

* to: update rm test

* feat: add tokenizer arg

* feat: add download model script

* test: update train tests

* fix: modify gemini load and save pretrained

* test: update checkpoint io test

* to: modify nproc_per_node

* fix: do not remove existing dir

* fix: modify save path

* test: add random choice

* fix: fix sft path

* fix: enlarge nproc_per_node to avoid oom

* fix: add num_retry

* fix: make lora config of rm and critic consistent

* fix: add warning about lora weights

* fix: skip some gpt2 tests

* fix: remove grad ckpt in rm and critic due to errors

* refactor: directly use Actor in train_sft

* test: add more arguments

* fix: disable grad ckpt when using lora

* fix: fix save_pretrained and related tests

* test: enable zero2 tests

* revert: remove useless fn

* style: polish code

* test: modify test args
2023-08-02 10:17:36 +08:00
Wenhao Chen 75c5389037
[chat] fix compute_approx_kl (#4338) 2023-08-01 10:21:45 +08:00
Yuanchen 5187c96b7c
support session-based training (#4313)
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2023-07-28 11:29:55 +08:00
yuxuan-lou 0991405361 [NFC] polish applications/Chat/coati/models/utils.py code style (#4277)
* [NFC] polish colossalai/context/random/__init__.py code style

* [NFC] polish applications/Chat/coati/models/utils.py code style
2023-07-26 14:12:57 +08:00
Zirui Zhu 9e512938f6 [NFC] polish applications/Chat/coati/trainer/strategies/base.py code style (#4278) 2023-07-26 14:12:57 +08:00
Ziheng Qin c972d65311 applications/Chat/.gitignore (#4279)
Co-authored-by: henryqin1997 <henryqin1997@gamil.com>
2023-07-26 14:12:57 +08:00
RichardoLuo 709e121cd5 [NFC] polish applications/Chat/coati/models/generation.py code style (#4275) 2023-07-26 14:12:57 +08:00
Yuanchen dc1b6127f9 [NFC] polish applications/Chat/inference/server.py code style (#4274)
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2023-07-26 14:12:57 +08:00
アマデウス caa4433072 [NFC] fix format of application/Chat/coati/trainer/utils.py (#4273) 2023-07-26 14:12:57 +08:00
Xu Kai 1ce997daaf [NFC] polish applications/Chat/examples/train_reward_model.py code style (#4271) 2023-07-26 14:12:57 +08:00
shenggan 798cb72907 [NFC] polish applications/Chat/coati/trainer/base.py code style (#4260) 2023-07-26 14:12:57 +08:00
Zheng Zangwei (Alex Zheng) b2debdc09b [NFC] polish applications/Chat/coati/dataset/sft_dataset.py code style (#4259) 2023-07-26 14:12:57 +08:00
CZYCW dee1c96344 [NFC] polish applications/Chat/examples/ray/mmmt_prompt.py code style (#4250) 2023-07-26 14:12:57 +08:00
Junming Wu 77c469e1ba [NFC] polish applications/Chat/coati/models/base/actor.py code style (#4248) 2023-07-26 14:12:57 +08:00
Camille Zhong 915ed8bed1 [NFC] polish applications/Chat/inference/requirements.txt code style (#4265) 2023-07-26 14:12:57 +08:00
Frank Lee f447ca1811 [chat] removed cache file (#4155) 2023-07-04 16:05:01 +08:00
wukong1992 c1c672d0f0 [shardformer] support t5 model (#3994)
test t5
2023-07-04 16:05:01 +08:00
Wenhao Chen 3d8d5d0d58
[chat] use official transformers and fix some issues (#4117)
* feat: remove on_learn_epoch fn as not used

* revert: add _on_learn_epoch fn

* feat: remove NaiveStrategy

* test: update train_prompts tests

* fix: remove prepare_llama_tokenizer_and_embedding

* test: add lora arg

* feat: remove roberta support in train_prompts due to runtime errs

* feat: remove deberta & roberta in rm as not used

* test: remove deberta and roberta tests

* feat: remove deberta and roberta models as not used

* fix: remove calls to roberta

* fix: remove prepare_llama_tokenizer_and_embedding

* chore: update transformers version

* docs: update transformers version

* fix: fix actor inference

* fix: fix ci

* feat: change llama pad token to unk

* revert: revert ddp setup_distributed

* fix: change llama pad token to unk

* revert: undo unnecessary changes

* fix: use pip to install transformers
2023-07-04 13:49:09 +08:00
Wenhao Chen edd75a59ea
[chat] remove naive strategy and split colossalai strategy (#4094)
* feat: remove on_learn_epoch fn as not used

* revert: add _on_learn_epoch fn

* to: remove the use of NaiveStrategy

* test: remove NaiveStrategy tests

* feat: remove NaiveStrategy

* style: modify comments and params

* feat: split ColossalAIStrategy into LowLevelZeroStrategy and GeminiStrategy

* fix: remove naive

* fix: align with modified colossal strategy

* fix: fix ddp _try_init_dist arg
2023-06-29 18:11:00 +08:00
Wenhao Chen b03d64d010
[chat] refactor trainer class (#4080)
* to: add SLTrainer

* refactor: refactor RMTrainer and SFTTrainer

* fix: fix init file

* feat: remove on_learn_epoch fn as not used

* fix: align with modified gemini arguments

* to: add OnPolicyTrainer

* revert: add _on_learn_epoch fn

* refactor: refactor PPOTrainer

* style: rename PPOTrainer argument

* fix: align with modified PPO arguments

* test: align with modified train_prompts arguments

* chore: modify train_prompts

* docs: align with modified arguments

* fix: remove unnecessary output

* fix: move dataloader to fit fn of SLTrainer

* fix: move dataloader to fit fn of OnPolicyTrainer

* fix: modify usage of prompt and pretrain dataloader
2023-06-29 10:48:09 +08:00
Baizhou Zhang 4da324cd60
[hotfix] fix argument naming in docs and examples (#4083) 2023-06-26 23:50:04 +08:00
Michelle e89b127d8e
[chat]: fix chat evaluation possible bug (#4064)
* fix chat eval

* fix utils

* fix utils

* add comment

---------

Co-authored-by: Qianran Ma <qianranm@luchentech.com>
2023-06-26 15:26:07 +08:00
Wenhao Chen 153b957a1b
[chat] refactor strategy class with booster api (#3987)
* refactor: adapt boost API in base and naive strategies

* fix: initialize plugin after setup_distributed

* fix: fix save_pretrained fn

* refactor: adapt boost API in DDPStrategy

* to: add _post_init check

* to: fix ddp backward, modify ddp dataloader and unwrap

* feat: adapt boost API in ColossalAIStrategy

* fix: call setup_distributed before using get_current_device

* fix: fix save_model and save_optimizer

* test: remove save_sharded_optimizer test

* style: apply formatter

* fix: fix stage check and add comments

* feat: allow dict type arg in strategy.prepare

* to: temporarily remove lr_scheduler for testing

* style: simplify init of ColossalAIStrategy

* fix: fix lr_scheduler in sft and rm

* style: modify comments

* test: add train_prompts tests

* fix: fix inference only case and use in train_prompts

* test: skip failed tests in ci

* style: fix CodeFactor check

* fix: do not use model.to('cpu') with GeminiPlugin

* test: enable colossalai_gemini tests

* test: set CUDA_VISIBLE_DEVICES in ci

* docs: add note
2023-06-25 17:36:21 +08:00