ColossalAI/applications/Chat/coati
Commit 3d8d5d0d58 by Wenhao Chen
[chat] use official transformers and fix some issues (#4117)
* feat: remove on_learn_epoch fn as not used

* revert: add _on_learn_epoch fn

* feat: remove NaiveStrategy

* test: update train_prompts tests

* fix: remove prepare_llama_tokenizer_and_embedding

* test: add lora arg

* feat: remove roberta support in train_prompts due to runtime errs

* feat: remove deberta & roberta in rm as not used

* test: remove deberta and roberta tests

* feat: remove deberta and roberta models as not used

* fix: remove calls to roberta

* fix: remove prepare_llama_tokenizer_and_embedding

* chore: update transformers version

* docs: update transformers version

* fix: fix actor inference

* fix: fix ci

* feat: change llama pad token to unk

* revert: revert ddp setup_distributed

* fix: change llama pad token to unk

* revert: undo unnecessary changes

* fix: use pip to install transformers
2023-07-04 13:49:09 +08:00
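Two of the bullets above change the LLaMA pad token to the unk token. LLaMA tokenizers ship without a pad token, and reusing the existing unk token avoids resizing the embedding matrix for a new special token. A minimal sketch of that fix, assuming a Hugging Face-style tokenizer with `pad_token`/`unk_token` attributes (the helper name `set_llama_pad_token` is hypothetical, not the function used in coati):

```python
def set_llama_pad_token(tokenizer):
    """Reuse the unk token as the pad token when none is set.

    LLaMA tokenizers define no pad token by default; pointing
    pad_token at unk_token (as this commit does) enables padded
    batching without adding a new embedding row.
    """
    if getattr(tokenizer, "pad_token", None) is None:
        tokenizer.pad_token = tokenizer.unk_token
    return tokenizer
```

With an actual `transformers` tokenizer this would typically run right after `AutoTokenizer.from_pretrained(...)`, before building padded training batches.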
dataset          | [chat] refactor actor class (#3968)                                | 2023-06-13 13:31:56 +08:00
experience_maker | [chat] refactor actor class (#3968)                                | 2023-06-13 13:31:56 +08:00
kernels          | [CI] fix some spelling errors (#3707)                              | 2023-05-10 17:12:03 +08:00
models           | [chat] use official transformers and fix some issues (#4117)       | 2023-07-04 13:49:09 +08:00
quant            | [chat] add distributed PPO trainer (#3740)                         | 2023-06-07 10:41:16 +08:00
ray              | [chat] use official transformers and fix some issues (#4117)       | 2023-07-04 13:49:09 +08:00
replay_buffer    | [chat] polish code note typo (#3612)                               | 2023-04-20 17:22:15 +08:00
trainer          | [chat] remove naive strategy and split colossalai strategy (#4094) | 2023-06-29 18:11:00 +08:00
__init__.py      | [Coati] first commit (#3283)                                       | 2023-03-28 20:25:36 +08:00