* update configuration of chatglm and add support in coati
* add unit test & update chatglm default config & fix bos index issue
* remove chatglm due to OOM
* add dataset pkg to the requirements file
* fix parameter issue in test_models
* add ref in tokenize & rm unnecessary parts
* separate source & target tokenization in chatglm (see tokenization sketch after this list)
* add unit test to chatglm
* fix test dataset issue
* update truncation of chatglm
* fix ColossalAI version
* fix ColossalAI version in test
* refactor: adapt boost API in base and naive strategies
* fix: initialize plugin after setup_distributed
* fix: fix save_pretrained fn
* refactor: adapt boost API in DDPStrategy
* to: add _post_init check
* to: fix ddp backward, modify ddp dataloader and unwrap
* feat: adapt boost API in ColossalAIStrategy
* fix: call setup_distributed before use get_current_device
* fix: fix save_model and save_optimizer
* test: remove save_sharded_optimizer test
* style: apply formatter
* fix: fix stage check and add comments
* feat: allow dict-type arg in strategy.prepare (see prepare sketch below)
* to: temporarily remove lr_scheduler for testing
* style: simplify init of ColossalAIStrategy
* fix: fix lr_scheduler in sft and rm
* style: modify comments
* test: add train_prompts tests
* fix: fix inference only case and use in train_prompts
* test: skip failed tests in ci
* style: fix CodeFactor check
* fix: do not use model.to('cpu') with GeminiPlugin (see Gemini note below)
* test: enable colossalai_gemini tests
* test: set CUDA_VISIBLE_DEVICES in ci
* docs: add note
* refactor: separate log_probs fn from Actor forward fn
* refactor: separate generate fn from Actor class
* feat: update unwrap_model and get_base_model (see unwrap sketch below)
  * unwrap_model returns the model not wrapped by the Strategy
  * get_base_model returns the HF model for Actor, Critic and RewardModel
* feat: simplify Strategy.prepare
* style: remove get_base_model method of Actor
* perf: tokenize text in batches (see batching sketch below)
* refactor: move calc_action_log_probs to utils of model (see log-probs sketch below)
* test: update test with new forward fn
* style: rename forward fn args
* fix: do not unwrap model in save_model fn of naive strategy
* test: add gemini test for train_prompts
* fix: fix _set_default_generate_kwargs
* [chat] strategy refactor unwrap model
* [chat] strategy refactor save model
* [chat] add docstr
* [chat] refactor trainer save model
* [chat] fix strategy typing
* [chat] refactor trainer save model
* [chat] update readme
* [chat] fix unit test
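
For the "separate source & target tokenization in chatglm" item above, a minimal sketch of the general pattern: tokenize the prompt (source) and the response (target) separately so the prompt tokens can be masked out of the loss. The tokenizer, helper name, and `max_length` are placeholders for illustration, not the actual coati/ChatGLM configuration (ChatGLM in particular inserts its own special tokens).

```python
import torch
from transformers import AutoTokenizer

IGNORE_INDEX = -100  # label value ignored by torch.nn.CrossEntropyLoss


def tokenize_example(prompt: str, response: str, tokenizer, max_length: int = 512):
    # Tokenizing source and target separately keeps the exact prompt length,
    # so only the response tokens contribute to the loss.
    source_ids = tokenizer.encode(prompt, add_special_tokens=False)
    target_ids = tokenizer.encode(response, add_special_tokens=False)

    input_ids = (source_ids + target_ids + [tokenizer.eos_token_id])[:max_length]
    labels = ([IGNORE_INDEX] * len(source_ids) + input_ids[len(source_ids):])[:max_length]

    return {"input_ids": torch.tensor(input_ids), "labels": torch.tensor(labels)}


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model, not ChatGLM
    ex = tokenize_example("Question: 1+1=?\nAnswer:", " 2", tok)
    print(ex["input_ids"].shape, ex["labels"].shape)
```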
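
For "feat: allow dict-type arg in strategy.prepare", a sketch of what accepting dict arguments could look like: each argument is either a bare `nn.Module` or a dict such as `{'model': ..., 'optimizer': ...}` whose entries stay associated after boosting. `ToyStrategy` and its methods are illustrative stand-ins, not the real coati Strategy API.

```python
from typing import Any, Dict, Union

import torch
import torch.nn as nn

Boostable = Union[nn.Module, Dict[str, Any]]


class ToyStrategy:
    """Illustrative stand-in for a coati-style Strategy (hypothetical)."""

    def _boost(self, model: nn.Module, optimizer=None):
        # A real strategy would wrap the model (DDP, Gemini, ...) here.
        return model, optimizer

    def prepare(self, *boostables: Boostable):
        prepared = []
        for item in boostables:
            if isinstance(item, nn.Module):
                model, _ = self._boost(item)
                prepared.append(model)
            elif isinstance(item, dict):
                # Dict form keeps model and optimizer associated by key.
                model, optim = self._boost(item["model"], item.get("optimizer"))
                prepared.append({**item, "model": model, "optimizer": optim})
            else:
                raise TypeError(f"unsupported type: {type(item)}")
        return prepared[0] if len(prepared) == 1 else prepared


if __name__ == "__main__":
    strategy = ToyStrategy()
    actor = nn.Linear(4, 4)
    actor_optim = torch.optim.Adam(actor.parameters())
    critic = nn.Linear(4, 1)

    out = strategy.prepare({"model": actor, "optimizer": actor_optim}, critic)
    print(type(out[0]), type(out[1]))
```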
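
For "fix: do not use model.to('cpu') with GeminiPlugin": under Gemini, parameters are managed (sharded/offloaded) by the plugin, so a naive `model.to('cpu')` before saving can conflict with that management. The guard below is a hypothetical helper showing the shape of the fix; `GeminiPlugin` is a real ColossalAI class, but its use here is an assumption about the fix, not the actual patch.

```python
import torch.nn as nn

try:
    from colossalai.booster.plugin import GeminiPlugin
except ImportError:  # keep this sketch importable without colossalai installed
    class GeminiPlugin:  # type: ignore
        pass


def move_to_cpu_if_safe(model: nn.Module, plugin) -> nn.Module:
    # Hypothetical helper: skip the naive .to('cpu') when Gemini manages the
    # parameters, and let the strategy's own checkpoint path gather the weights.
    if isinstance(plugin, GeminiPlugin):
        return model
    return model.to('cpu')
```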
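
For "refactor: separate log_probs fn from Actor forward fn" and "refactor: move calc_action_log_probs to utils of model": a sketch of what such a utility typically computes, namely per-token log-probabilities of the last `num_actions` generated tokens, taken from one-token-shifted logits. The names follow the commit messages, but the body is written from scratch here and may differ from the coati implementation.

```python
import torch
import torch.nn.functional as F


def log_probs_from_logits(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Gather log p(label_t | context) from logits of shape (B, S, V)."""
    log_probs = F.log_softmax(logits, dim=-1)
    return log_probs.gather(dim=-1, index=labels.unsqueeze(-1)).squeeze(-1)


def calc_action_log_probs(logits: torch.Tensor,
                          sequences: torch.LongTensor,
                          num_actions: int) -> torch.Tensor:
    """Log-probs of the last `num_actions` tokens of `sequences`.

    logits[:, t] predicts sequences[:, t + 1], hence the one-token shift.
    """
    log_probs = log_probs_from_logits(logits[:, :-1, :], sequences[:, 1:])
    return log_probs[:, -num_actions:]


if __name__ == "__main__":
    B, S, V, A = 2, 10, 32, 4
    logits = torch.randn(B, S, V)
    seqs = torch.randint(0, V, (B, S))
    print(calc_action_log_probs(logits, seqs, A).shape)  # torch.Size([2, 4])
```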
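
For "feat: update unwrap_model and get_base_model", the two sub-items distinguish strategy-level unwrapping (strip wrappers the Strategy added, e.g. DDP) from model-level unwrapping (reach the HF model held inside Actor/Critic/RewardModel). A rough sketch of that split, with a simplified Actor stand-in; the real coati classes and attribute names may differ.

```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


class Actor(nn.Module):
    """Simplified stand-in: wraps a HF-style causal LM as `self.model`."""

    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model


def unwrap_model(model: nn.Module) -> nn.Module:
    # Strategy-level unwrap: remove wrappers added by the strategy (here: DDP).
    # The result may still be an Actor / Critic / RewardModel.
    return model.module if isinstance(model, DDP) else model


def get_base_model(model: nn.Module) -> nn.Module:
    # Model-level unwrap: return the underlying HF model held by the wrapper
    # (only Actor is shown; Critic / RewardModel would be handled alike).
    return model.model if isinstance(model, Actor) else model
```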
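
For "perf: tokenize text in batches": HF fast tokenizers are considerably faster when called once on a list of strings than once per string in a Python loop. A small before/after sketch; the model name and `max_length` are placeholders.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default

texts = ["first prompt", "a second, slightly longer prompt"]

# Before: one tokenizer call per string inside a loop.
slow = [tokenizer(t, truncation=True, max_length=512, return_tensors="pt") for t in texts]

# After: a single batched call; padding makes the outputs stackable tensors.
fast = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
print(fast["input_ids"].shape)
```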