mirror of https://github.com/hpcaitech/ColossalAI
[format] applied code formatting on changed files in pull request 3296 (#3298)
Co-authored-by: github-actions <github-actions@github.com>
parent 682af61396
commit 5134ad5d1a
@@ -17,7 +17,7 @@
- [Stage1 - Supervised instructs tuning](#stage1---supervised-instructs-tuning)
- [Stage2 - Training reward model](#stage2---training-reward-model)
- [Stage3 - Training model with reinforcement learning by human feedback](#stage3---training-model-with-reinforcement-learning-by-human-feedback)
- [Inference - After Training](#inference---after-training)
- [Coati7B examples](#coati7b-examples)
- [Generation](#generation)
- [Open QA](#open-qa)
@@ -100,7 +100,7 @@ Model performance in [Anthropic's paper](https://arxiv.org/abs/2204.05862):
- --max_len: max sentence length for generation, type=int, default=512
- --test: whether to run in test-only mode; if true, a small dataset will be used
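As a minimal sketch of how the flags documented above might be declared with `argparse` (this is an illustration, not the actual Coati argument parser; the function name `build_parser` is hypothetical):

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the two flags documented above
    parser = argparse.ArgumentParser(description="reward model training (sketch)")
    parser.add_argument("--max_len", type=int, default=512,
                        help="max sentence length for generation")
    parser.add_argument("--test", action="store_true",
                        help="test-only mode: use a small dataset")
    return parser

# Example invocation equivalent to: script.py --max_len 256 --test
args = build_parser().parse_args(["--max_len", "256", "--test"])
```

With no flags supplied, `--max_len` falls back to its default of 512 and `--test` stays `False`.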
## Stage3 - Training model using prompts with RL
Stage3 uses a reinforcement learning algorithm, which is the most complex part of the training process, as shown below:
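As a hedged illustration of the kind of objective used in this stage (a generic PPO-style clipped surrogate loss, sketched here for a single action; it is not a transcription of the repository's trainer), the core computation can be written as:

```python
import math

def ppo_clip_loss(old_logp, new_logp, advantage, clip_eps=0.2):
    """Clipped PPO surrogate loss for one action (value to be minimized)."""
    # Probability ratio pi_new(a|s) / pi_old(a|s), from log-probabilities
    ratio = math.exp(new_logp - old_logp)
    unclipped = ratio * advantage
    # Clamp the ratio to [1 - eps, 1 + eps] before weighting the advantage
    clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps) * advantage
    # Pessimistic bound: keep the smaller objective, negate for gradient descent
    return -min(unclipped, clipped)
```

For example, with equal old and new log-probabilities the ratio is 1 and the loss is simply the negated advantage; when the new policy doubles the probability of an action with positive advantage, the clip limits the ratio's contribution to 1 + eps.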