[format] applied code formatting on changed files in pull request 3296 (#3298)

Co-authored-by: github-actions <github-actions@github.com>
pull/3301/head
github-actions[bot] 2023-03-29 02:35:40 +08:00 committed by GitHub
parent 682af61396
commit 5134ad5d1a
2 changed files with 2 additions and 2 deletions


@@ -17,7 +17,7 @@
- [Stage1 - Supervised instructs tuning](#stage1---supervised-instructs-tuning)
- [Stage2 - Training reward model](#stage2---training-reward-model)
- [Stage3 - Training model with reinforcement learning by human feedback](#stage3---training-model-with-reinforcement-learning-by-human-feedback)
-- [Inference - After Training](#inference---after-training)
+- [Inference - After Training](#inference---after-training)
- [Coati7B examples](#coati7b-examples)
- [Generation](#generation)
- [Open QA](#open-qa)


@@ -100,7 +100,7 @@ Model performance in [Anthropics paper](https://arxiv.org/abs/2204.05862):
- --max_len: max sentence length for generation, type=int, default=512
- --test: whether to run in test mode only; if it's true, a small dataset will be used (a hedged launch sketch follows this list)
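
For orientation, flags like these are normally passed to the stage-2 reward-model training script on the command line. Below is a minimal launch sketch assuming a `train_reward_model.py` entry point run with `torchrun`; the script name, model choices, and every flag other than `--max_len` and `--test` are assumptions rather than values taken from this diff:

```shell
# Hedged sketch of a stage-2 smoke test; the entry point, launcher, and all
# flags other than --max_len and --test are assumptions, not taken from this diff.
torchrun --standalone --nproc_per_node=1 train_reward_model.py \
    --model 'bloom' \
    --pretrain 'bigscience/bloom-560m' \
    --max_len 512 \
    --test True
```

With `--test` enabled, only a small slice of the dataset is loaded, which makes this a quick way to verify the setup before a full training run.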
-## Stage3 - Training model using prompts with RL
+## Stage3 - Training model using prompts with RL
Stage3 uses a reinforcement learning algorithm, which is the most complex part of the training process, as shown below:
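
That sentence points at a diagram in the full README, which this diff does not reproduce. As a rough, hedged companion, here is a minimal launch sketch for the stage-3 step, assuming a `train_prompts.py` entry point; the script name and every flag are assumptions, not values confirmed by this diff:

```shell
# Hedged sketch of a stage-3 (RLHF) launch; the entry point and all flags
# below are assumptions, not taken from this diff.
torchrun --standalone --nproc_per_node=2 train_prompts.py \
    --model 'bloom' \
    --pretrain 'path/to/stage1_sft_model' \
    --rm_pretrain 'path/to/stage2_reward_model' \
    --prompt_path 'path/to/prompts.csv' \
    --strategy colossalai_zero2
```

The general shape is that the stage-1 supervised model initializes the actor, the stage-2 reward model scores generated responses, and the RL loop (typically PPO) updates the actor against those scores.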