* Add RoBERTa for RLHF Stage 2 & 3 (test)
RoBERTa for RLHF Stage 2 & 3 (still in testing)
* Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
This reverts commit 06741d894d.
* Add RoBERTa for RLHF Stage 2 & 3
1. add a roberta folder under the model folder
2. add a roberta option in train_reward_model.py (see the sketch after this list)
3. add some tests to test_ci
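A minimal sketch of what such a RoBERTa reward model might look like (the class name and layout here are illustrative, not the exact code added under the model folder): the encoder is wrapped with a scalar value head that scores a whole sequence.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel


class RoBERTaRM(nn.Module):
    """Illustrative RoBERTa reward model: encoder plus a scalar value head."""

    def __init__(self, pretrained: str = "roberta-base"):
        super().__init__()
        self.model = RobertaModel.from_pretrained(pretrained)
        self.value_head = nn.Linear(self.model.config.hidden_size, 1)

    def forward(self, input_ids: torch.LongTensor, attention_mask: torch.Tensor) -> torch.Tensor:
        outputs = self.model(input_ids, attention_mask=attention_mask)
        hidden = outputs.last_hidden_state                    # (batch, seq, hidden)
        values = self.value_head(hidden).squeeze(-1)          # (batch, seq)
        return values.mean(dim=1)                             # one scalar reward per sequence
```

In train_reward_model.py this would presumably sit behind a model-selection flag (e.g. `--model roberta`, name assumed) alongside the existing BLOOM/GPT/OPT reward models.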
* Update test_ci.sh
* Revert "Update test_ci.sh"
This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
* update RoBERTa to work with Coati
* chat ci update
* Revert "chat ci update"
This reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846.
* [Chat] fix the tokenizer "int too big to convert" error in SFT training
fix the tokenizer error during SFT training when using Bloom and OPT
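This is a speculative note, not necessarily the exact fix in the commit: the "int too big to convert" OverflowError typically appears when a checkpoint leaves `tokenizer.model_max_length` at its ~1e30 sentinel and that value reaches the fast (Rust) tokenizer during truncation or padding. Clamping it to the real training sequence length avoids the overflow.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")  # example checkpoint
if tokenizer.model_max_length > 2048:        # sentinel is ~1e30 on some checkpoints
    tokenizer.model_max_length = 2048        # clamp to the actual max sequence length
batch = tokenizer("Hello", truncation=True, padding="max_length",
                  max_length=tokenizer.model_max_length, return_tensors="pt")
```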
* add test for reward model training
* add normalize function to value_head in bloom rm
* add normalization to value_function in gpt_rm
* add normalization to value_head of opt_rm
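These three commits only name the change, so here is a hedged sketch of one common way to normalize a reward model's value head (the helper name and the exact scale are assumptions, not the code in the bloom/gpt/opt RMs): initialize the head's weights with a spread tied to the hidden size so initial rewards start small and comparable across backbones.

```python
import torch.nn as nn


def make_value_head(hidden_size: int) -> nn.Linear:
    """Hypothetical helper: value head with hidden-size-normalized initialization."""
    value_head = nn.Linear(hidden_size, 1)
    # Draw weights with standard deviation 1 / (hidden_size + 1) so the initial
    # scalar rewards are small regardless of the backbone's hidden size.
    value_head.weight.data.normal_(mean=0.0, std=1 / (hidden_size + 1))
    return value_head
```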
* add Anthropic/hh-rlhf dataset
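For reference, Anthropic/hh-rlhf is a pairwise human-preference dataset on the Hugging Face Hub; each example carries a preferred "chosen" and a dispreferred "rejected" dialogue. A minimal sketch of loading it with the `datasets` library:

```python
from datasets import load_dataset

# Each record has a "chosen" and a "rejected" conversation string.
data = load_dataset("Anthropic/hh-rlhf", split="train")
print(data[0]["chosen"][:200])
print(data[0]["rejected"][:200])
```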
* Update __init__.py
* Add LogExpLoss in RM training
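LogExpLoss here is presumably the standard pairwise reward-model objective log(1 + exp(r_rejected - r_chosen)), i.e. the negative log-sigmoid of the reward margin; a minimal sketch:

```python
import torch
import torch.nn as nn


class LogExpLoss(nn.Module):
    """Pairwise RM loss: log(1 + exp(r_rejected - r_chosen)), averaged over the batch."""

    def forward(self, chosen_reward: torch.Tensor, reject_reward: torch.Tensor) -> torch.Tensor:
        return torch.log(1 + torch.exp(reject_reward - chosen_reward)).mean()
```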
* Update __init__.py
* update rm trainer to use accuracy as the target metric
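Using accuracy as the target presumably means tracking how often the reward model ranks the chosen response above the rejected one; a sketch of that metric:

```python
import torch


def ranking_accuracy(chosen_reward: torch.Tensor, reject_reward: torch.Tensor) -> float:
    """Fraction of preference pairs where the chosen response gets the higher reward."""
    return (chosen_reward > reject_reward).float().mean().item()
```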
* update example/train_rm
* Update train_rm.sh
* code style
* Update README.md
* Update README.md
* add rm test to ci
* fix tokenizer
* fix typo
* change batch size to avoid OOM in CI
* Update test_ci.sh