Fazzie-Maqianli
4fd4bd9d9a
[chatgpt] support instruct training ( #3216 )
2023-03-23 16:46:20 +08:00
Yuanchen
9998d5ef64
[chatgpt]add reward model code for deberta ( #3199 )
...
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2023-03-22 19:09:39 +08:00
pgzhang
b429529365
[chatgpt] add supervised learning fine-tune code ( #3183 )
...
* [chatgpt] add supervised fine-tune code
* [chatgpt] delete unused code and modified comment code
* [chatgpt] use pytorch distributed sampler instead
---------
Co-authored-by: zhangpengpeng <zhangpengpeng@joyy.com>
2023-03-22 09:59:42 +08:00
BlueRum
7548ca5a54
[chatgpt]Reward Model Training Process update ( #3133 )
...
* add normalize function to value_head in bloom rm
* add normalization to value_function in gpt_rm
* add normalization to value_head of opt_rm
* add Anthropic/hh-rlhf dataset
* Update __init__.py
* Add LogExpLoss in RM training
* Update __init__.py
* update rm trainer to use acc as target
* update example/train_rm
* Update train_rm.sh
* code style
* Update README.md
* Update README.md
* add rm test to ci
* fix tokenizer
* fix typo
* change batch size to avoid OOM in ci
* Update test_ci.sh
2023-03-20 09:59:06 +08:00
ver217
c474fda282
[chatgpt] fix ppo training hanging problem with gemini ( #3162 )
...
* [chatgpt] fix generation early stopping
* [chatgpt] fix train prompts example
2023-03-17 15:41:47 +08:00
BlueRum
23cd5e2ccf
[chatgpt]update ci ( #3087 )
...
* [chatgpt]update ci
* Update test_ci.sh
* Update test_ci.sh
* Update test_ci.sh
* test
* Update train_prompts.py
* Update train_dummy.py
* add save_path
* polish
* add save path
* polish
* add save path
* polish
* delete bloom-560m test
delete bloom-560m test because of OOM
* add ddp test
2023-03-14 11:01:17 +08:00
BlueRum
68577fbc43
[chatgpt]Fix examples ( #3116 )
...
* fix train_dummy
* fix train-prompts
2023-03-13 11:12:22 +08:00
Fazzie-Maqianli
c21b11edce
change nn to models ( #3032 )
2023-03-07 16:34:22 +08:00
github-actions[bot]
e86d9bb2e1
[format] applied code formatting on changed files in pull request 3025 ( #3026 )
...
Co-authored-by: github-actions <github-actions@github.com>
2023-03-07 12:55:17 +08:00
BlueRum
55dcd3051a
[chatgpt] fix readme ( #3025 )
2023-03-07 10:21:25 +08:00
LuGY
287d60499e
[chatgpt] Add saving ckpt callback for PPO ( #2880 )
...
* add checkpoint callback for chatgpt
* add save ckpt callbacks for ppo
---------
Co-authored-by: Fazzie-Maqianli <55798671+Fazziekey@users.noreply.github.com>
2023-03-07 10:13:25 +08:00
BlueRum
e588703454
[chatgpt]fix inference model load ( #2988 )
...
* fix lora bug
* polish
* fix lora gemini
* fix inference load model bug
2023-03-07 09:17:52 +08:00
BlueRum
f5ca0397dd
[chatgpt] fix lora gemini conflict in RM training ( #2984 )
...
* fix lora bug
* polish
* fix lora gemini
2023-03-03 15:58:16 +08:00
ver217
19ad49fb3b
[chatgpt] making experience support dp ( #2971 )
...
* [chatgpt] making experience support dp
* [chatgpt] update example test ci
* [chatgpt] update example test ci
* [chatgpt] update example test ci
* [chatgpt] update example test ci
* [chatgpt] update sampler
* [chatgpt] update example test ci
* [chatgpt] refactor sampler
* [chatgpt] update example test ci
2023-03-03 15:51:19 +08:00
BlueRum
c9e27f0d1b
[chatgpt]fix lora bug ( #2974 )
...
* fix lora bug
* polish
2023-03-02 17:51:44 +08:00
BlueRum
82149e9d1b
[chatgpt] fix inference demo loading bug ( #2969 )
...
* [chatgpt] fix inference demo loading bug
* polish
2023-03-02 16:18:33 +08:00
Fazzie-Maqianli
bbf9c827c3
[ChatGPT] fix README ( #2966 )
...
* Update README.md
* fix README
* Update README.md
* Update README.md
---------
Co-authored-by: fastalgo <youyang@cs.berkeley.edu>
Co-authored-by: BlueRum <70618399+ht-zhou@users.noreply.github.com>
2023-03-02 15:00:05 +08:00
binmakeswell
b0a8766381
[doc] fix chatgpt inference typo ( #2964 )
2023-03-02 11:22:08 +08:00
BlueRum
489a9566af
[chatgpt]add inference example ( #2944 )
...
* [chatgpt] support inference example
* Create inference.sh
* Update README.md
* Delete inference.sh
* Update inference.py
2023-03-01 13:39:39 +08:00
BlueRum
2e16f842a9
[chatgpt]support opt & gpt for rm training ( #2876 )
2023-02-22 16:58:11 +08:00
BlueRum
34ca324b0d
[chatgpt] Support saving ckpt in examples ( #2846 )
...
* [chatgpt]fix train_rm bug with lora
* [chatgpt]support colossalai strategy to train rm
* fix pre-commit
* fix pre-commit 2
* [chatgpt]fix rm eval typo
* fix rm eval
* fix pre commit
* add support of saving ckpt in examples
* fix single-gpu save
2023-02-22 10:00:26 +08:00
BlueRum
3eebc4dff7
[chatgpt] fix rm eval ( #2829 )
...
* [chatgpt]fix train_rm bug with lora
* [chatgpt]support colossalai strategy to train rm
* fix pre-commit
* fix pre-commit 2
* [chatgpt]fix rm eval typo
* fix rm eval
* fix pre commit
2023-02-21 11:35:45 +08:00
ver217
4ee311c026
[chatgpt] strategy add prepare method ( #2766 )
...
* [chatgpt] strategy add prepare method
* [chatgpt] refactor examples
* [chatgpt] refactor strategy.prepare
* [chatgpt] support save/load checkpoint
* [chatgpt] fix unwrap actor
* [chatgpt] fix unwrap actor
2023-02-17 11:27:27 +08:00
BlueRum
613efebc5c
[chatgpt] support colossalai strategy to train rm ( #2742 )
...
* [chatgpt]fix train_rm bug with lora
* [chatgpt]support colossalai strategy to train rm
* fix pre-commit
* fix pre-commit 2
2023-02-16 11:24:07 +08:00
ver217
9c0943ecdb
[chatgpt] optimize generation kwargs ( #2717 )
...
* [chatgpt] ppo trainer use default generate args
* [chatgpt] example remove generation preparing fn
* [chatgpt] benchmark remove generation preparing fn
* [chatgpt] fix ci
2023-02-15 13:59:58 +08:00
ver217
1b34701027
[app] add chatgpt application ( #2698 )
2023-02-14 22:17:25 +08:00