BlueRum | c9e27f0d1b | [chatgpt]fix lora bug (#2974) | 2 years ago
  * fix lora bug
  * polish
BlueRum | 82149e9d1b | [chatgpt] fix inference demo loading bug (#2969) | 2 years ago
  * [chatgpt] fix inference demo loading bug
  * polish
Fazzie-Maqianli | bbf9c827c3 | [ChatGPT] fix README (#2966) | 2 years ago
  * Update README.md
  * fix README
  * Update README.md
  * Update README.md
  Co-authored-by: fastalgo <youyang@cs.berkeley.edu>
  Co-authored-by: BlueRum <70618399+ht-zhou@users.noreply.github.com>
binmakeswell | b0a8766381 | [doc] fix chatgpt inference typo (#2964) | 2 years ago
BlueRum | 489a9566af | [chatgpt]add inference example (#2944) | 2 years ago
  * [chatgpt] support inference example
  * Create inference.sh
  * Update README.md
  * Delete inference.sh
  * Update inference.py
binmakeswell | 8264cd7ef1 | [doc] add env scope (#2933) | 2 years ago
BlueRum | 2e16f842a9 | [chatgpt]support opt & gpt for rm training (#2876) | 2 years ago
BlueRum | 34ca324b0d | [chatgpt] Support saving ckpt in examples (#2846) | 2 years ago
  * [chatgpt]fix train_rm bug with lora
  * [chatgpt]support colossalai strategy to train rm
  * fix pre-commit
  * fix pre-commit 2
  * [chatgpt]fix rm eval typo
  * fix rm eval
  * fix pre commit
  * add support of saving ckpt in examples
  * fix single-gpu save
BlueRum | 3eebc4dff7 | [chatgpt] fix rm eval (#2829) | 2 years ago
  * [chatgpt]fix train_rm bug with lora
  * [chatgpt]support colossalai strategy to train rm
  * fix pre-commit
  * fix pre-commit 2
  * [chatgpt]fix rm eval typo
  * fix rm eval
  * fix pre commit
ver217 | b6a108cb91 | [chatgpt] add test checkpoint (#2797) | 2 years ago
  * [chatgpt] add test checkpoint
  * [chatgpt] test checkpoint use smaller model
ver217 | a619a190df | [chatgpt] update readme about checkpoint (#2792) | 2 years ago
  * [chatgpt] add save/load checkpoint sample code
  * [chatgpt] add save/load checkpoint readme
  * [chatgpt] refactor save/load checkpoint readme
ver217 | 4ee311c026 | [chatgpt] strategy add prepare method (#2766) | 2 years ago
  * [chatgpt] strategy add prepare method
  * [chatgpt] refactor examples
  * [chatgpt] refactor strategy.prepare
  * [chatgpt] support save/load checkpoint
  * [chatgpt] fix unwrap actor
  * [chatgpt] fix unwrap actor
ver217 | a88bc828d5 | [chatgpt] disable shard init for colossalai (#2767) | 2 years ago
BlueRum | 613efebc5c | [chatgpt] support colossalai strategy to train rm (#2742) | 2 years ago
  * [chatgpt]fix train_rm bug with lora
  * [chatgpt]support colossalai strategy to train rm
  * fix pre-commit
  * fix pre-commit 2
BlueRum | 648183a960 | [chatgpt]fix train_rm bug with lora (#2741) | 2 years ago
CH.Li | 7aacfad8af | fix typo (#2721) | 2 years ago
ver217 | 9c0943ecdb | [chatgpt] optimize generation kwargs (#2717) | 2 years ago
  * [chatgpt] ppo trainer use default generate args
  * [chatgpt] example remove generation preparing fn
  * [chatgpt] benchmark remove generation preparing fn
  * [chatgpt] fix ci
binmakeswell | d4d3387f45 | [doc] add open-source contribution invitation (#2714) | 2 years ago
  * [doc] fix typo
  * [doc] add invitation
binmakeswell | 94f000515b | [doc] add Quick Preview (#2706) | 2 years ago
binmakeswell | 8408c852a6 | [app] fix ChatGPT requirements (#2704) | 2 years ago
ver217 | 1b34701027 | [app] add chatgpt application (#2698) | 2 years ago