Commit Graph

169 Commits (d8ceeac14e54c5c568e916c061b86d9a53a54f30)

Fazzie-Maqianli bbf9c827c3 (2023-03-02 15:00:05 +08:00)
[ChatGPT] fix README (#2966)

* Update README.md

* fix README

* Update README.md

* Update README.md

---------

Co-authored-by: fastalgo <youyang@cs.berkeley.edu>
Co-authored-by: BlueRum <70618399+ht-zhou@users.noreply.github.com>

binmakeswell b0a8766381 (2023-03-02 11:22:08 +08:00)
[doc] fix chatgpt inference typo (#2964)

BlueRum 489a9566af (2023-03-01 13:39:39 +08:00)
[chatgpt]add inference example (#2944)

* [chatgpt] support inference example

* Create inference.sh

* Update README.md

* Delete inference.sh

* Update inference.py

binmakeswell 8264cd7ef1 (2023-02-28 15:39:51 +08:00)
[doc] add env scope (#2933)

BlueRum 2e16f842a9 (2023-02-22 16:58:11 +08:00)
[chatgpt]support opt & gpt for rm training (#2876)

BlueRum 34ca324b0d (2023-02-22 10:00:26 +08:00)
[chatgpt] Support saving ckpt in examples (#2846)

* [chatgpt]fix train_rm bug with lora

* [chatgpt]support colossalai strategy to train rm

* fix pre-commit

* fix pre-commit 2

* [chatgpt]fix rm eval typo

* fix rm eval

* fix pre commit

* add support of saving ckpt in examples

* fix single-gpu save

BlueRum 3eebc4dff7 (2023-02-21 11:35:45 +08:00)
[chatgpt] fix rm eval (#2829)

* [chatgpt]fix train_rm bug with lora

* [chatgpt]support colossalai strategy to train rm

* fix pre-commit

* fix pre-commit 2

* [chatgpt]fix rm eval typo

* fix rm eval

* fix pre commit

ver217 b6a108cb91 (2023-02-20 15:22:36 +08:00)
[chatgpt] add test checkpoint (#2797)

* [chatgpt] add test checkpoint

* [chatgpt] test checkpoint use smaller model

ver217 a619a190df (2023-02-17 12:43:31 +08:00)
[chatgpt] update readme about checkpoint (#2792)

* [chatgpt] add save/load checkpoint sample code

* [chatgpt] add save/load checkpoint readme

* [chatgpt] refactor save/load checkpoint readme

ver217 4ee311c026 (2023-02-17 11:27:27 +08:00)
[chatgpt] startegy add prepare method (#2766)

* [chatgpt] startegy add prepare method

* [chatgpt] refactor examples

* [chatgpt] refactor strategy.prepare

* [chatgpt] support save/load checkpoint

* [chatgpt] fix unwrap actor

* [chatgpt] fix unwrap actor

ver217 a88bc828d5 (2023-02-16 20:09:34 +08:00)
[chatgpt] disable shard init for colossalai (#2767)

BlueRum 613efebc5c (2023-02-16 11:24:07 +08:00)
[chatgpt] support colossalai strategy to train rm (#2742)

* [chatgpt]fix train_rm bug with lora

* [chatgpt]support colossalai strategy to train rm

* fix pre-commit

* fix pre-commit 2

BlueRum 648183a960 (2023-02-16 10:25:17 +08:00)
[chatgpt]fix train_rm bug with lora (#2741)

CH.Li 7aacfad8af (2023-02-15 14:54:53 +08:00)
fix typo (#2721)

ver217 9c0943ecdb (2023-02-15 13:59:58 +08:00)
[chatgpt] optimize generation kwargs (#2717)

* [chatgpt] ppo trainer use default generate args

* [chatgpt] example remove generation preparing fn

* [chatgpt] benchmark remove generation preparing fn

* [chatgpt] fix ci

binmakeswell d4d3387f45 (2023-02-15 11:08:35 +08:00)
[doc] add open-source contribution invitation (#2714)

* [doc] fix typo

* [doc] add invitation

binmakeswell 94f000515b (2023-02-14 23:07:30 +08:00)
[doc] add Quick Preview (#2706)

binmakeswell 8408c852a6 (2023-02-14 22:48:15 +08:00)
[app] fix ChatGPT requirements (#2704)

ver217 1b34701027 (2023-02-14 22:17:25 +08:00)
[app] add chatgpt application (#2698)