BlueRum
c8b723d6c2
[chat] Update README ( #3296 )
...
* Update README.md
* Update README.md
* Update README.md
* update example readme
2023-03-29 02:32:17 +08:00
ver217
73b542a124
[coati] inference supports profanity check ( #3295 )
2023-03-29 02:14:35 +08:00
ver217
ce2cafae76
[coati] add repetition_penalty for inference ( #3294 )
2023-03-29 01:18:45 +08:00
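The repetition penalty added in #3294 is, in most inference stacks, the CTRL-style logit rescaling: logits of tokens already generated are divided by the penalty when positive and multiplied by it when negative, making repeats less likely either way. A minimal sketch (function name and plain-list logits are illustrative, not the coati API):

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Scale down logits of tokens that already appear in the output.

    Positive logits are divided by the penalty, negative logits are
    multiplied by it, so repeated tokens become less likely in both cases.
    """
    adjusted = list(logits)
    for tok in set(generated_ids):
        if adjusted[tok] > 0:
            adjusted[tok] /= penalty
        else:
            adjusted[tok] *= penalty
    return adjusted
```

With `penalty=1.0` the logits pass through unchanged; values around 1.1 to 1.3 are typical defaults.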
Fazzie-Maqianli
a88ed0f83a
add limit ( #3293 )
2023-03-29 00:53:23 +08:00
Fazzie-Maqianli
c5484281aa
[ColossalChat] add citation for datasets ( #3292 )
2023-03-29 00:38:36 +08:00
Fazzie-Maqianli
ec7af22a43
fix image ( #3288 )
2023-03-28 23:34:21 +08:00
Fazzie-Maqianli
1f7d9afbf8
add example ( #3286 )
2023-03-28 23:07:15 +08:00
ver217
4905b21b94
[coati] fix inference output ( #3285 )
...
* [coati] fix inference requirements
* [coati] add output postprocess
* [coati] update inference readme
* [coati] fix inference requirements
2023-03-28 21:20:28 +08:00
Fazzie-Maqianli
bb6196e71a
remove chatgpt ( #3284 )
2023-03-28 20:29:09 +08:00
Fazzie-Maqianli
b0ce5a1032
[Coati] first commit ( #3283 )
2023-03-28 20:25:36 +08:00
YuliangLiu0306
fd6add575d
[examples] polish AutoParallel readme ( #3270 )
2023-03-28 10:40:07 +08:00
HELSON
02b058032d
[fx] meta registration compatibility ( #3253 )
...
* [fx] meta registration compatibility
* fix error
2023-03-27 15:22:17 +08:00
Frank Lee
73d3e4d309
[booster] implemented the torch ddp + resnet example ( #3232 )
...
* [booster] implemented the torch ddp + resnet example
* polish code
2023-03-27 10:24:14 +08:00
YH
1a229045af
Add interface for colo tensor dp size ( #3227 )
2023-03-27 09:42:21 +08:00
Hakjin Lee
1653063fce
[CI] Fix pre-commit workflow ( #3238 )
2023-03-27 09:41:08 +08:00
NatalieC323
280fcdc485
polish code ( #3194 )
...
Co-authored-by: YuliangLiu0306 <72588413+YuliangLiu0306@users.noreply.github.com>
2023-03-24 18:44:43 +08:00
YuliangLiu0306
4d5d8f98a4
[API] implement device mesh manager ( #3221 )
...
* [API] implement device mesh manager
* polish
2023-03-24 13:39:12 +08:00
CsRic
052b03e83f
limit torch version ( #3213 )
...
Co-authored-by: csric <richcsr256@gmail.com>
2023-03-24 13:36:16 +08:00
binmakeswell
d32ef94ad9
[doc] fix typo ( #3222 )
...
* [doc] fix typo
* [doc] fix typo
2023-03-24 13:33:35 +08:00
YuliangLiu0306
045afa3ea2
[hotfix] skip torchaudio tracing test ( #3211 )
...
* [hotfix] skip torchaudio tracing test
* fix lazy init test issue
2023-03-24 12:15:33 +08:00
ver217
78fd31f9c1
[chatgpt] add precision option for colossalai ( #3233 )
2023-03-24 12:15:06 +08:00
Fazzie-Maqianli
bd39877da4
support instruct training ( #3230 )
2023-03-24 11:45:01 +08:00
Camille Zhong
9bc702ab48
[doc] update chatgpt doc paper link ( #3229 )
...
issue #3189
2023-03-24 11:21:39 +08:00
Fazzie-Maqianli
bbac6760e5
fix torch version ( #3225 )
2023-03-23 20:56:35 +08:00
Fazzie-Maqianli
fa97a9cab4
[chatgpt] unify datasets ( #3218 )
2023-03-23 17:38:30 +08:00
Fazzie-Maqianli
4fd4bd9d9a
[chatgpt] support instruct training ( #3216 )
2023-03-23 16:46:20 +08:00
Frank Lee
cd142fbefa
[api] implemented the checkpoint io module ( #3205 )
...
* [api] implemented the checkpoint io module
* polish code
* polish code
2023-03-23 10:53:17 +08:00
ver217
f8289d4221
[lazyinit] combine lazy tensor with dtensor ( #3204 )
...
* [lazyinit] lazy tensor add distribute
* [lazyinit] refactor distribute
* [lazyinit] add test dist lazy init
* [lazyinit] add verbose info for dist lazy init
* [lazyinit] fix rnn flatten weight op
* [lazyinit] polish test
* [lazyinit] polish test
* [lazyinit] fix lazy tensor data setter
* [lazyinit] polish test
* [lazyinit] fix clean
* [lazyinit] make materialize inplace
* [lazyinit] refactor materialize
* [lazyinit] refactor test distribute
* [lazyinit] fix requires_grad
* [lazyinit] fix tolist after materialization
* [lazyinit] refactor distribute module
* [lazyinit] polish docstr
* [lazyinit] polish lazy init context
* [lazyinit] temporarily skip test
* [lazyinit] polish test
* [lazyinit] add docstr
2023-03-23 10:53:06 +08:00
Yan Fang
189347963a
[auto] fix requirements typo for issue #3125 ( #3209 )
2023-03-23 10:22:08 +08:00
Yuanchen
9998d5ef64
[chatgpt] add reward model code for DeBERTa ( #3199 )
...
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2023-03-22 19:09:39 +08:00
Fazzie-Maqianli
1e1b9d2fea
[chatgpt] support LLaMA ( #3070 )
2023-03-22 15:44:31 +08:00
Frank Lee
e3ad88fb48
[booster] implemented the cluster module ( #3191 )
...
* [booster] implemented the cluster module
* polish code
2023-03-22 14:11:54 +08:00
YuliangLiu0306
019a847432
[Analyzer] fix analyzer tests ( #3197 )
2023-03-22 13:38:11 +08:00
YuliangLiu0306
f57d34958b
[FX] refactor experimental tracer and adapt it with hf models ( #3157 )
...
* pass gpt trace and meta_prop
* pass t5 trace and meta_prop
* [FX] refactor experimental tracer and adapt it with hf models
* pass all mainstream model zoo
* fix CI
* fix CI
* fix CI
* fix CI
* fix CI
* fix CI
* fix CI
* fix CI
* skip tests
* fix CI
* using packaging version
* polish
2023-03-22 10:40:33 +08:00
pgzhang
b429529365
[chatgpt] add supervised learning fine-tune code ( #3183 )
...
* [chatgpt] add supervised fine-tune code
* [chatgpt] delete unused code and modified comment code
* [chatgpt] use pytorch distributed sampler instead
---------
Co-authored-by: zhangpengpeng <zhangpengpeng@joyy.com>
2023-03-22 09:59:42 +08:00
Frank Lee
e7f3bed2d3
[booster] added the plugin base and torch ddp plugin ( #3180 )
...
* [booster] added the plugin base and torch ddp plugin
* polish code
* polish code
* polish code
2023-03-21 17:39:30 +08:00
NatalieC323
e5f668f280
[dreambooth] fix the incompatibility in requirements.txt ( #3190 )
...
* Update requirements.txt
* Update environment.yaml
* Update README.md
* Update environment.yaml
* Update README.md
* Update README.md
* Delete requirements_colossalai.txt
* Update requirements.txt
* Update README.md
2023-03-21 16:01:13 +08:00
Zihao
18dbe76cae
[auto-parallel] add auto-offload feature ( #3154 )
...
* add auto-offload feature
* polish code
* fix syn offload runtime pass bug
* add offload example
* fix offload testing bug
* fix example testing bug
2023-03-21 14:17:41 +08:00
YuliangLiu0306
258b43317c
[hotfix] fix layout conversion issue ( #3188 )
2023-03-21 13:24:18 +08:00
YH
80aed29cd3
[zero] Refactor ZeroContextConfig class using dataclass ( #3186 )
2023-03-21 12:36:47 +08:00
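The refactor in #3186 moves a hand-written config class onto `dataclasses`, which replaces `__init__` boilerplate with declared fields and keeps any invariant checks in `__post_init__`. A minimal sketch (the field names here are assumptions for illustration, not the actual `ZeroContextConfig` attributes):

```python
from dataclasses import dataclass

@dataclass
class ZeroContextConfig:
    # hypothetical fields standing in for the real config options
    target_device: str = "cpu"
    replicated: bool = True
    shard_param: bool = False

    def __post_init__(self):
        # validation that an __init__-based class would bury in boilerplate
        if self.shard_param and not self.replicated:
            raise ValueError("shard_param requires replicated=True")
```

A side benefit of the dataclass form is the generated `__eq__` and `__repr__`, so two configs with the same fields compare equal and print readably.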
YH
9d644ff09f
Fix docstr for zero statedict ( #3185 )
2023-03-21 11:48:21 +08:00
zbian
7bc0afc901
updated flash attention usage
2023-03-20 17:57:04 +08:00
Frank Lee
085e7f4eff
[test] fixed torchrec registration in model zoo ( #3177 )
...
* [test] fixed torchrec registration in model zoo
* polish code
* polish code
* polish code
2023-03-20 16:19:06 +08:00
NatalieC323
4e921cfbd6
[examples] solve the diffusion incompatibility issue #3169 ( #3170 )
...
* Update requirements.txt
* Update environment.yaml
* Update README.md
* Update environment.yaml
2023-03-20 14:19:05 +08:00
Frank Lee
a9b8402d93
[booster] added the accelerator implementation ( #3159 )
2023-03-20 13:59:24 +08:00
Frank Lee
1ad3a636b1
[test] fixed torchrec model test ( #3167 )
...
* [test] fixed torchrec model test
* polish code
* polish code
* polish code
* polish code
* polish code
* polish code
2023-03-20 11:40:25 +08:00
Saurav Maheshkar
20d1c99444
[refactor] update docs ( #3174 )
...
* refactor: README-zh-Hans
* refactor: REFERENCE
* docs: update paths in README
2023-03-20 10:52:01 +08:00
BlueRum
7548ca5a54
[chatgpt] Reward Model Training Process update ( #3133 )
...
* add normalize function to value_head in bloom rm
* add normalization to value_function in gpt_rm
* add normalization to value_head of opt_rm
* add Anthropic/hh-rlhf dataset
* Update __init__.py
* Add LogExpLoss in RM training
* Update __init__.py
* update rm trainer to use acc as target
* update example/train_rm
* Update train_rm.sh
* code style
* Update README.md
* Update README.md
* add rm test to ci
* fix tokenizer
* fix typo
* change batch size to avoid OOM in CI
* Update test_ci.sh
2023-03-20 09:59:06 +08:00
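The LogExpLoss added to RM training in #3133 is, in standard RLHF reward-model setups, the pairwise loss log(1 + exp(r_rejected - r_chosen)), i.e. -log(sigmoid(r_chosen - r_rejected)). A minimal scalar sketch (the function name and signature are illustrative, not the coati API, which operates on tensors):

```python
import math

def log_exp_loss(chosen_reward, rejected_reward):
    """Pairwise reward-model loss: log(1 + exp(r_rejected - r_chosen)).

    Shrinks toward zero as the chosen response is scored increasingly
    above the rejected one; equals log(2) when the two rewards tie.
    """
    return math.log(1.0 + math.exp(rejected_reward - chosen_reward))
```

Training on (chosen, rejected) pairs with this loss pushes the model to rank the preferred response higher, which is what the accuracy target mentioned in the commit body measures.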
ver217
1e58d31bb7
[chatgpt] fix trainer generate kwargs ( #3166 )
2023-03-17 17:31:22 +08:00
ver217
c474fda282
[chatgpt] fix ppo training hanging problem with gemini ( #3162 )
...
* [chatgpt] fix generation early stopping
* [chatgpt] fix train prompts example
2023-03-17 15:41:47 +08:00