Hongxin Liu
f8288315d9
[chat] polish performance evaluator ( #3647 )
2023-04-26 17:34:59 +08:00
Hongxin Liu
50793b35f4
[gemini] accelerate inference ( #3641 )
...
* [gemini] support not scattering after inference
* [chat] update colossalai strategy
* [chat] fix opt benchmark
* [chat] update opt benchmark
* [gemini] optimize inference
* [test] add gemini inference test
* [chat] fix unit test ci
* [chat] fix ci
* [chat] fix ci
* [chat] skip checkpoint test
2023-04-26 16:32:40 +08:00
Hongxin Liu
4b3240cb59
[booster] add low level zero plugin ( #3594 )
...
* [booster] add low level zero plugin
* [booster] fix gemini plugin test
* [booster] fix precision
* [booster] add low level zero plugin test
* [test] fix booster plugin test oom
* [test] fix booster plugin test oom
* [test] fix googlenet and inception output trans
* [test] fix diffuser clip vision model
* [test] fix torchaudio_wav2vec2_base
* [test] fix low level zero plugin test
2023-04-26 14:37:25 +08:00
digger-yu
b9a8dff7e5
[doc] Fix typos under colossalai and doc ( #3618 )
...
* Fixed several spelling errors under colossalai
* Fixed spelling errors in the colossalai and docs directories
* Carefully changed spelling errors under the example folder
* Update runtime_preparation_pass.py
revert autograft to autograd
* Update search_chunk.py
utile to until
* Update check_installation.py
change misteach to mismatch in line 91
* Update 1D_tensor_parallel.md
revert to perceptron
* Update 2D_tensor_parallel.md
revert to perceptron in line 73
* Update 2p5D_tensor_parallel.md
revert to perceptron in line 71
* Update 3D_tensor_parallel.md
revert to perceptron in line 80
* Update README.md
revert to resnet in line 42
* Update reorder_graph.py
revert to indice in line 7
* Update p2p.py
revert to megatron in line 94
* Update initialize.py
revert to torchrun in line 198
* Update routers.py
change to detailed in line 63
* Update routers.py
change to detailed in line 146
* Update README.md
revert random number in line 402
2023-04-26 11:38:43 +08:00
Tong Li
e1b0a78afa
Merge pull request #3621 from zhang-yi-chi/fix/chat-train-prompts-single-gpu
...
[chat] fix single gpu training bug in examples/train_prompts.py
2023-04-24 22:13:54 +08:00
ddobokki
df309fc6ab
[Chat] Remove duplicate functions ( #3625 )
2023-04-24 12:23:15 +08:00
Hongxin Liu
179558a87a
[devops] fix chat ci ( #3628 )
2023-04-24 10:55:14 +08:00
zhang-yi-chi
739cfe3360
[chat] fix enable single gpu training bug
2023-04-22 14:16:08 +08:00
digger-yu
d7bf284706
[chat] polish code note typo ( #3612 )
2023-04-20 17:22:15 +08:00
Yuanchen
c4709d34cf
Chat evaluate ( #3608 )
...
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2023-04-20 11:12:24 +08:00
digger-yu
633bac2f58
[doc] .github/workflows/README.md ( #3605 )
...
Fixed several word spelling errors
changed "compatiblity" to "compatibility" etc.
2023-04-20 10:36:28 +08:00
digger-yu
becd3b0f54
[doc] fix setup.py typo ( #3603 )
...
Code optimization
changed "vairable" to "variable"
2023-04-19 17:28:15 +08:00
digger-yu
7570d9ae3d
[doc] fix op_builder/README.md ( #3597 )
...
Code optimization
changed "requries" to "requires"
2023-04-19 15:56:01 +08:00
Hongxin Liu
12eff9eb4c
[gemini] state dict supports fp16 ( #3590 )
...
* [gemini] save state dict support fp16
* [gemini] save state dict shard support fp16
* [gemini] fix state dict
* [gemini] fix state dict
2023-04-19 11:01:48 +08:00
github-actions[bot]
d544ed4345
[bot] Automated submodule synchronization ( #3596 )
...
Co-authored-by: github-actions <github-actions@github.com>
2023-04-19 10:38:12 +08:00
digger-yu
d96567bb5d
[misc] op_builder/builder.py ( #3593 )
...
Code optimization
The source code has not been modified; only a few spelling errors in the comments have been fixed
2023-04-18 19:14:59 +08:00
binmakeswell
5a79cffdfd
[coati] fix install cmd ( #3592 )
2023-04-18 18:19:48 +08:00
Yuanchen
1ec0d386a9
reconstruct chat trainer and fix training script ( #3588 )
...
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2023-04-18 16:44:03 +08:00
Hongxin Liu
dac127d0ee
[fx] fix meta tensor registration ( #3589 )
...
* [meta] fix torch 1.13.1
* [meta] fix torch 2.0.0
* [meta] fix torch 1.13.0
* [meta] polish code
2023-04-18 16:20:36 +08:00
Camille Zhong
36a519b49f
Update test_ci.sh
...
* update
* Update test_ci.sh
* Update run_chatgpt_examples.yml
* update ci
* update test ci
* RoBERTa for RLHF Stage 2 & 3 (still in testing)
* Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)": this reverts commit 06741d894d.
* Add RoBERTa for RLHF stage 2 & 3:
1. add roberta folder under model folder
2. add roberta option in train_reward_model.py
3. add some tests in test ci
* Revert "Update test_ci.sh": this reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
* Add RoBERTa for RLHF Stage 2 & 3 (test)
* update roberta with coati
* chat ci update
* Revert "chat ci update": this reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846.
* [test] chat_update_ci
* test
* Update gpt_critic.py
* Update run_chatgpt_unit_tests.yml
2023-04-18 14:33:12 +08:00
digger-yu
d0fbd4b86f
[example] fix community doc ( #3586 )
...
Adjusted the style of Community Examples to be consistent with other titles
2023-04-18 10:37:34 +08:00
Hongxin Liu
f313babd11
[gemini] support save state dict in shards ( #3581 )
...
* [gemini] support state dict shard
* [gemini] add test state dict shard
* [gemini] polish docstr
* [gemini] fix merge
* [gemini] polish code
2023-04-17 17:11:09 +08:00
tingfeng cao
7788e0b0a5
fix: fix sft ( #3568 )
2023-04-17 16:47:44 +08:00
digger-yu
6e7e43c6fe
[doc] Update .github/workflows/README.md ( #3577 )
...
Code optimization
Removed two extra $ characters that had been entered here
2023-04-17 16:27:38 +08:00
Fazzie-Maqianli
6b1a39b17b
[coati] add custom model support guide ( #3579 )
2023-04-17 15:40:41 +08:00
binmakeswell
cc1eec2f53
[chat] update reward model sh ( #3578 )
2023-04-17 15:02:55 +08:00
csric
e355144375
[chatgpt] Detached PPO Training ( #3195 )
...
* run the base
* working on dist ppo
* sync
* detached trainer
* update detached trainer. no maker update function
* facing init problem
* 1 maker 1 trainer detached run. but no model update
* facing cuda problem
* fix save functions
* verified maker update
* nothing
* add ignore
* analyze loss issue
* remove some debug codes
* facing 2m1t stuck issue
* 2m1t verified
* do not use torchrun
* working on 2m2t
* working on 2m2t
* initialize strategy in ray actor env
* facing actor's init order issue
* facing ddp model update issue (need to unwrap ddp)
* unwrap ddp actor
* checking 1m2t stuck problem
* nothing
* set timeout for trainer choosing. It solves the stuck problem!
* delete some debug output
* rename to sync with upstream
* rename to sync with upstream
* coati rename
* nothing
* I am going to detach the replay buffer from the trainer and make it a Ray Actor. Two benefits: 1. support TP trainer. 2. asynchronous buffer operations
* experience_maker_holder performs target-revolving _send_experience() instead of length comparison.
* move code to ray subfolder
* working on pipeline inference
* apply comments
---------
Co-authored-by: csric <richcsr256@gmail.com>
2023-04-17 14:46:50 +08:00
YH
d329c294ec
Add docstr for zero3 chunk search utils ( #3572 )
2023-04-17 12:44:17 +08:00
digger-yu
9edeadfb24
[doc] Update 1D_tensor_parallel.md ( #3573 )
...
Display format optimization, same as fix #3562
The English version was modified at the same time
2023-04-17 12:19:53 +08:00
Hongxin Liu
173dad0562
[misc] add verbose arg for zero and op builder ( #3552 )
...
* [misc] add print verbose
* [gemini] add print verbose
* [zero] add print verbose for low level
* [misc] add print verbose for op builder
2023-04-17 11:25:35 +08:00
Hongxin Liu
4341f5e8e6
[lazyinit] fix clone and deepcopy ( #3553 )
2023-04-17 11:25:13 +08:00
digger-yu
1c7734bc94
[doc] Update 1D_tensor_parallel.md ( #3563 )
...
Display format optimization, fixes bug #3562
Specific changes:
1. Translated "This is called a column-parallel fashion" into Chinese
2. Used the ```math code block syntax to display math expressions as blocks; no modification of formula content
Please check that the math formulas are displayed correctly
If OK, I will change the format of the formulas in the English version in parallel
2023-04-14 22:12:32 +08:00
binmakeswell
f1b3d60cae
[example] reorganize for community examples ( #3557 )
2023-04-14 16:27:48 +08:00
MisterLin1995
1a809eddaa
[chat] ChatGPT train prompts on ray example ( #3309 )
...
* [feat][chatgpt]train prompts on ray example
* [fix]simplify code
* [fix]remove deprecated parameter
* [fix]add dependencies
* [fix]method calling
* [fix]experience maker
* [fix]missing loss function
* [fix]init optimizer
* [feat]add usage comment
* [fix]rename files
* [fix]add readme
* [fix]file path
* [fix]move directory
---------
Co-authored-by: jiangwen <zxl265370@antgroup.com>
2023-04-13 18:18:36 +08:00
binmakeswell
535b896435
[chat] polish tutorial doc ( #3551 )
...
* [chat] clean up duplicate tutorial
2023-04-13 18:11:48 +08:00
digger-yu
77efdfe1dd
[doc] Update README.md ( #3549 )
...
Format optimization: add [] around DeepSpeed
2023-04-13 17:11:55 +08:00
digger-yu
3f760da9f0
Update README.md ( #3548 )
...
Delete more stray ")" characters
2023-04-13 16:49:57 +08:00
digger-yu
a3ac48ef3d
[doc] Update README-zh-Hans.md ( #3541 )
...
Fixing document link errors using absolute paths
2023-04-12 23:09:30 +08:00
natalie_cao
de84c0311a
Polish Code
2023-04-12 18:19:46 +08:00
Hongxin Liu
152239bbfa
[gemini] gemini supports lazy init ( #3379 )
...
* [gemini] fix nvme optimizer init
* [gemini] gemini supports lazy init
* [gemini] add init example
* [gemini] add fool model
* [zero] update gemini ddp
* [zero] update init example
* add chunk method
* add chunk method
* [lazyinit] fix lazy tensor tolist
* [gemini] fix buffer materialization
* [misc] remove useless file
* [booster] update gemini plugin
* [test] update gemini plugin test
* [test] fix gemini plugin test
* [gemini] fix import
* [gemini] fix import
* [lazyinit] use new metatensor
* [lazyinit] use new metatensor
* [lazyinit] fix __set__ method
2023-04-12 16:03:25 +08:00
jiangmingyan
366a035552
[checkpoint] Sharded checkpoint saving needs to be compatible with the naming format of hf checkpoint files ( #3479 )
...
* [checkpoint] support huggingface style sharded checkpoint, to be compatible with hf file naming format
* [checkpoint] Shard saved checkpoint: add 'variant' field to customize filename
---------
Co-authored-by: luchen <luchen@luchendeMacBook-Pro.local>
Co-authored-by: luchen <luchen@luchendeMBP.lan>
2023-04-12 16:02:17 +08:00
Yuanchen
7182ac2a04
[chat]add examples of training with limited resources in chat readme ( #3536 )
...
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2023-04-12 15:47:09 +08:00
zhang-yi-chi
e6a132a449
[chat]: add vf_coef argument for PPOTrainer ( #3318 )
2023-04-11 09:54:59 +08:00
ver217
89fd10a1c9
[chat] add zero2 cpu strategy for sft training ( #3520 )
2023-04-10 19:00:13 +08:00
binmakeswell
990d4c3e4e
[doc] hide diffusion in application path ( #3519 )
...
- [ ] Stable Diffusion
- [ ] Dreambooth
It's easy for users to think that we don't support them yet. Add them back after migrating them from examples to applications
https://github.com/hpcaitech/ColossalAI/tree/main/examples/images
2023-04-10 17:52:24 +08:00
binmakeswell
0c0455700f
[doc] add requirement and highlight application ( #3516 )
...
* [doc] add requirement and highlight application
* [doc] link example and application
2023-04-10 17:37:16 +08:00
NatalieC323
635d0a1baf
[Chat Community] Update README.md (fixed #3487) ( #3506 )
...
* Update README.md
---------
Co-authored-by: Fazzie-Maqianli <55798671+Fazziekey@users.noreply.github.com>
2023-04-10 14:36:39 +08:00
YH
bcf0cbcbe7
[doc] Add docs for clip args in zero optim ( #3504 )
2023-04-10 11:11:28 +08:00
gongenlei
a7ca297281
[coati] Fix LlamaCritic ( #3475 )
...
* mv LlamaForCausalLM to LlamaModel
* rm unused imports
---------
Co-authored-by: gongenlei <gongenlei@baidu.com>
2023-04-07 11:39:09 +08:00
mandoxzhang
8f2c55f9c9
[example] remove redundant texts & update roberta ( #3493 )
...
* update roberta example
* update roberta example
* modify conflict & update roberta
2023-04-07 11:33:32 +08:00