Hongxin Liu
4b3240cb59
[booster] add low level zero plugin ( #3594 )
...
* [booster] add low level zero plugin (see the usage sketch after this entry)
* [booster] fix gemini plugin test
* [booster] fix precision
* [booster] add low level zero plugin test
* [test] fix booster plugin test oom
* [test] fix booster plugin test oom
* [test] fix googlenet and inception output trans
* [test] fix diffuser clip vision model
* [test] fix torchaudio_wav2vec2_base
* [test] fix low level zero plugin test
2 years ago
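As context for the plugin entry above: a minimal sketch of how a booster plugin such as the low level zero plugin is typically wired into a training loop. This is an illustration only; the constructor is left at its defaults and the toy model, optimizer, and data are made up for the example.

```python
import torch
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import LowLevelZeroPlugin

# Assumes the usual distributed launch environment (e.g. started via torchrun).
colossalai.launch_from_torch(config={})

model = torch.nn.Linear(16, 4).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

# The plugin decides how gradients and optimizer states are sharded.
booster = Booster(plugin=LowLevelZeroPlugin())
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)

x = torch.randn(8, 16).cuda()
y = torch.randn(8, 4).cuda()
loss = criterion(model(x), y)
booster.backward(loss, optimizer)  # backward goes through the booster, not loss.backward()
optimizer.step()
```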
digger-yu
b9a8dff7e5
[doc] Fix typo under colossalai and doc( #3618 )
...
* Fixed several spelling errors under colossalai
* Fix the spelling error in colossalai and docs directory
* Carefully changed the spelling errors under the example folder
* Update runtime_preparation_pass.py
revert autograft to autograd
* Update search_chunk.py
utile to until
* Update check_installation.py
change misteach to mismatch in line 91
* Update 1D_tensor_parallel.md
revert to perceptron
* Update 2D_tensor_parallel.md
revert to perceptron in line 73
* Update 2p5D_tensor_parallel.md
revert to perceptron in line 71
* Update 3D_tensor_parallel.md
revert to perceptron in line 80
* Update README.md
revert to resnet in line 42
* Update reorder_graph.py
revert to indice in line 7
* Update p2p.py
revert to megatron in line 94
* Update initialize.py
revert to torchrun in line 198
* Update routers.py
change to detailed in line 63
* Update routers.py
change to detailed in line 146
* Update README.md
revert random number in line 402
2 years ago
Tong Li
e1b0a78afa
Merge pull request #3621 from zhang-yi-chi/fix/chat-train-prompts-single-gpu
...
[chat] fix single gpu training bug in examples/train_prompts.py
2 years ago
ddobokki
df309fc6ab
[Chat] Remove duplicate functions ( #3625 )
2 years ago
Hongxin Liu
179558a87a
[devops] fix chat ci ( #3628 )
2 years ago
zhang-yi-chi
739cfe3360
[chat] fix enable single gpu training bug
2 years ago
digger-yu
d7bf284706
[chat] polish code note typo ( #3612 )
2 years ago
Yuanchen
c4709d34cf
Chat evaluate ( #3608 )
...
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2 years ago
digger-yu
633bac2f58
[doc] .github/workflows/README.md ( #3605 )
...
Fixed several word spelling errors
change "compatiblity" to "compatibility" etc.
2 years ago
digger-yu
becd3b0f54
[doc] fix setup.py typo ( #3603 )
...
Code optimization
change "vairable" to "variable"
2 years ago
digger-yu
7570d9ae3d
[doc] fix op_builder/README.md ( #3597 )
...
Code optimization
change "requries" to "requires"
2 years ago
Hongxin Liu
12eff9eb4c
[gemini] state dict supports fp16 ( #3590 )
...
* [gemini] save state dict support fp16
* [gemini] save state dict shard support fp16
* [gemini] fix state dict
* [gemini] fix state dict
2 years ago
github-actions[bot]
d544ed4345
[bot] Automated submodule synchronization ( #3596 )
...
Co-authored-by: github-actions <github-actions@github.com>
2 years ago
digger-yu
d96567bb5d
[misc] op_builder/builder.py ( #3593 )
...
Code optimization
The source code has not been modified; only a few spelling errors in the comments have been changed.
2 years ago
binmakeswell
5a79cffdfd
[coati] fix install cmd ( #3592 )
2 years ago
Yuanchen
1ec0d386a9
reconstruct chat trainer and fix training script ( #3588 )
...
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2 years ago
Hongxin Liu
dac127d0ee
[fx] fix meta tensor registration ( #3589 )
...
* [meta] fix torch 1.13.1
* [meta] fix torch 2.0.0
* [meta] fix torch 1.13.0
* [meta] polish code
2 years ago
Camille Zhong
36a519b49f
Update test_ci.sh
...
update
Update test_ci.sh
Update test_ci.sh
Update test_ci.sh
Update test_ci.sh
Update test_ci.sh
Update test_ci.sh
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update test_ci.sh
Update test_ci.sh
update
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
update ci
Update test_ci.sh
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
Update test_ci.sh
Update test_ci.sh
Update run_chatgpt_examples.yml
Update test_ci.sh
Update test_ci.sh
Update test_ci.sh
update test ci
RoBERTa for RLHF Stage 2 & 3 (still in testing)
Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
This reverts commit 06741d894d.
Add RoBERTa for RLHF stage 2 & 3
1. add roberta folder under model folder
2. add roberta option in train_reward_model.py
3. add some test in testci
Update test_ci.sh
Revert "Update test_ci.sh"
This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
Add RoBERTa for RLHF Stage 2 & 3 (test)
update roberta with coati
chat ci update
Revert "chat ci update"
This reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846.
[test]chat_update_ci
Update test_ci.sh
Update test_ci.sh
test
Update gpt_critic.py
Update gpt_critic.py
Update run_chatgpt_unit_tests.yml
update test ci
update
update
update
update
Update test_ci.sh
update
Update test_ci.sh
Update test_ci.sh
Update run_chatgpt_examples.yml
Update run_chatgpt_examples.yml
2 years ago
digger-yu
d0fbd4b86f
[example] fix community doc ( #3586 )
...
Adjusted the style of Community Examples to be consistent with other titles
2 years ago
Hongxin Liu
f313babd11
[gemini] support save state dict in shards ( #3581 )
...
* [gemini] support state dict shard
* [gemini] add test state dict shard
* [gemini] polish docstr
* [gemini] fix merge
* [gemini] polish code
2 years ago
tingfeng cao
7788e0b0a5
fix: fix sft ( #3568 )
2 years ago
digger-yu
6e7e43c6fe
[doc] Update .github/workflows/README.md ( #3577 )
...
Code optimization
I think two extra $ were entered here; they have been deleted.
2 years ago
Fazzie-Maqianli
6b1a39b17b
[coati] add custom model support guide ( #3579 )
2 years ago
binmakeswell
cc1eec2f53
[chat] update reward model sh ( #3578 )
2 years ago
csric
e355144375
[chatgpt] Detached PPO Training ( #3195 )
...
* run the base
* working on dist ppo
* sync
* detached trainer
* update detached trainer. no maker update function
* facing init problem
* 1 maker 1 trainer detached run. but no model update
* facing cuda problem
* fix save functions
* verified maker update
* nothing
* add ignore
* analyze loss issue
* remove some debug codes
* facing 2m1t stuck issue
* 2m1t verified
* do not use torchrun
* working on 2m2t
* working on 2m2t
* initialize strategy in ray actor env
* facing actor's init order issue
* facing ddp model update issue (need to unwrap ddp)
* unwrap ddp actor
* checking 1m2t stuck problem
* nothing
* set timeout for trainer choosing. It solves the stuck problem!
* delete some debug output
* rename to sync with upstream
* rename to sync with upstream
* coati rename
* nothing
* I am going to detach the replay buffer from the trainer and make it a Ray Actor. Two benefits: 1. support TP trainer. 2. asynchronous buffer operations (see the sketch after this entry)
* experience_maker_holder performs target-revolving _send_experience() instead of length comparison.
* move code to ray subfolder
* working on pipeline inference
* apply comments
---------
Co-authored-by: csric <richcsr256@gmail.com>
2 years ago
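The note above about detaching the replay buffer is the key architectural point: once the buffer is its own Ray actor, experience makers and trainers exchange data through actor handles instead of sharing a process. A minimal sketch of that idea, with hypothetical class and method names that are not taken from the coati/ray code:

```python
import ray

ray.init()  # assumes a local Ray runtime for the sketch

@ray.remote
class ReplayBuffer:
    """Detached buffer: makers append, trainers sample, each from their own process."""

    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self.items = []

    def append(self, experience):
        self.items.append(experience)
        if len(self.items) > self.capacity:
            self.items.pop(0)  # drop the oldest experience once full

    def sample(self, batch_size: int):
        return self.items[-batch_size:]

buffer = ReplayBuffer.remote()
# An experience maker would push asynchronously...
ray.get(buffer.append.remote({"obs": [0.1, 0.2], "reward": 1.0}))
# ...while a trainer pulls batches whenever it is ready to step.
batch = ray.get(buffer.sample.remote(batch_size=1))
print(batch)
```

Because neither side blocks on the other's loop, this gives the asynchronous buffer operations (and TP-friendly trainers) that the commit message mentions.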
YH
d329c294ec
Add docstr for zero3 chunk search utils ( #3572 )
2 years ago
digger-yu
9edeadfb24
[doc] Update 1D_tensor_parallel.md ( #3573 )
...
Display format optimization, same as fix #3562.
The English version is modified at the same time.
2 years ago
Hongxin Liu
173dad0562
[misc] add verbose arg for zero and op builder ( #3552 )
...
* [misc] add print verbose
* [gemini] add print verbose
* [zero] add print verbose for low level
* [misc] add print verbose for op builder
2 years ago
Hongxin Liu
4341f5e8e6
[lazyinit] fix clone and deepcopy ( #3553 )
2 years ago
digger-yu
1c7734bc94
[doc] Update 1D_tensor_parallel.md ( #3563 )
...
Display format optimization, fixes bug #3562.
Specific changes:
1. Translated "This is called a column-parallel fashion" into Chinese.
2. Used the ```math code block syntax to display a math expression as a block (see the example after this entry); the formula content itself is unchanged.
Please check that the math formula displays correctly.
If it looks OK, I will update the formatting of the English version of the formula in the same way.
2 years ago
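For readers unfamiliar with the syntax mentioned above: GitHub-flavored markdown renders the LaTeX inside a ```math fenced block as display math. The snippet below only illustrates the syntax, using a generic column-parallel linear-layer formula rather than the exact expression from 1D_tensor_parallel.md:

```math
Y = XA = X \begin{bmatrix} A_1 & A_2 \end{bmatrix} = \begin{bmatrix} XA_1 & XA_2 \end{bmatrix}
```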
binmakeswell
f1b3d60cae
[example] reorganize for community examples ( #3557 )
2 years ago
MisterLin1995
1a809eddaa
[chat] ChatGPT train prompts on ray example ( #3309 )
...
* [feat][chatgpt]train prompts on ray example
* [fix]simplify code
* [fix]remove deprecated parameter
* [fix]add dependencies
* [fix]method calling
* [fix]experience maker
* [fix]missing loss function
* [fix]init optimizer
* [feat]add usage comment
* [fix]rename files
* [fix]add readme
* [fix]file path
* [fix]move directory
---------
Co-authored-by: jiangwen <zxl265370@antgroup.com>
2 years ago
binmakeswell
535b896435
[chat] polish tutorial doc ( #3551 )
...
* [chat] clean up duplicate tutorial
* [chat] clean up duplicate tutorial
* [chat] clean up duplicate tutorial
* [chat] clean up duplicate tutorial
2 years ago
digger-yu
77efdfe1dd
[doc] Update README.md ( #3549 )
...
Format optimization: add [] outside of DeepSpeed
2 years ago
digger-yu
3f760da9f0
Update README.md ( #3548 )
...
Delete extra ")"
2 years ago
digger-yu
a3ac48ef3d
[doc] Update README-zh-Hans.md ( #3541 )
...
Fixing document link errors using absolute paths
2 years ago
natalie_cao
de84c0311a
Polish Code
2 years ago
Hongxin Liu
152239bbfa
[gemini] gemini supports lazy init ( #3379 )
...
* [gemini] fix nvme optimizer init
* [gemini] gemini supports lazy init
* [gemini] add init example
* [gemini] add fool model
* [zero] update gemini ddp
* [zero] update init example
* add chunk method
* add chunk method
* [lazyinit] fix lazy tensor tolist
* [gemini] fix buffer materialization
* [misc] remove useless file
* [booster] update gemini plugin
* [test] update gemini plugin test
* [test] fix gemini plugin test
* [gemini] fix import
* [gemini] fix import
* [lazyinit] use new metatensor
* [lazyinit] use new metatensor
* [lazyinit] fix __set__ method
2 years ago
jiangmingyan
366a035552
[checkpoint] Sharded saved checkpoints need to be compatible with the naming format of hf checkpoint files ( #3479 )
...
* [checkpoint] support huggingface style sharded checkpoint, to be compatible with hf file naming format
* [checkpoint] support huggingface style sharded checkpoint, to be compatible with hf file naming format
* [checkpoint] Shard saved checkpoint add 'variant' field to customize filename
* [checkpoint] Shard saved checkpoint add 'variant' field to customize filename
* [checkpoint] Shard saved checkpoint add 'variant' field to customize filename
* [checkpoint] Shard saved checkpoint add 'variant' field to customize filename (see the filename sketch after this entry)
---------
Co-authored-by: luchen <luchen@luchendeMacBook-Pro.local>
Co-authored-by: luchen <luchen@luchendeMBP.lan>
2 years ago
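A hedged sketch of the Hugging Face-style shard naming that the entry above targets. The base pattern (`pytorch_model-00001-of-00002.bin` plus a `pytorch_model.bin.index.json` index) follows the transformers convention; how the optional 'variant' suffix is spliced in here is an assumption for illustration, not taken from the PR:

```python
from typing import Optional

def shard_filename(base: str = "pytorch_model",
                   ext: str = "bin",
                   idx: int = 1,
                   total: int = 2,
                   variant: Optional[str] = None) -> str:
    # e.g. pytorch_model-00001-of-00002.bin
    # variant placement (pytorch_model.fp16-00001-of-00002.bin) is an assumption
    stem = f"{base}.{variant}" if variant else base
    return f"{stem}-{idx:05d}-of-{total:05d}.{ext}"

print(shard_filename())                       # pytorch_model-00001-of-00002.bin
print(shard_filename(idx=2, variant="fp16"))  # pytorch_model.fp16-00002-of-00002.bin
```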
Yuanchen
7182ac2a04
[chat]add examples of training with limited resources in chat readme ( #3536 )
...
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2 years ago
zhang-yi-chi
e6a132a449
[chat]: add vf_coef argument for PPOTrainer ( #3318 )
2 years ago
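For context on the argument name above: in the standard PPO objective (Schulman et al., 2017), a value-function coefficient weights the critic term against the clipped policy term. A generic form, not taken from this PR:

```math
L(\theta) = \hat{\mathbb{E}}_t\left[ L_t^{CLIP}(\theta) - c_1\, L_t^{VF}(\theta) + c_2\, S[\pi_\theta](s_t) \right]
```

Here $c_1$ plays the role that `vf_coef` typically plays, and $c_2$ weights the entropy bonus.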
ver217
89fd10a1c9
[chat] add zero2 cpu strategy for sft training ( #3520 )
2 years ago
binmakeswell
990d4c3e4e
[doc] hide diffusion in application path ( #3519 )
...
- [ ] Stable Diffusion
- [ ] Dreambooth
It's easy for users to think that we don't support them yet. Add them after migrating them from example to application
https://github.com/hpcaitech/ColossalAI/tree/main/examples/images
2 years ago
binmakeswell
0c0455700f
[doc] add requirement and highlight application ( #3516 )
...
* [doc] add requirement and highlight application
* [doc] link example and application
2 years ago
NatalieC323
635d0a1baf
[Chat Community] Update README.md (fixed #3487) ( #3506 )
...
* Update README.md
* Update README.md
* Update README.md
* Update README.md
---------
Co-authored-by: Fazzie-Maqianli <55798671+Fazziekey@users.noreply.github.com>
2 years ago
YH
bcf0cbcbe7
[doc] Add docs for clip args in zero optim ( #3504 )
2 years ago
gongenlei
a7ca297281
[coati] Fix LlamaCritic ( #3475 )
...
* mv LlamaForCausalLM to LlamaModel
* rm unused imports
---------
Co-authored-by: gongenlei <gongenlei@baidu.com>
2 years ago
mandoxzhang
8f2c55f9c9
[example] remove redundant texts & update roberta ( #3493 )
...
* update roberta example
* update roberta example
* modify conflict & update roberta
2 years ago
mandoxzhang
ab5fd127e3
[example] update roberta with newer ColossalAI ( #3472 )
...
* update roberta example
* update roberta example
2 years ago
NatalieC323
fb8fae6f29
Revert "[dreambooth] fixing the incompatibity in requirements.txt ( #3190 ) ( #3378 )" ( #3481 )
2 years ago