Frank Lee
015af592f8
[shardformer] integrated linear 1D with dtensor ( #3996 )
...
* [shardformer] integrated linear 1D with dtensor
* polish code
2023-07-04 16:05:01 +08:00
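A minimal sketch of the 1D (column-parallel) linear idea behind the commit above: each rank keeps a slice of the weight and optionally all-gathers the partial outputs. This is a generic PyTorch illustration of the technique, not ColossalAI's dtensor-backed layer.

    import torch
    import torch.nn as nn
    import torch.distributed as dist

    class Linear1DCol(nn.Module):
        """Column-sharded linear: each rank keeps out_features // world_size
        output rows; set gather_output=True to rebuild the full output."""
        def __init__(self, in_features, out_features, gather_output=False):
            super().__init__()
            self.world_size = dist.get_world_size()
            assert out_features % self.world_size == 0
            self.weight = nn.Parameter(
                torch.empty(out_features // self.world_size, in_features))
            nn.init.kaiming_uniform_(self.weight)
            self.gather_output = gather_output

        def forward(self, x):
            local_out = x @ self.weight.t()           # (..., out_local)
            if not self.gather_output:
                return local_out
            shards = [torch.empty_like(local_out) for _ in range(self.world_size)]
            dist.all_gather(shards, local_out)        # rank order is preserved
            return torch.cat(shards, dim=-1)          # (..., out_features)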
FoolPlayer
d3bc530849
[shardformer] Refactor shardformer api ( #4001 )
...
* fix an error in readme
* simplify code
* refactor shardformer
* add todo
* remove slicer
* resolve code review
2023-07-04 16:05:01 +08:00
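The refactor above reshapes the user-facing API. A hypothetical usage sketch follows: ShardConfig and ShardFormer appear in the commits and readme, but the import path and method signatures shown here are assumptions, not confirmed API.

    # Hypothetical sketch; names beyond ShardConfig/ShardFormer are assumptions.
    from transformers import BertForSequenceClassification
    from colossalai.shardformer import ShardConfig, ShardFormer  # assumed path

    model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
    shardformer = ShardFormer(ShardConfig())        # assumed defaults
    sharded_model = shardformer.shard_model(model)  # assumed method name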
Frank Lee
611971248c
[device] support init device mesh from process group ( #3990 )
2023-07-04 16:05:01 +08:00
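A generic illustration of the idea in #3990: take the ranks of an existing process group and arrange them into a logical mesh. This is not ColossalAI's DeviceMesh API; it assumes torch >= 2.0 for get_process_group_ranks.

    import torch
    import torch.distributed as dist

    def mesh_from_group(pg, rows, cols):
        # Arrange an existing process group's ranks into a rows x cols mesh.
        ranks = dist.get_process_group_ranks(pg)  # torch >= 2.0
        assert len(ranks) == rows * cols
        return torch.tensor(ranks).reshape(rows, cols)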
FoolPlayer
a2f9af810d
[shardformer] fix an error in readme ( #3988 )
...
* fix an error in readme
* simplify code
2023-07-04 16:05:01 +08:00
FoolPlayer
f7774ec0f3
[Shardformer] Downstream bert ( #3979 )
...
* add dist dropout in model
* update docstring and bert policy with dropout
* refactor basepolicy and sharded, update bert
* update format
* update gpt2 policy
* update bert policy
* remove unused code
* update readme for new policy usage
* add downstream model of bert
* remove unused code
2023-07-04 16:05:01 +08:00
wukong1992
c1c672d0f0
[shardformer] support t5 model ( #3994 )
...
test t5
2023-07-04 16:05:01 +08:00
wukong1992
6b30dfb7ce
[shardformer] support llama model using shardformer ( #3969 )
...
adjust layer attr
2023-07-04 16:05:01 +08:00
FoolPlayer
45927d5527
[shardformer] Add dropout layer in shard model and refactor policy api ( #3949 )
...
* add dist dropout in model
* update docstring and bert policy with dropout
* refactor basepolicy and sharded, update bert
* update format
* update gpt2 policy
* update bert policy
* remove unused code
* update readme for new policy usage
2023-07-04 16:05:01 +08:00
FoolPlayer
a73130482d
[shardformer] Unit test ( #3928 )
...
* fix bug in slicer, add slicer unit test
* add dropout test
* use pid as dropout seed
* update dropout test with local pattern
* add todo
2023-07-04 16:05:01 +08:00
FoolPlayer
f1cb5ac6bf
[shardformer] Align bert value ( #3907 )
...
* add bert align test, fix dist loss bug
* forward and backward align
* add ignore index
* add shardformer CI
* add optional gather_output for users in shardconfig
* update readme with optional gather_output
* add dist crossentropy loss test, remove unused files
* remove unused file
* remove unused file
* rename the file
* polish code
2023-07-04 16:05:01 +08:00
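The dist crossentropy loss referenced above computes softmax cross entropy over vocab-sharded logits without gathering them. Below is a simplified sketch of that standard technique (Megatron-style), assuming each rank holds a contiguous vocab slice; it is not ColossalAI's exact code.

    import torch
    import torch.distributed as dist

    def dist_cross_entropy(logits, target, ignore_index=-100, group=None):
        """Cross entropy over vocab-sharded logits of shape (N, vocab/world).
        `target` holds global vocab ids; rows equal to ignore_index are skipped."""
        part = logits.size(-1)
        v_start = dist.get_rank(group) * part

        # subtract the global per-row max for numerical stability
        row_max = logits.max(dim=-1)[0]
        dist.all_reduce(row_max, op=dist.ReduceOp.MAX, group=group)
        logits = logits - row_max.unsqueeze(-1)

        # pull out the target logit where it lives on this rank, zero elsewhere
        on_rank = (target >= v_start) & (target < v_start + part)
        local_idx = (target - v_start).clamp(0, part - 1)
        picked = logits.gather(-1, local_idx.unsqueeze(-1)).squeeze(-1) * on_rank
        dist.all_reduce(picked, group=group)      # sum -> true target logit

        # global log-sum-exp denominator across all vocab shards
        denom = logits.exp().sum(dim=-1)
        dist.all_reduce(denom, group=group)

        loss = denom.log() - picked
        return loss[target != ignore_index].mean()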
FoolPlayer
79f8d5d54b
[shardformer] add gpt2 policy and modify shard and slicer to support ( #3883 )
...
* add gpt2 policy and modify shard and slicer to support
* remove unused code
* polish code
2023-07-04 16:05:01 +08:00
FoolPlayer
70173e3123
update README ( #3909 )
2023-07-04 16:05:01 +08:00
FoolPlayer
ab8a47f830
[shardformer] add Dropout layer to support different dropout patterns ( #3856 )
...
* add dropout layer, add dropout test
* modify seed manager as context manager
* add a copy of col_nn.layer
* add dist_crossentropy loss; separate module test
* polish the code
* fix dist crossentropy loss
2023-07-04 16:05:01 +08:00
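The seed manager above is made a context manager so dropout can be replicated or deliberately de-correlated across processes (see also "use pid as dropout seed" in #3928). A generic sketch of that pattern, not the actual seed-manager class:

    import contextlib
    import os
    import torch

    @contextlib.contextmanager
    def dropout_seed(base_seed, same_across_ranks=True):
        # Fork the RNG state, then seed it identically on every process
        # (replicated dropout) or per-process, e.g. offset by pid.
        offset = 0 if same_across_ranks else os.getpid()
        with torch.random.fork_rng():
            torch.manual_seed(base_seed + offset)
            yield

    # usage: de-correlated dropout on sharded activations
    # with dropout_seed(1234, same_across_ranks=False):
    #     y = torch.nn.functional.dropout(x, p=0.1)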
FoolPlayer
c594dc2f1c
[shardformer] update readme with module implementation docs ( #3834 )
...
* update readme with modules content
* remove img
2023-07-04 16:05:01 +08:00
Frank Lee
4972e1f40e
[shardformer] refactored the user api ( #3828 )
...
* [shardformer] refactored the user api
* polish code
2023-07-04 16:05:01 +08:00
Frank Lee
235792f170
[shardformer] updated readme ( #3827 )
2023-07-04 16:05:01 +08:00
FoolPlayer
8cc11235c0
[shardformer]: Feature/shardformer, add some docstring and readme ( #3816 )
...
* init shardformer code structure
* add implementation of the sharder (inject and replace)
* add implementation of replacing layers with Colossal layers
* separate per-layer policies, add some notes
* implement 1D and 2D slicers that can tell column from row
* fix bugs when slicing and injecting the model
* fix some bugs; add inference test example
* add share weight and train example
* add train
* add docstring and readme
* add docstring for other files
* pre-commit
2023-07-04 16:05:01 +08:00
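The "inject and replace" sharder named in the bullets above boils down to walking the module tree and swapping layers according to a policy. A generic sketch of that mechanism; the policy mapping shown is illustrative (MyParallelLinear is hypothetical), not ColossalAI's policy classes:

    import torch.nn as nn

    def replace_modules(model, policy):
        # policy maps a layer type to a constructor taking the old layer, e.g.
        # {nn.Linear: lambda m: MyParallelLinear(m.in_features, m.out_features)}
        for name, child in model.named_children():
            if type(child) in policy:
                setattr(model, name, policy[type(child)](child))
            else:
                replace_modules(child, policy)
        return model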
FoolPlayer
8d68de767d
[shardformer] init shardformer code structure ( #3731 )
...
* init shardformer code structure
* add implementation of the sharder (inject and replace)
* add implementation of replacing layers with Colossal layers
* separate per-layer policies, add some notes
* implement 1D and 2D slicers that can tell column from row
* fix bugs when slicing and injecting the model
* fix some bugs; add inference test example
2023-07-04 16:05:01 +08:00
Wenhao Chen
3d8d5d0d58
[chat] use official transformers and fix some issues ( #4117 )
...
* feat: remove on_learn_epoch fn as not used
* revert: add _on_learn_epoch fn
* feat: remove NaiveStrategy
* test: update train_prompts tests
* fix: remove prepare_llama_tokenizer_and_embedding
* test: add lora arg
* feat: remove roberta support in train_prompts due to runtime errors
* feat: remove deberta & roberta in rm as not used
* test: remove deberta and roberta tests
* feat: remove deberta and roberta models as not used
* fix: remove calls to roberta
* fix: remove prepare_llama_tokenizer_and_embedding
* chore: update transformers version
* docs: update transformers version
* fix: fix actor inference
* fix: fix ci
* feat: change llama pad token to unk
* revert: revert ddp setup_distributed
* fix: change llama pad token to unk
* revert: undo unnecessary changes
* fix: use pip to install transformers
2023-07-04 13:49:09 +08:00
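On the "change llama pad token to unk" bullets above: LLaMA tokenizers ship without a pad token, and reusing unk keeps padding ids inside the vocabulary. A minimal sketch with Hugging Face transformers (the model path is a placeholder):

    from transformers import LlamaTokenizer

    tokenizer = LlamaTokenizer.from_pretrained("/path/to/llama")  # placeholder
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.unk_token  # pad ids stay in-vocab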
Baizhou Zhang
1350ece492
[hotfix] fix import bug in checkpoint_io ( #4142 )
2023-07-03 22:14:37 +08:00
digger yu
8abc87798f
fix "Tensor is not defined" ( #4129 )
2023-07-03 17:10:18 +08:00
digger yu
7e46bc87b6
fix "CheckpointIndexFile is not defined" ( #4109 )
2023-07-03 17:09:06 +08:00
digger yu
09fe9dc704
[nfc] fix "ColossalaiOptimizer is not defined" ( #4122 )
2023-06-30 17:23:22 +08:00
Wenhao Chen
edd75a59ea
[chat] remove naive strategy and split colossalai strategy ( #4094 )
...
* feat: remove on_learn_epoch fn as not used
* revert: add _on_learn_epoch fn
* to: remove the use of NaiveStrategy
* test: remove NaiveStrategy tests
* feat: remove NaiveStrategy
* style: modify comments and params
* feat: split ColossalAIStrategy into LowLevelZeroStrategy and GeminiStrategy
* fix: remove naive
* fix: align with modified colossal strategy
* fix: fix ddp _try_init_dist arg
2023-06-29 18:11:00 +08:00
Wenhao Chen
b03d64d010
[chat] refactor trainer class ( #4080 )
...
* to: add SLTrainer
* refactor: refactor RMTrainer and SFTTrainer
* fix: fix init file
* feat: remove on_learn_epoch fn as not used
* fix: align with modified gemini arguments
* to: add OnPolicyTrainer
* revert: add _on_learn_epoch fn
* refactor: refactor PPOTrainer
* style: rename PPOTrainer argument
* fix: align with modified PPO arguments
* test: align with modified train_prompts arguments
* chore: modify train_prompts
* docs: align with modified arguments
* fix: remove unnecessary output
* fix: move dataloader to fit fn of SLTrainer
* fix: move dataloader to fit fn of OnPolicyTrainer
* fix: modify usage of prompt and pretrain dataloader
2023-06-29 10:48:09 +08:00
Jianghai
711e2b4c00
[doc] fix some typos and errors in docs ( #4107 )
...
* fix some typos and problems in doc
* fix some typos and problems in doc
* add doc test
2023-06-28 19:30:37 +08:00
digger yu
769cddcb2c
fix typos in docs/ ( #4033 )
2023-06-28 15:30:30 +08:00
digger yu
2d40759a53
fix #3852 path error ( #4058 )
2023-06-28 15:29:44 +08:00
Frank Lee
1ee947f617
[workflow] added status check for test coverage workflow ( #4106 )
2023-06-28 14:33:43 +08:00
Jianghai
31dc302017
[examples] copy resnet example to image ( #4090 )
...
* copy resnet example
* add pytest package
* skip test_ci
* skip test_ci
* skip test_ci
2023-06-27 16:40:46 +08:00
Frank Lee
95e95b6d58
[testing] move pytest to be inside the function ( #4087 )
2023-06-27 11:02:25 +08:00
Baizhou Zhang
4da324cd60
[hotfix] fix argument naming in docs and examples ( #4083 )
2023-06-26 23:50:04 +08:00
Michelle
e89b127d8e
[chat] fix possible bug in chat evaluation ( #4064 )
...
* fix chat eval
* fix utils
* fix utils
* add comment
---------
Co-authored-by: Qianran Ma <qianranm@luchentech.com>
2023-06-26 15:26:07 +08:00
Baizhou Zhang
2c8ae37f61
Merge pull request #4056 from Fridge003/hotfix/fix_gemini_chunk_config_searching
...
[gemini] Rename arguments in chunk configuration searching
2023-06-25 17:37:37 +08:00
Wenhao Chen
153b957a1b
[chat] refactor strategy class with booster api ( #3987 )
...
* refactor: adapt boost API in base and naive strategies
* fix: initialize plugin after setup_distributed
* fix: fix save_pretrained fn
* refactor: adapt boost API in DDPStrategy
* to: add _post_init check
* to: fix ddp backward, modify ddp dataloader and unwrap
* feat: adapt boost API in ColossalAIStrategy
* fix: call setup_distributed before using get_current_device
* fix: fix save_model and save_optimizer
* test: remove save_sharded_optimizer test
* style: apply formatter
* fix: fix stage check and add comments
* feat: allow dict type arg in strategy.prepare
* to: temporarily remove lr_scheduler for testing
* style: simplify init of ColossalAIStrategy
* fix: fix lr_scheduler in sft and rm
* style: modify comments
* test: add train_prompts tests
* fix: fix inference only case and use in train_prompts
* test: skip failed tests in ci
* style: fix CodeFactor check
* fix: do not use model.to('cpu') with GeminiPlugin
* test: enable colossalai_gemini tests
* test: set CUDA_VISIBLE_DEVICES in ci
* docs: add note
2023-06-25 17:36:21 +08:00
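The strategy refactor above moves chat training onto the Booster API. A minimal sketch of that flow under Gemini; plugin arguments are left at defaults and are assumptions, and the script must run under a distributed launcher (e.g. torchrun):

    import torch.nn as nn
    import colossalai
    from colossalai.booster import Booster
    from colossalai.booster.plugin import GeminiPlugin
    from colossalai.nn.optimizer import HybridAdam

    colossalai.launch_from_torch(config={})
    model = nn.Linear(16, 16)                   # stand-in for the real model
    optimizer = HybridAdam(model.parameters())  # Gemini expects HybridAdam
    booster = Booster(plugin=GeminiPlugin())
    model, optimizer, *_ = booster.boost(model, optimizer)
    # per the commit notes: do not call model.to('cpu') under GeminiPlugin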
Baizhou Zhang
0bb0b481b4
[gemini] fix argument naming during chunk configuration searching
2023-06-25 13:34:15 +08:00
Frank Lee
b463651f3e
[workflow] cover all public repositories in weekly report ( #4069 )
2023-06-22 14:41:25 +08:00
Hongxin Liu
4a81faa5f3
[devops] fix build on pr ci ( #4043 )
...
* [devops] fix build on pr ci
* [devops] fix build on pr ci
2023-06-19 17:12:56 +08:00
github-actions[bot]
a52f62082d
[format] applied code formatting on changed files in pull request 4021 ( #4022 )
...
Co-authored-by: github-actions <github-actions@github.com>
2023-06-19 11:23:24 +08:00
LuGY
160c64c645
[example] fix bucket size in example of gpt gemini ( #4028 )
2023-06-19 11:22:42 +08:00
digger yu
727c4598a9
[nfc] fix "dim is not defined" and a typo ( #3991 )
2023-06-19 11:21:55 +08:00
Frank Lee
ca768eb62d
Merge pull request #4025 from hpcaitech/develop
...
[sync] sync develop to main
2023-06-19 10:31:34 +08:00
Frank Lee
a5883aa790
[test] fixed codefactor format report ( #4026 )
2023-06-16 18:23:02 +08:00
Baizhou Zhang
822c3d4d66
[checkpointio] sharded optimizer checkpoint for DDP plugin ( #4002 )
2023-06-16 14:14:05 +08:00
Wenhao Chen
725af3eeeb
[booster] make optimizer argument optional for boost ( #3993 )
...
* feat: make optimizer optional in Booster.boost
* test: skip unet test if diffusers version > 0.10.2
2023-06-15 17:38:42 +08:00
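After #3993, everything past the model is optional in Booster.boost, which helps inference-only setups. A minimal sketch, assuming distributed init has already run:

    import torch.nn as nn
    from colossalai.booster import Booster
    from colossalai.booster.plugin import TorchDDPPlugin

    booster = Booster(plugin=TorchDDPPlugin())
    model = nn.Linear(8, 8)
    model, *_ = booster.boost(model)  # optimizer omitted; still returns a tuple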
Baizhou Zhang
c9cff7e7fa
[checkpointio] General Checkpointing of Sharded Optimizers ( #3984 )
2023-06-15 15:21:26 +08:00
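A minimal sketch of the sharded optimizer checkpointing introduced above, via the Booster checkpoint IO; the keyword names shard and size_per_shard are assumptions modeled on the commits and may differ by version.

    import torch
    import torch.nn as nn
    from colossalai.booster import Booster
    from colossalai.booster.plugin import TorchDDPPlugin

    booster = Booster(plugin=TorchDDPPlugin())  # assumes distributed init ran
    model = nn.Linear(8, 8)
    optimizer = torch.optim.Adam(model.parameters())
    model, optimizer, *_ = booster.boost(model, optimizer)
    booster.save_optimizer(optimizer, "ckpt/optim", shard=True, size_per_shard=1024)
    booster.load_optimizer(optimizer, "ckpt/optim")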
digger yu
d4fb7bfda7
fix typos in applications/Chat/coati/ ( #3947 )
2023-06-15 10:43:11 +08:00
Baizhou Zhang
e8ad3c88f5
[doc] add a note about unit-testing to CONTRIBUTING.md ( #3970 )
2023-06-14 16:32:39 +08:00
Yuanchen
2925f47399
[evaluate] support gpt evaluation with reference ( #3972 )
...
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
2023-06-13 15:12:29 +08:00
Frank Lee
8bcad73677
[workflow] fixed the directory check in build ( #3980 )
2023-06-13 14:42:35 +08:00