Commit Graph

2665 Commits (1edc9b5fb3f8a9c9ec5d71a62bb33914a0d5f0c4)

Author SHA1 Message Date
FoolPlayer 7740c55c55 support kit use for bert/gpt test (#4055)
* support kit use for bert test

* support kit test for gpt2
2023-07-04 16:05:01 +08:00
Frank Lee f22ddacef0 [shardformer] refactored the shardformer layer structure (#4053) 2023-07-04 16:05:01 +08:00
Frank Lee 58df720570 [shardformer] adapted T5 and LLaMa test to use kit (#4049)
* [shardformer] adapted T5 and LLaMa test to use kit

* polish code
2023-07-04 16:05:01 +08:00
FoolPlayer 4021b9a8a2 [shardformer] add gpt2 test and layer class refactor (#4041)
* add gpt2 test and layer class refactor

* add dropout in gpt2 policy
2023-07-04 16:05:01 +08:00
Frank Lee d857f3dbba [shardformer] supported T5 and its variants (#4045) 2023-07-04 16:05:01 +08:00
Frank Lee c1d5453e9f [shardformer] adapted llama to the new API (#4036) 2023-07-04 16:05:01 +08:00
FoolPlayer 74d176c8d8 [shardformer] fix bert and gpt downstream with new api (#4024)
* fix bert downstream with new api

* remove comment line
2023-07-04 16:05:01 +08:00
Frank Lee e253a07007 [shardformer] updated doc (#4016) 2023-07-04 16:05:01 +08:00
FoolPlayer df018fc305 support bert with new api 2023-07-04 16:05:01 +08:00
FoolPlayer 507c0ad368 add vocab embedding layer 2023-07-04 16:05:01 +08:00
Frank Lee 45d9384346 [shardformer] removed inplace tensor sharding (#4018) 2023-07-04 16:05:01 +08:00
Frank Lee 3893fa1a8d [shardformer] refactored embedding and dropout to parallel module (#4013)
* [shardformer] refactored embedding and dropout to parallel module

* polish code
2023-07-04 16:05:01 +08:00
FoolPlayer dfca9678fa integrate with dist layer (#4011) 2023-07-04 16:05:01 +08:00
Frank Lee 015af592f8 [shardformer] integrated linear 1D with dtensor (#3996)
* [shardformer] integrated linear 1D with dtensor

* polish code
2023-07-04 16:05:01 +08:00
FoolPlayer d3bc530849 [shardformer] Refactor shardformer api (#4001)
* fix an error in readme

* simplify code

* refactor shardformer

* add todo

* remove slicer

* resolve code review
2023-07-04 16:05:01 +08:00
Frank Lee 611971248c [device] support init device mesh from process group (#3990) 2023-07-04 16:05:01 +08:00
FoolPlayer a2f9af810d [shardformer] fix an error in readme (#3988)
* fix an error in readme

* simplify code
2023-07-04 16:05:01 +08:00
FoolPlayer f7774ec0f3 [Shardformer] Downstream bert (#3979)
* add dist dropout in model

* update docstring and bert policy with dropout

* refactor basepolicy and sharded, update bert

* update format

* update gpt2 policy

* update bert policy

* remove unused code

* update readme for new policy usage

* add downstream model of bert

* remove unused code
2023-07-04 16:05:01 +08:00
wukong1992 c1c672d0f0 [shardformer] shardformer support t5 model (#3994)
test t5
2023-07-04 16:05:01 +08:00
wukong1992 6b30dfb7ce [shardformer] support llama model using shardformer (#3969)
adjust layer attr
2023-07-04 16:05:01 +08:00
FoolPlayer 45927d5527 [shardformer] Add dropout layer in shard model and refactor policy api (#3949)
* add dist dropout in model

* update docstring and bert policy with dropout

* refactor basepolicy and sharded, update bert

* update format

* update gpt2 policy

* update bert policy

* remove unused code

* update readme for new policy usage
2023-07-04 16:05:01 +08:00
FoolPlayer a73130482d [shardformer] Unit test (#3928)
* fix bug in slicer, add slicer unit test

* add dropout test

* use pid as dropout seed

* update dropout test with local pattern

* add todo
2023-07-04 16:05:01 +08:00
FoolPlayer f1cb5ac6bf [shardformer] Align bert value (#3907)
* add bert align test, fix dist loss bug

* forward and backward align

* add ignore index

* add shardformer CI

* make gather_output optional for users in shardconfig

* update readme with optional gather_output

* add dist crossentropy loss test, remove unused files

* remove unused file

* remove unused file

* rename the file

* polish code
2023-07-04 16:05:01 +08:00
FoolPlayer 79f8d5d54b [shardformer] add gpt2 policy and modify shard and slicer to support (#3883)
* add gpt2 policy and modify shard and slicer to support

* remove unused code

* polish code
2023-07-04 16:05:01 +08:00
FoolPlayer 70173e3123 update README (#3909) 2023-07-04 16:05:01 +08:00
FoolPlayer ab8a47f830 [shardformer] add Dropout layer supporting different dropout patterns (#3856)
* add dropout layer, add dropout test

* modify seed manager as context manager

* add a copy of col_nn.layer

* add dist_crossentropy loss; separate module test

* polish the code

* fix dist crossentropy loss
2023-07-04 16:05:01 +08:00
FoolPlayer c594dc2f1c [shardformer] update readme with module implementation docs (#3834)
* update readme with modules content

* remove img
2023-07-04 16:05:01 +08:00
Frank Lee 4972e1f40e [shardformer] refactored the user api (#3828)
* [shardformer] refactored the user api

* polish code
2023-07-04 16:05:01 +08:00
Frank Lee 235792f170 [shardformer] updated readme (#3827) 2023-07-04 16:05:01 +08:00
FoolPlayer 8cc11235c0 [shardformer] Feature/shardformer: add docstrings and readme (#3816)
* init shardformer code structure

* add implementation of sharder (inject and replace)

* add implementation of replacing layers with colossal layers

* separate policies for different layers, add some notes

* implement 1D and 2D slicers that can tell column from row

* fix bugs when slicing and injecting the model

* fix some bugs; add inference test example

* add shared-weight and training example

* add train

* add docstring and readme

* add docstring for other files

* pre-commit
2023-07-04 16:05:01 +08:00
FoolPlayer 8d68de767d [shardformer] init shardformer code structure (#3731)
* init shardformer code structure

* add implementation of sharder (inject and replace)

* add implementation of replacing layers with colossal layers

* separate policies for different layers, add some notes

* implement 1D and 2D slicers that can tell column from row

* fix bugs when slicing and injecting the model

* fix some bugs; add inference test example
2023-07-04 16:05:01 +08:00
Wenhao Chen 3d8d5d0d58 [chat] use official transformers and fix some issues (#4117)
* feat: remove on_learn_epoch fn as not used

* revert: add _on_learn_epoch fn

* feat: remove NaiveStrategy

* test: update train_prompts tests

* fix: remove prepare_llama_tokenizer_and_embedding

* test: add lora arg

* feat: remove roberta support in train_prompts due to runtime errs

* feat: remove deberta & roberta in rm as not used

* test: remove deberta and roberta tests

* feat: remove deberta and roberta models as not used

* fix: remove calls to roberta

* fix: remove prepare_llama_tokenizer_and_embedding

* chore: update transformers version

* docs: update transformers version

* fix: fix actor inference

* fix: fix ci

* feat: change llama pad token to unk

* revert: revert ddp setup_distributed

* fix: change llama pad token to unk

* revert: undo unnecessary changes

* fix: use pip to install transformers
2023-07-04 13:49:09 +08:00
Baizhou Zhang 1350ece492 [hotfix] fix import bug in checkpoint_io (#4142) 2023-07-03 22:14:37 +08:00
digger yu 8abc87798f fix Tensor is not defined (#4129) 2023-07-03 17:10:18 +08:00
digger yu 7e46bc87b6 fix CheckpointIndexFile is not defined (#4109) 2023-07-03 17:09:06 +08:00
digger yu 09fe9dc704 [nfc] fix ColossalaiOptimizer is not defined (#4122) 2023-06-30 17:23:22 +08:00
Wenhao Chen edd75a59ea [chat] remove naive strategy and split colossalai strategy (#4094)
* feat: remove on_learn_epoch fn as not used

* revert: add _on_learn_epoch fn

* to: remove the use of NaiveStrategy

* test: remove NaiveStrategy tests

* feat: remove NaiveStrategy

* style: modify comments and params

* feat: split ColossalAIStrategy into LowLevelZeroStrategy and GeminiStrategy

* fix: remove naive

* fix: align with modified colossal strategy

* fix: fix ddp _try_init_dist arg
2023-06-29 18:11:00 +08:00
Wenhao Chen b03d64d010 [chat] refactor trainer class (#4080)
* to: add SLTrainer

* refactor: refactor RMTrainer and SFTTrainer

* fix: fix init file

* feat: remove on_learn_epoch fn as not used

* fix: align with modified gemini arguments

* to: add OnPolicyTrainer

* revert: add _on_learn_epoch fn

* refactor: refactor PPOTrainer

* style: rename PPOTrainer argument

* fix: align with modified PPO arguments

* test: align with modified train_prompts arguments

* chore: modify train_prompts

* docs: align with modified arguments

* fix: remove unnecessary output

* fix: move dataloader to fit fn of SLTrainer

* fix: move dataloader to fit fn of OnPolicyTrainer

* fix: modify usage of prompt and pretrain dataloader
2023-06-29 10:48:09 +08:00
Jianghai 711e2b4c00 [doc] update and revise some typos and errors in docs (#4107)
* fix some typos and problems in doc

* fix some typos and problems in doc

* add doc test
2023-06-28 19:30:37 +08:00
digger yu 769cddcb2c fix typo in docs/ (#4033) 2023-06-28 15:30:30 +08:00
digger yu 2d40759a53 fix #3852 path error (#4058) 2023-06-28 15:29:44 +08:00
Frank Lee 1ee947f617 [workflow] added status check for test coverage workflow (#4106) 2023-06-28 14:33:43 +08:00
Jianghai 31dc302017 [examples] copy resnet example to image (#4090)
* copy resnet example

* add pytest package

* skip test_ci

* skip test_ci

* skip test_ci
2023-06-27 16:40:46 +08:00
Frank Lee 95e95b6d58 [testing] move pytest to be inside the function (#4087) 2023-06-27 11:02:25 +08:00
Baizhou Zhang 4da324cd60 [hotfix] fix argument naming in docs and examples (#4083) 2023-06-26 23:50:04 +08:00
Michelle e89b127d8e [chat] fix possible bug in chat evaluation (#4064)
* fix chat eval

* fix utils

* fix utils

* add comment

---------

Co-authored-by: Qianran Ma <qianranm@luchentech.com>
2023-06-26 15:26:07 +08:00
Baizhou Zhang 2c8ae37f61 Merge pull request #4056 from Fridge003/hotfix/fix_gemini_chunk_config_searching
[gemini] Rename arguments in chunk configuration searching
2023-06-25 17:37:37 +08:00
Wenhao Chen 153b957a1b [chat] refactor strategy class with booster api (#3987)
* refactor: adapt boost API in base and naive strategies

* fix: initialize plugin after setup_distributed

* fix: fix save_pretrained fn

* refactor: adapt boost API in DDPStrategy

* to: add _post_init check

* to: fix ddp backward, modify ddp dataloader and unwrap

* feat: adapt boost API in ColossalAIStrategy

* fix: call setup_distributed before use get_current_device

* fix: fix save_model and save_optimizer

* test: remove save_sharded_optimizer test

* style: apply formatter

* fix: fix stage check and add comments

* feat: allow dict type arg in strategy.prepare

* to: temporarily remove lr_scheduler for testing

* style: simplify init of ColossalAIStrategy

* fix: fix lr_scheduler in sft and rm

* style: modify comments

* test: add train_prompts tests

* fix: fix inference only case and use in train_prompts

* test: skip failed tests in ci

* style: fix CodeFactor check

* fix: do not use model.to('cpu') with GeminiPlugin

* test: enable colossalai_gemini tests

* test: set CUDA_VISIBLE_DEVICES in ci

* docs: add note
2023-06-25 17:36:21 +08:00
Baizhou Zhang 0bb0b481b4 [gemini] fix argument naming during chunk configuration searching 2023-06-25 13:34:15 +08:00
Frank Lee b463651f3e [workflow] cover all public repositories in weekly report (#4069) 2023-06-22 14:41:25 +08:00