Baizhou Zhang
38ccb8b1a3
[shardformer] support from_pretrained when loading model with HybridParallelPlugin ( #4575 )
* hybrid plugin support huggingface from_pretrained
* add huggingface compatibility tests
* add folder cleaning
* fix bugs
1 year ago
Baizhou Zhang
c9625dbb63
[shardformer] support sharded optimizer checkpointIO of HybridParallelPlugin ( #4540 )
* implement sharded optimizer saving
* add more param info
* finish implementation of sharded optimizer saving
* fix bugs in optimizer sharded saving
* add pp+zero test
* param group loading
* greedy loading of optimizer
* fix bug when loading
* implement optimizer sharded saving
* add optimizer test & arrange checkpointIO utils
* fix gemini sharding state_dict
* add verbose option
* add loading of master params
* fix typehint
* fix master/working mapping in fp16 amp
1 year ago
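The sharded optimizer saving and "greedy loading" mentioned in the commit above can be pictured as size-capped shards plus an index mapping each entry to its shard. A minimal sketch (hypothetical names and layout, not the Colossal-AI checkpointIO implementation):

```python
# Hypothetical sketch of sharded checkpoint saving: greedily pack state
# entries into shards no larger than max_shard_bytes, and record an index
# so loading can open only the shard that holds a wanted entry.

def shard_state_dict(state, max_shard_bytes):
    """Greedily pack entries (name -> raw bytes) into size-capped shards."""
    shards, index = [], {}
    current, current_size = {}, 0
    for name, blob in state.items():
        size = len(blob)
        if current and current_size + size > max_shard_bytes:
            shards.append(current)          # flush the full shard
            current, current_size = {}, 0
        current[name] = blob
        current_size += size
        index[name] = len(shards)           # shard id this entry lands in
    if current:
        shards.append(current)
    return shards, index

def load_param(shards, index, name):
    """Greedy loading: touch only the shard containing the wanted entry."""
    return shards[index[name]][name]

state = {"w1": b"\x00" * 6, "w2": b"\x00" * 6, "b1": b"\x00" * 2}
shards, index = shard_state_dict(state, max_shard_bytes=8)
```

With an 8-byte cap, `w1` fills the first shard and `w2`/`b1` pack into the second, so loading `b1` never reads the first shard.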
Baizhou Zhang
2c787d7f47
[shardformer] fix submodule replacement bug when enabling pp ( #4544 )
1 year ago
Hongxin Liu
c7b60f7547
[devops] cancel previous runs in the PR ( #4546 )
1 year ago
Tian Siyuan
f1ae8c9104
[example] change accelerate version ( #4431 )
Co-authored-by: Siyuan Tian <siyuant@vmware.com>
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
1 year ago
ChengDaqi2023
8e2e1992b8
[example] update streamlit 0.73.1 to 1.11.1 ( #4386 )
1 year ago
flybird11111
ec18fc7340
[shardformer] support pp+tp+zero1 tests ( #4531 )
* [shardformer] fix opt test hanging
* fix
* test
* fix test
* remove print
* add fix
* [shardformer] pp+tp+zero1
1 year ago
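A pp+tp+zero1 run lays ranks out on a 3D grid. A minimal sketch of one possible layout (hypothetical ordering, not Colossal-AI's actual process-group construction), with tp fastest-varying, then pp, then dp:

```python
# Hypothetical 3D rank layout for pp+tp+zero1: decompose a global rank
# into (dp, pp, tp) coordinates; ZeRO-1 shards optimizer states inside
# each data-parallel group.

def rank_coords(rank, tp_size, pp_size, world_size):
    assert world_size % (tp_size * pp_size) == 0, "world size must factor"
    tp_rank = rank % tp_size                      # fastest-varying axis
    pp_rank = (rank // tp_size) % pp_size
    dp_rank = rank // (tp_size * pp_size)         # ZeRO-1 group index
    return dp_rank, pp_rank, tp_rank

# 8 GPUs with tp=2, pp=2 leaves dp (ZeRO-1) size 2
coords = [rank_coords(r, 2, 2, 8) for r in range(8)]
```

Every rank gets a unique coordinate, and each (dp, pp) pair holds a complete tp group.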
Lufang Chen
12c95a9fed
fix runtime prepare pass ( #4502 )
Co-authored-by: lufang.chen <lufang.chen@nio.com>
1 year ago
Ying Liu
9f852f2489
keep requirements the same as the main branch
1 year ago
flybird11111
d367b88785
[shardformer] fix opt test hanging ( #4521 )
* [shardformer] fix opt test hanging
* fix
* test
* fix test
* remove print
* add fix
1 year ago
Ying Liu
c648dc093f
fix colossalai version in coati examples
1 year ago
yingliu-hpc
661a1ef712
Merge pull request #4541 from ver217/coati/chatglm
[coati] update ci
1 year ago
ver217
1c43bfd54e
[coati] update ci
1 year ago
Bin Jia
e241b74f24
[shardformer] Add overlap support for gpt2 ( #4535 )
* add overlap support for gpt2
* remove unused code
* remove unused code
1 year ago
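The overlap idea above can be demonstrated without any GPU: start a (simulated) all-gather on a background worker and do independent local computation while it is in flight, synchronizing only at the true dependency. A hedged sketch, not the GPT-2 policy code:

```python
# Hypothetical overlap sketch: a fake all-gather runs on a worker thread
# while the main thread does independent computation, so the two latencies
# overlap instead of adding up.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_all_gather(shard, latency=0.1):
    time.sleep(latency)      # stand-in for communication time
    return shard * 4         # pretend 4 equal shards were gathered

def local_compute(x, latency=0.1):
    time.sleep(latency)      # independent work needing no comm result
    return x + 1

with ThreadPoolExecutor(max_workers=1) as pool:
    start = time.perf_counter()
    fut = pool.submit(fake_all_gather, 10)   # comm starts in the background
    y = local_compute(3)                     # overlapped computation
    gathered = fut.result()                  # wait only at the dependency
    elapsed = time.perf_counter() - start
```

Overlapped, the two 0.1 s waits run concurrently, so the step takes roughly 0.1 s rather than 0.2 s.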
yingliu-hpc
1467e3b41b
[coati] add chatglm model ( #4539 )
* update configuration of chatglm and add support in coati
* add unit test & update chatglm default config & fix bos index issue
* remove chatglm due to oom
* add dataset pkg in requirement-text
* fix parameter issue in test_models
* add ref in tokenize & remove unnecessary parts
* separate source & target tokenization in chatglm
* add unit test to chatglm
* fix test dataset issue
* update truncation of chatglm
* fix Colossalai version
* fix colossal ai version in test
1 year ago
Baizhou Zhang
0387a47e63
[shardformer] fix emerged bugs after updating transformers ( #4526 )
1 year ago
Hongxin Liu
0b00def881
[example] add llama2 example ( #4527 )
* [example] transfer llama-1 example
* [example] fit llama-2
* [example] refactor scripts folder
* [example] fit new gemini plugin
* [cli] fix multinode runner
* [example] fit gemini optim checkpoint
* [example] refactor scripts
* [example] update requirements
* [example] update requirements
* [example] rename llama to llama2
* [example] update readme and pretrain script
* [example] refactor scripts
1 year ago
Bin Jia
c554b7f559
[shardformer/fix overlap bug] fix overlap bug, add overlap as an option in shardco… ( #4516 )
* fix overlap bug and support bert, add overlap as an option in shardconfig
* support overlap for chatglm and bloom
1 year ago
Jianghai
376533a564
[shardformer] zero1+pp and the corresponding tests ( #4517 )
* pause
* finish pp+zero1
* Update test_shard_vit.py
1 year ago
Baizhou Zhang
44eab2b27f
[shardformer] support sharded checkpoint IO for models of HybridParallelPlugin ( #4506 )
* add APIs
* implement save_sharded_model
* add test for hybrid checkpointio
* implement naive loading for sharded model
* implement efficient sharded model loading
* open a new file for hybrid checkpoint_io
* small fix
* fix circular importing
* fix docstring
* arrange arguments and apis
* small fix
1 year ago
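The "naive vs. efficient" sharded-model loading distinction above comes down to whether the loader scans every shard or consults an index first. A minimal sketch (hypothetical file layout, not the Colossal-AI checkpoint_io API):

```python
# Hypothetical efficient sharded loading: a weight_map index (param name
# -> shard name) lets the loader read only the shards that actually hold
# the requested parameters.

def load_params(shard_files, weight_map, wanted):
    """Return the wanted tensors plus the set of shards actually read."""
    needed = {weight_map[name] for name in wanted}   # shards required
    opened = {s: shard_files[s] for s in needed}     # stand-in for file reads
    return {n: opened[weight_map[n]][n] for n in wanted}, needed

shard_files = {
    "shard_0": {"embed.weight": [1, 2], "h.0.w": [3]},
    "shard_1": {"h.1.w": [4], "lm_head.weight": [5]},
}
weight_map = {"embed.weight": "shard_0", "h.0.w": "shard_0",
              "h.1.w": "shard_1", "lm_head.weight": "shard_1"}

loaded, touched = load_params(shard_files, weight_map, ["h.1.w"])
```

Requesting only `h.1.w` touches a single shard; a naive loader would have opened both.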
flybird11111
de8a65babc
[shardformer] opt fix. ( #4514 )
* [shardformer] chatglm support sequence parallel
* fix
* [shardformer] jit fused fix
* activate checks
* [Test] test ci
* fix
1 year ago
LuGY
839847b7d7
[zero]support zero2 with gradient accumulation ( #4511 )
* support gradient accumulation with zero2
* fix type
1 year ago
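Gradient accumulation under ZeRO-2 means each rank accumulates its local gradients over micro-batches and communicates only once, at the accumulation boundary, where gradients are averaged and scattered so each rank keeps just its shard. A hedged sketch of that data flow (not the actual ZeRO implementation):

```python
# Hypothetical ZeRO-2 + gradient-accumulation sketch: per-micro-batch work
# is purely local; one reduce-scatter (average then shard) happens at the
# accumulation boundary.

def reduce_scatter(grads_per_rank):
    """Average full gradients across ranks; rank i keeps element i."""
    n = len(grads_per_rank)
    dim = len(grads_per_rank[0])
    avg = [sum(g[i] for g in grads_per_rank) / n for i in range(dim)]
    return [avg[i] for i in range(n)]    # shard i goes to rank i

def train_step(micro_grads_per_rank):
    # 1) local accumulation, no communication per micro-batch
    acc = []
    for rank_grads in micro_grads_per_rank:
        total = [0.0] * len(rank_grads[0])
        for g in rank_grads:
            total = [a + b for a, b in zip(total, g)]
        acc.append(total)
    # 2) single reduce-scatter at the boundary
    return reduce_scatter(acc)

# 2 ranks, 2 micro-batches each, 2 gradient elements
shards = train_step([
    [[1.0, 2.0], [3.0, 4.0]],   # rank 0 micro-batch grads
    [[5.0, 6.0], [7.0, 8.0]],   # rank 1 micro-batch grads
])
```

Rank 0 accumulates [4, 6] and rank 1 accumulates [12, 14]; the boundary reduce-scatter averages to [8, 10] and hands each rank its one-element shard.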
github-actions[bot]
c0efc3ebcb
[format] applied code formatting on changed files in pull request 4479 ( #4504 )
Co-authored-by: github-actions <github-actions@github.com>
1 year ago
flybird11111
3353e55c80
[shardformer] vit/llama/t5 ignore the sequence parallelism flag and some fix. ( #4498 )
* [shardformer] chatglm support sequence parallel
* fix
* [shardformer] jit fused fix
* activate checks
1 year ago
Hongxin Liu
27061426f7
[gemini] improve compatibility and add static placement policy ( #4479 )
* [gemini] remove distributed-related part from colotensor (#4379 )
* [gemini] remove process group dependency
* [gemini] remove tp part from colo tensor
* [gemini] patch inplace op
* [gemini] fix param op hook and update tests
* [test] remove useless tests
* [test] remove useless tests
* [misc] fix requirements
* [test] fix model zoo
* [test] fix model zoo
* [test] fix model zoo
* [test] fix model zoo
* [test] fix model zoo
* [misc] update requirements
* [gemini] refactor gemini optimizer and gemini ddp (#4398 )
* [gemini] update optimizer interface
* [gemini] renaming gemini optimizer
* [gemini] refactor gemini ddp class
* [example] update gemini related example
* [example] update gemini related example
* [plugin] fix gemini plugin args
* [test] update gemini ckpt tests
* [gemini] fix checkpoint io
* [example] fix opt example requirements
* [example] fix opt example
* [example] fix opt example
* [example] fix opt example
* [gemini] add static placement policy (#4443 )
* [gemini] add static placement policy
* [gemini] fix param offload
* [test] update gemini tests
* [plugin] update gemini plugin
* [plugin] update gemini plugin docstr
* [misc] fix flash attn requirement
* [test] fix gemini checkpoint io test
* [example] update resnet example result (#4457 )
* [example] update bert example result (#4458 )
* [doc] update gemini doc (#4468 )
* [example] update gemini related examples (#4473 )
* [example] update gpt example
* [example] update dreambooth example
* [example] update vit
* [example] update opt
* [example] update palm
* [example] update vit and opt benchmark
* [hotfix] fix bert in model zoo (#4480 )
* [hotfix] fix bert in model zoo
* [test] remove chatglm gemini test
* [test] remove sam gemini test
* [test] remove vit gemini test
* [hotfix] fix opt tutorial example (#4497 )
* [hotfix] fix opt tutorial example
* [hotfix] fix opt tutorial example
1 year ago
Jianghai
e04436a82a
[shardformer] tests for 3d parallel ( #4493 )
1 year ago
flybird11111
59e252ecdb
[shardformer] chatglm support sequence parallel ( #4482 )
* [shardformer] chatglm support sequence parallel
* fix
1 year ago
Bin Jia
351351a36e
[shardformer/sequence parallel] not support opt of seq-parallel, add warning and fix a bug in gpt2 pp ( #4488 )
1 year ago
Jianghai
5545114fd8
rename chatglm to chatglm2 ( #4484 )
1 year ago
Michelle
285fe7ba71
[chat] update config and prompt ( #4139 )
* update config and prompt
* update config
---------
Co-authored-by: Qianran Ma <qianranm@luchentech.com>
1 year ago
Baizhou Zhang
1c7df566e2
[shardformer] support tp+zero for shardformer ( #4472 )
* support tp+zero/input type cast for hybridplugin
* add tp+zero tests
* fix bucket arguments
1 year ago
Jianghai
8739aa7fa0
[shardformer] Pipeline/whisper ( #4456 )
* add some base tests and policies
* finish whisper base model
* add conditional generation
* finish basic tests
* whisper
* finish whisper
* finish whisper
* del useless whisper test
* fix
* add argmin to replace
* finish revision
1 year ago
flybird11111
a27e0bb494
[shardformer] bert support sequence parallel. ( #4455 )
* [shardformer] bert support sequence parallel
1 year ago
flybird11111
0ecd71e041
[shardformer] bloom support sequence parallel ( #4465 )
1 year ago
Bin Jia
7c8be77081
[shardformer/sequence parallel] support gpt2 seq parallel with pp/dp/tp ( #4460 )
* support gpt2 seq parallel with pp/dp/tp
* fix a bug when waiting for stream done
* delete unused gpt2_seq file
1 year ago
LuGY
a78daf6180
[shardformer] support interleaved pipeline ( #4448 )
* support interleaved pipeline
* fix unit test
* remove virtual stage test in stage mgr
* add dropped type hint and update bwd
1 year ago
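In an interleaved pipeline, each physical stage owns several non-contiguous layer chunks ("virtual stages"), which shrinks the pipeline bubble. One common assignment (a hypothetical sketch, not the Colossal-AI stage manager) places chunk k on stage k mod p:

```python
# Hypothetical interleaved-pipeline layer assignment: with pp_size stages
# and num_chunks virtual chunks per stage, chunk k lives on stage
# k % pp_size, so each stage holds non-contiguous layer ranges.

def interleaved_assignment(num_layers, pp_size, num_chunks):
    chunks = pp_size * num_chunks
    assert num_layers % chunks == 0, "layers must divide evenly into chunks"
    per_chunk = num_layers // chunks
    stage_layers = {s: [] for s in range(pp_size)}
    for k in range(chunks):
        start = k * per_chunk
        stage_layers[k % pp_size].append(list(range(start, start + per_chunk)))
    return stage_layers

# 8 layers, 2 stages, 2 virtual chunks per stage
layout = interleaved_assignment(8, 2, 2)
```

Stage 0 holds layers [0, 1] and [4, 5], stage 1 holds [2, 3] and [6, 7], so micro-batches revisit each device twice per pass.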
Hongxin Liu
26e29d58f0
[devops] add large-scale distributed test marker ( #4452 )
* [test] remove cpu marker
* [test] remove gpu marker
* [test] update pytest markers
* [ci] update unit test ci
1 year ago
Baizhou Zhang
6ef33f75aa
[shardformer] support DDP in HybridPlugin/add tp+dp tests ( #4446 )
* support DDP for HybridPlugin/add tp+dp tests
* add docstring for HybridParallelPlugin
1 year ago
Bin Jia
424629fea0
[shardformer/sequence parallel] Cherry pick commit to new branch ( #4450 )
* [shardformer/sequence parallel] Support sequence parallel for gpt2 (#4384 )
* [sequence parallel] add sequence parallel linear col/row support (#4336 )
* add sequence parallel linear col/row support
* add annotation
* add annotation
* add support for gpt2 fused qkv linear layer
* support sequence parallel in GPT2
* add docstring and note
* add requirements
* remove unused flash-attn
* modify flash attn test
* modify flash attn setting
* modify flash attn code
* add assert before divide, rename forward function
* [shardformer/test] fix gpt2 test with seq-parallel
* [shardformer/sequence parallel] Overlap input gather and grad computation during col backward (#4401 )
* overlap gather input / grad computing during col backward
* modify test for overlap
* simplify code
* fix code and modify cuda stream synchronize
* [shardformer/sequence parallel] polish code
1 year ago
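The sequence-parallel linear col/row support above rests on one data-movement pattern: activations are split along the sequence dimension across tp ranks, all-gathered before a column-parallel linear, and the row-parallel partial outputs are reduce-scattered back to shards. A toy sketch of that pattern (hypothetical helpers, not the Colossal-AI layers):

```python
# Hypothetical sequence-parallel data movement: split along the sequence
# dim, all-gather before the column linear, reduce-scatter after the row
# linear so each rank again holds only its sequence shard.

def split_seq(hidden, tp_size):
    """hidden: list of per-token activations -> one shard per rank."""
    assert len(hidden) % tp_size == 0
    per = len(hidden) // tp_size
    return [hidden[r * per:(r + 1) * per] for r in range(tp_size)]

def all_gather_seq(shards):
    """Every rank reconstructs the full sequence before the column linear."""
    return [tok for shard in shards for tok in shard]

def reduce_scatter_seq(partials_per_rank, tp_size):
    """Sum rank-partial full sequences, then keep one shard per rank."""
    seq_len = len(partials_per_rank[0])
    summed = [sum(p[i] for p in partials_per_rank) for i in range(seq_len)]
    return split_seq(summed, tp_size)

hidden = [1, 2, 3, 4]             # toy "sequence" of scalar activations
shards = split_seq(hidden, 2)     # rank0: [1, 2], rank1: [3, 4]
full = all_gather_seq(shards)     # both ranks see [1, 2, 3, 4]
out_shards = reduce_scatter_seq([full, full], 2)  # partials from 2 ranks
```

The overlap commit in this entry pipelines the `all_gather_seq` step with independent gradient computation during the column-linear backward.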
github-actions[bot]
d20dceb9a3
[format] applied code formatting on changed files in pull request 4441 ( #4445 )
Co-authored-by: github-actions <github-actions@github.com>
1 year ago
ver217
5d4efdf58f
[shardformer] fix import
1 year ago
ver217
73a4144b91
[shardformer] fix embedding
1 year ago
ver217
922302263b
[misc] update requirements
1 year ago
Hongxin Liu
172f7fa3cf
[misc] resolve code factor issues ( #4433 )
1 year ago
flybird11111
328a791d10
[shardformer] update bloom/llama/vit/chatglm tests ( #4420 )
[shardformer] update bloom/llama/vit/chatglm tests
[shardformer] update opt tests
1 year ago
flybird11111
108e54a0b4
[shardformer]update t5 tests for using all optimizations. ( #4407 )
* [shardformer] gpt2 tests fix
* [shardformer] test all optimizations (#4399)
* [shardformer] update t5 to use all optimizations
1 year ago
flybird11111
1edc9b5fb3
[shardformer] update tests for all optimization ( #4413 )
1 year ago
Baizhou Zhang
7711bd524a
[shardformer] rewrite tests for opt/bloom/llama/vit/chatglm ( #4395 )
* rewrite opt tests
* rewrite llama tests
* rewrite bloom & vit tests
* rewrite chatglm tests
* fix LinearCol for classifiers
* add checks for other tp layers, fix lazy init in util
1 year ago
flybird11111
21e0a42fd1
[shardformer]fix, test gpt2 for AMP+TP ( #4403 )
* [shardformer] gpt2 tests fix
* [shardformer] test all optimizations (#4399)
1 year ago
Jianghai
7596e9ae08
[pipeline] rewrite bert tests and fix some bugs ( #4409 )
* add pipeline policy and bert forward to be done
* add bertmodel pipeline forward and make tests
* add Bert_Policy and test for policy
* update formatting
* update formatting
* update the code
* fix bugs
* fix name conflict
* add bloom model and policy, revise the base class of policy
* revise
* add bert_for_pretraining
* add bert_for_pretraining forward and policy
* fix typos
* cancel warning
* change the immediate output to default dict
* change the default output of get_shared_params
* rewrite bert test
* fix some bugs
* del pipeline tests
* del useless print
* rewrite data repeats
1 year ago