* fix bug with mismatched position id lengths across PP stages
* fix typo
* fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* use one cross-entropy function for all shardformer models
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [moe] remove openmoe-coupled code and rectify Mixtral code (#5471)
* [Feature] MoE refactor; integration with Mixtral (#5682)
* cherry-pick from the refractor-moe branch
* tests passed
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* support ep + zero
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* add mixtral auto policy & move pipeline forward code to modeling folder
* [moe refactor] modify kernel test without Route Class
* [moe refactor] add moe tensor test path environment variable to github workflow
* fix typos
* fix moe test bug due to the code rebase
* [moe refactor] fix moe zero test, and little bug in low level zero
* fix typo
* add moe tensor path to github workflow
* remove some useless code
* fix typo & unify global variable XX_AXIS logic without using -1
* fix typo & prettify the code
* remove print code & support zero 2 test
* remove useless code
* rename function
* fix typo
* fix typo
* Further improve the test code
* remove print code
* [moe refactor] change test model from fake moe model to mixtral moe layer and remove useless test
* [moe refactor] skip some unit test which will be refactored later
* [moe refactor] fix unit test import error
* [moe refactor] fix circular import issues
* [moe refactor] remove debug code
* [moe refactor] update github workflow
* [moe/zero] refactor low level optimizer (#5767)
* [zero] refactor low level optimizer
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] MoE refactor with newest version of ZeRO (#5801)
* [zero] remove redundant members in BucketStore (#5802)
* [zero] align api with previous version
* [Moe/Zero] Update MoeHybridParallelPlugin with refactored ZeRO and Fix Zero bug (#5819)
* [moe refactor] update unit test with the refactored ZeRO and remove useless test
* move moe checkpoint to checkpoint folder and change the global axis into a class member
* update moe hybrid parallel plugin with newest version of zero & fix zero working/master params bug
* fix zero unit test
* Add an assertion to prevent users from using it incorrectly
* [hotfix]Solve the compatibility issue of zero refactor (#5823)
* [moe refactor] update unit test with the refactored ZeRO and remove useless test
* move moe checkpoint to checkpoint folder and change the global axis into a class member
* update moe hybrid parallel plugin with newest version of zero & fix zero working/master params bug
* fix zero unit test
* Add an assertion to prevent users from using it incorrectly
* Modify function parameter names to resolve compatibility issues
* [zero] fix missing hook removal (#5824)
* [MoE] Resolve .github conflict (#5829)
* [Fix/Example] Fix Llama Inference Loading Data Type (#5763)
* [fix/example] fix llama inference loading dtype
* revise loading dtype of benchmark llama3
* [release] update version (#5752)
* [release] update version
* [devops] update compatibility test
* [devops] update compatibility test
* [devops] update compatibility test
* [devops] update compatibility test
* [test] fix ddp plugin test
* [test] fix gptj and rpc test
* [devops] fix cuda ext compatibility
* [inference] fix flash decoding test
* [inference] fix flash decoding test
* fix (#5765)
* [test] Fix/fix testcase (#5770)
* [fix] branch for fix testcase;
* [fix] fix test_analyzer & test_auto_parallel;
* [fix] remove local change about moe;
* [fix] rm local change moe;
* [Hotfix] Add missing init file in inference.executor (#5774)
* [CI/tests] simplify some test cases to reduce testing time (#5755)
* [ci/tests] simplify some test cases to reduce testing time
* [ci/tests] continue to remove test cases to reduce ci time cost
* restore some test config
* [ci/tests] continue to reduce ci time cost
* [misc] update dockerfile (#5776)
* [misc] update dockerfile
* [misc] update dockerfile
* [devops] fix docker ci (#5780)
* [Inference]Add Streaming LLM (#5745)
* Add Streaming LLM
* add some parameters to llama_generation.py
* verify streamingllm config
* add test_streamingllm.py
* modified according to review comments
* add Citation
* change _block_tables tolist
* [hotfix] fix llama flash attention forward (#5777)
* [misc] Accelerate CI for zero and dist optim (#5758)
* remove fp16 from lamb
* remove d2h copy in checking states
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [Test/CI] remove test cases to reduce CI duration (#5753)
* [test] smaller gpt2 test case
* [test] reduce test cases: tests/test_zero/test_gemini/test_zeroddp_state_dict.py
* [test] reduce test cases: tests/test_zero/test_gemini/test_grad_accum.py
* [test] reduce test cases tests/test_zero/test_gemini/test_optim.py
* Revert "[test] smaller gpt2 test case"
Some tests might depend on the size of the model (number of chunks)
This reverts commit df705a5210.
* [test] reduce test cases: tests/test_checkpoint_io/test_gemini_checkpoint_io.py
* [CI] smaller test model for the two modified cases
* [CI] hardcode gpt model for tests/test_zero/test_gemini/test_search.py since we need a fixed answer there
* [hotfix] fix testcase in test_fx/test_tracer (#5779)
* [fix] branch for fix testcase;
* [fix] fix test_analyzer & test_auto_parallel;
* [fix] remove local change about moe;
* [fix] rm local change moe;
* [fix] fix test_deepfm_model & test_dlrf_model;
* [fix] fix test_hf_albert & test_hf_gpt;
* [gemini] optimize reduce scatter d2h copy (#5760)
* [gemini] optimize reduce scatter d2h copy
* [fix] fix missing reduce variable
* [refactor] remove legacy async reduce scatter code
* [gemini] missing sync
* Revert "[refactor] remove legacy async reduce scatter code"
This reverts commit 58ad76d466.
* [gemini] further optimize with async all reduce
* [fix] pass flag from manager to chunk
* Allow building cuda extension without a device. (#5535)
Added FORCE_CUDA environment variable support to enable building extensions when no GPU device is present but the CUDA libraries are (a hedged sketch follows).
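A minimal sketch of how such an override is commonly wired into a setup script; the helper name `should_build_cuda_ext` and the exact check are assumptions, not the ColossalAI implementation:

```python
import os
import torch

def should_build_cuda_ext() -> bool:
    # Hypothetical helper: build CUDA kernels when FORCE_CUDA=1 is set
    # explicitly (e.g. on a CPU-only build node that still has the CUDA
    # toolkit installed), or when a GPU is actually visible.
    if os.environ.get("FORCE_CUDA", "0") == "1":
        return True
    return torch.cuda.is_available()
```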
* [misc] fix dist logger (#5782)
* [install]fix setup (#5786)
* fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [misc] update requirements (#5787)
* [shardformer] fix import (#5788)
* upgrade colossal-chat to support tp_group>1, add sp for sft
* upgrade ppo dpo rm script
* run pre-commit
* update ci tests; sft ci test cases passed; tp failed in generation for ppo; sp is buggy
* fix training script
* fix ci
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix transformers version
* remove duplicated test
* fix datasets version
* remove models that require huggingface auth from ci
* remove local data path
* update ci
* remove baichuan from template test due to transformer version conflict
* merge
* Refactor modeling by adding attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Fix tests and naming
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Pass inference model shard configs for module init
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Clean up
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* replace the customized dataloader setup with the built-in one
* replace the customized dataloader setup with the built-in one
* Remove flash attention backend
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* fix readme
* Fix test import
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* update sft training script
* [Inference]refactor baichuan (#5791)
* refactor baichuan
* remove unused code and add TODO for lazyinit
* [test] fix chatglm test kit (#5793)
* [shardformer] fix modeling of bloom and falcon (#5796)
* [test] fix qwen2 pytest distLarge (#5797)
* [Inference] Fix flash-attn import and add model test (#5794)
* Fix torch int32 dtype
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Fix flash-attn import
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Add generalized model test
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Remove exposed path to model
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Add default value for use_flash_attn
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* Rename model test
Signed-off-by: char-1ee <xingjianli59@gmail.com>
---------
Signed-off-by: char-1ee <xingjianli59@gmail.com>
* [Gemini] Use async stream to prefetch and h2d data moving (#5781)
* use an async stream to prefetch and move data host-to-device (see the sketch after this entry)
* Remove redundant code
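A minimal sketch (not the Gemini code) of the general technique of overlapping host-to-device copies with compute via a side CUDA stream; the names here are illustrative:

```python
import torch

copy_stream = torch.cuda.Stream()

def prefetch_h2d(cpu_tensor: torch.Tensor) -> torch.Tensor:
    # Pinned host memory is required for a truly asynchronous H2D copy.
    pinned = cpu_tensor.pin_memory()
    with torch.cuda.stream(copy_stream):
        gpu_tensor = pinned.to("cuda", non_blocking=True)
    return gpu_tensor

# The consumer must synchronize with the copy stream before reading the data:
# torch.cuda.current_stream().wait_stream(copy_stream)
```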
* [gemini] quick fix on possible async operation (#5803)
* [gemini] quick fix on possible async operation
* [gemini] quick fix on possible async operation
* [shardformer] upgrade transformers to 4.39.3 (#5815)
* [shardformer]upgrade transformers for gpt2/gptj/whisper (#5807)
* [shardformer] fix modeling of gpt2 and gptj
* [shardformer] fix whisper modeling
* [misc] update requirements
---------
Co-authored-by: ver217 <lhx0217@gmail.com>
* [shardformer]upgrade transformers for mistral (#5808)
* upgrade transformers for mistral
* fix
* fix
* [shardformer]upgrade transformers for llama (#5809)
* update transformers
fix
* fix
* fix
* [inference] upgrade transformers (#5810)
* update transformers
fix
* fix
* fix
* fix
* fix
* [gemini] update transformers for gemini (#5814)
---------
Co-authored-by: ver217 <lhx0217@gmail.com>
* Support 4d parallel + flash attention (#5789)
* support tp + sp + pp
* remove comments
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
---------
Signed-off-by: char-1ee <xingjianli59@gmail.com>
Co-authored-by: Yuanheng Zhao <54058983+yuanheng-zhao@users.noreply.github.com>
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: duanjunwen <935724073@qq.com>
Co-authored-by: yuehuayingxueluo <867460659@qq.com>
Co-authored-by: Edenzzzz <wenxuan.tan@wisc.edu>
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: botbw <wang1570@e.ntu.edu.sg>
Co-authored-by: Charles Coulombe <ccoulombe@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: YeAnbang <anbangy2@outlook.com>
Co-authored-by: char-1ee <xingjianli59@gmail.com>
Co-authored-by: Runyu Lu <77330637+LRY89757@users.noreply.github.com>
Co-authored-by: YeAnbang <44796419+YeAnbang@users.noreply.github.com>
Co-authored-by: Guangyao Zhang <xjtu521@qq.com>
* [zero] fix hook bug
* [zero] add low level optimizer back (#5839)
* [zero] fix param & refactor
* [zero] add back original low level opt
* [zero] remove moe related
* [zero] pass zero tests
* [zero] refactor
* [chore] add del func back
* [zero] comments and naming (#5840)
* [zero] modify api (#5843)
* [zero] modify api
* [test] remove _grad_store access in tests
* [test] fix (#5857)
* [CI] skip openmoe CI check
* [CI] fix pre-commit
* [zero] remove redundant member init (#5862)
* [misc] remove useless code, modify the pg mesh implementation
* [misc] remove useless code, modify the pg mesh implementation
* [misc] use tempfile
* resolve conflict with main branch
* [misc] use tempfile in test_moe_checkpoint.py
* [misc] remove useless code, add assertion about sequence parallel, move logger into function
* [misc] remove useless code
---------
Signed-off-by: char-1ee <xingjianli59@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: Edenzzzz <wenxuan.tan@wisc.edu>
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: botbw <wang1570@e.ntu.edu.sg>
Co-authored-by: Yuanheng Zhao <54058983+yuanheng-zhao@users.noreply.github.com>
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: duanjunwen <935724073@qq.com>
Co-authored-by: yuehuayingxueluo <867460659@qq.com>
Co-authored-by: Charles Coulombe <ccoulombe@users.noreply.github.com>
Co-authored-by: YeAnbang <anbangy2@outlook.com>
Co-authored-by: char-1ee <xingjianli59@gmail.com>
Co-authored-by: Runyu Lu <77330637+LRY89757@users.noreply.github.com>
Co-authored-by: YeAnbang <44796419+YeAnbang@users.noreply.github.com>
Co-authored-by: Guangyao Zhang <xjtu521@qq.com>
* update to fully overlap, still debugging
* improve interface
* fixed deadlock bug
* debug NaN loss
* (experimental) use one comm group for send_fw_recv_fw to fix NaN
* cleaned up interfaces; use one batch p2p for all
* clean up; removed the double p2p batch case
* p2p test passed
* improve overlap: send fwd before backward
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* tentatively use 2 p2p batches
* remove two p2p batches
* fix typos
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* remove pp.sh
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: root <root@notebook-c55824c0-7742-45e8-9591-c855bb77ad29-0.notebook-c55824c0-7742-45e8-9591-c855bb77ad29.colossal-ai.svc.cluster.local>
* [gemini] async grad chunk reduce (all-reduce & reduce-scatter); a hedged sketch of async collectives follows
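A minimal sketch of launching a collective asynchronously and completing it later, which is the general mechanism behind overlapping chunk reduction with other work; the function and tensor names are assumptions, not the Gemini implementation:

```python
import torch
import torch.distributed as dist

def reduce_chunk_async(grad_chunk: torch.Tensor):
    # Kick off the all-reduce without blocking; NCCL runs it on its own stream.
    handle = dist.all_reduce(grad_chunk, op=dist.ReduceOp.SUM, async_op=True)
    return handle

# ... other compute can proceed here ...
# handle.wait()  # complete the reduction before the gradients are consumed
```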
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [gemini] add test
* [gemini] rename func
* [gemini] update llama benchmark
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [gemini] use tensor counter
* [gemini] change default config in GeminiPlugin and GeminiDDP
* [chore] typo
* [gemini] fix sync issue & add test cases
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [feat] Add distributed lamb; minor fixes in DeviceMesh (#5476)
* init: add dist lamb; add debiasing for lamb
* dist lamb tester mostly done
* all tests passed
* add comments
* all tests passed. Removed debugging statements
* moved setup_distributed inside plugin. Added dist layout caching
* organize better
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [hotfix] Improve tester precision by removing ZeRO on vanilla lamb (#5576)
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [optim] add distributed came (#5526)
* test CAME under LowLevelZeroOptimizer wrapper
* test CAME TP row and col pass
* test CAME zero pass
* came zero add master and worker param id convert
* came zero test pass
* came zero test pass
* test distributed came passed
* refactor code, modify some expressions and add comments
* minor fix of test came
* minor fix of dist_came and test
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* minor fix of dist_came and test
* rebase dist-optim
* rebase dist-optim
* fix remaining comments
* add test dist came using booster api
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [optim] Distributed Adafactor (#5484)
* [feature] solve conflict; update optimizer readme;
* [feature] update optimizer readme;
* [fix] fix testcase;
* [feature] Add transformer-bert to testcase; solve a bug related to indivisible shape (induced when use_zero and tp is row parallel);
* [feature] Add transformers_bert model zoo in testcase;
* [feature] add user documentation to docs/source/feature.
* [feature] add API Reference & Sample to optimizer Readme; add state check for bert exam;
* [feature] modify user documentation;
* [fix] fix readme format issue;
* [fix] add zero=0 in testcase; cache argument in dict;
* [fix] fix precision issue;
* [feature] add distributed rms;
* [feature] remove useless comment in testcase;
* [fix] Remove useless test; open zero test; remove fp16 test in bert exam;
* [feature] Extract distributed rms function;
* [feature] add booster + lowlevelzeroPlugin in test;
* [feature] add Start_with_booster_API case in md; add Supporting Information in md;
* [fix] Also remove state movement in base adafactor;
* [feature] extract factor function;
* [feature] add LowLevelZeroPlugin test;
* [fix] add tp=False and zero=True in logic;
* [fix] fix use zero logic;
* [feature] add row residue logic in column parallel factor;
* [feature] add check optim state func;
* [feature] Remove duplicate logic;
* [feature] update optim state check func and fix precision test bug;
* [fix] update/fix optim state; precision issue still exists;
* [fix] Add use_zero check in _rms; Add plugin support info in Readme; Add Dist Adafactor init Info;
* [feature] removed print & comments in utils;
* [feature] update Readme;
* [feature] add LowLevelZeroPlugin test with Bert model zoo;
* [fix] fix logic in _rms;
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [fix] remove comments in testcase;
* [feature] add zh-Han Readme;
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] refactor dist came; fix precision error; add low level zero test with bert model zoo; (#5676)
* [feature] daily update;
* [fix] fix dist came;
* [feature] refactor dist came; fix precision error; add low level zero test with bert model zoo;
* [fix] open rms; fix low level zero test; fix dist came test function name;
* [fix] remove redundant test;
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] Add Galore (Adam, Adafactor) and distributed GaloreAdamW8bit (#5570)
* init: add dist lamb; add debiasing for lamb
* dist lamb tester mostly done
* all tests passed
* add comments
* all tests passed. Removed debugging statements
* moved setup_distributed inside plugin. Added dist layout caching
* organize better
* update comments
* add initial distributed galore
* add initial distributed galore
* add galore set param utils; change setup_distributed interface
* projected grad precision passed
* basic precision tests passed
* tests passed; located svd precision issue in fwd-bwd; disabled these tests
* Plugin DP + TP tests passed
* move get_shard_dim to d_tensor
* add comments
* remove useless files
* remove useless files
* fix zero typo
* improve interface
* remove moe changes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix import
* fix deepcopy
* update came & adafactor to main
* fix param map
* fix typo
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Hotfix] Remove one buggy test case from dist_adafactor for now (#5692)
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: chongqichuizi875 <107315010+chongqichuizi875@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: duanjunwen <54985467+duanjunwen@users.noreply.github.com>
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
* [misc] remove config arg from initialize
* [misc] remove old tensor constructor
* [plugin] add npu support for ddp
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [devops] fix doc test ci
* [test] fix test launch
* [doc] update launch doc
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [feature] qlora support
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* qlora follow-up commit
* migrate quantization folder to colossalai/
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* minor fixes
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [devops] remove post commit ci
* [misc] run pre-commit on all files
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* sequence parallel optimization
* validate sequence parallel in llama (code to be polished)
* shardformer api writing
* integrate sequence parallel in ShardFormer
* fix pp bugs and sp bugs for LlaMa model
* integrating ring-based sequence parallelism into ShardFormer
* [sequence parallelism]: Add fused megatron function
* integrating ring-based sequence parallelism into ShardFormer
---------
Co-authored-by: linsj20 <linsj20@mails.tsinghua.edu.cn>
* fix bugs when using sp and flash attention together
* fix operation function name
* support flash attention for ulysses-style sp
* clarify sp process group
* fix compatibility bugs in moe plugin
* fix fused linear bugs
* fix linear layer test
* support gpt model all-to-all sp
* modify shard data dimension (meant to be dim=-1)
* support megatron-style sp and distributed attn for llama model
* [shardformer] add megatron sp to llama
* support llama7B 128k with distributed attention
* [shardformer] robustness enhancement
* add block attn
* sp mode 1: keep input as a complete sequence
* fix sp compatibility
* finish sp mode 3 support for gpt
* using all_to_all_single when batch size is 1
* support mode 2 sp in gpt2 (#5)
* [shardformer] add megatron sp to llama
* support llama7B 128k with distributed attention
* [shardformer] robustness enhancement
* add block attn
* sp mode 1: keep input as a complete sequence
* fix sp compatibility
* refactor ring implementation
* support mode 2 sp in gpt2
* polish code
* enable distributed attn mask when using sp mode 2 and 3 in llama
* automatically enable flash attn when using sp mode 2 and 3 in llama
* inplace attn mask
* add zero2 support for sequence parallel
* polish code
* fix bugs
* fix gemini checkpoint io
* loose tensor checking atol and rtol
* add comment
* fix llama layernorm grad
* fix zero grad
* fix zero grad
* fix conflict
* update split and gather auto grad func
* sequence parallel: inside text split (#6)
* polish code (part 1)
* polish code (part 2)
* polish code (part 2.5)
* polish code (part 3)
* sequence parallel: inside text split
* miscellaneous minor fixes
* polish code
* fix ulysses style ZeRO
* sequence parallel: inside text split
* miscellaneous minor fixes
* disaggregate sp group and dp group for sp
* fix llama and gpt sp
* polish code
* move ulysses grad sync to ddp (#9)
* remove zero_stage and unbind the grad sync for alltoall sp
* add 2d group creation test
* move ulysses grad sync to ddp
* add 2d group creation test
* remove useless code
* change shard config not to enable sp when enable_all_optimizations
* add sp warnings for several models
* remove useless code
---------
Co-authored-by: linsj20 <linsj20@mails.tsinghua.edu.cn>
* fix: simplify merge_batch
* fix: use return_outputs=False to eliminate extra memory consumption
* feat: add return_outputs warning
* style: remove `return_outputs=False` as it is the default value
* [devops] fix compatibility
* [hotfix] update compatibility test on pr
* [devops] fix compatibility
* [devops] record duration during comp test
* [test] decrease test duration
* fix falcon
* test: add more p2p tests
* fix: remove send_forward_recv_forward as p2p op list needs to use the same group
* fix: make send and receive atomic
* feat: update P2PComm fn
* feat: add metadata cache in 1f1b
* feat: add metadata cache in interleaved pp
* feat: modify is_xx_stage fn
* revert: add _broadcast_object_list
* feat: add interleaved pp in llama policy
* feat: set NCCL_BUFFSIZE in HybridParallelPlugin (see the sketch below)
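The exact value and the place where the plugin sets it are not shown here; the snippet below only illustrates the general mechanism (NCCL reads this environment variable, in bytes, when communicators are created, so it must be set before process-group initialization):

```python
import os

# Illustrative value: a 128 MiB NCCL buffer. Must be exported before
# torch.distributed.init_process_group() for NCCL to pick it up.
os.environ.setdefault("NCCL_BUFFSIZE", str(128 * 1024 * 1024))
```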
* fix 3d checkpoint load when booster boost without optimizer
fix 3d checkpoint load when booster boost without optimizer
* test ci
* revert ci
* fix
fix
* [shardformer] implement policy for all GPT-J models and test
* [shardformer] support interleaved pipeline parallel for bert finetune
* [shardformer] shardformer support falcon (#4883)
* [shardformer]: fix interleaved pipeline for bert model (#5048)
* [hotfix]: disable seq parallel for gptj and falcon, and polish code (#5093)
* Add Mistral support for Shardformer (#5103)
* [shardformer] add tests to mistral (#5105)
---------
Co-authored-by: Pengtai Xu <henryxu880@gmail.com>
Co-authored-by: ppt0011 <143150326+ppt0011@users.noreply.github.com>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: eric8607242 <e0928021388@gmail.com>
* [npu] setup device utils (#5047)
* [npu] add npu device support
* [npu] support low level zero
* [test] update npu zero plugin test
* [hotfix] fix import
* [test] recover tests
* [npu] gemini support npu (#5052)
* [npu] refactor device utils
* [gemini] support npu
* [example] llama2+gemini support npu
* [kernel] add arm cpu adam kernel (#5065)
* [kernel] add arm cpu adam
* [optim] update adam optimizer
* [kernel] arm cpu adam remove bf16 support
* add test
* fix no_sync bug in low level zero plugin
* fix test
* add argument for grad accum
* add grad accum in backward hook for gemini
* finish implementation, rewrite tests
* fix test
* skip stuck model in low level zero test
* update doc
* optimize communication & fix gradient checkpoint
* modify doc
* clean up code
* update cpu adam fp16 case
* [feature] support no master weights for low level zero plugin
* [feature] support no master weights for low level zero plugin, remove data copy when no master weights
* remove data copy and typecasting when no master weights
* not load weights to cpu when using no master weights
* fix grad: use fp16 grad when no master weights
* only do not update working param when no master weights
* fix: only do not update working param when no master weights
* fix: passing params in dict format in hybrid plugin
* fix: remove extra params (tp_process_group) in hybrid_parallel_plugin
* add APIs
* implement save_sharded_model
* add test for hybrid checkpointio
* implement naive loading for sharded model
* implement efficient sharded model loading
* open a new file for hybrid checkpoint_io
* small fix
* fix circular importing
* fix docstring
* arrange arguments and apis
* small fix
* [shardformer/sequence parallel] Support sequence parallel for gpt2 (#4384)
* [sequence parallel] add sequence parallel linear col/row support (#4336)
* add sequence parallel linear col/row support
* add annotation
* add annotation
* add support for gpt2 fused qkv linear layer
* support sequence parallel in GPT2
* add docstring and note
* add requirements
* remove unused flash-attn
* modify flash attn test
* modify flash attn setting
* modify flash attn code
* add assert before divide, rename forward function
* [shardformer/test] fix gpt2 test with seq-parallel
* [shardformer/sequence parallel] Overlap input gather and grad computation during col backward (#4401)
* overlap gather input / grad computing during col backward
* modify test for overlap
* simplify code
* fix code and modify cuda stream synchronize
* [shardformer/sequence parallel] polish code
* add naive optimizer for 3DPlugin/refactor gpt2 shardformer test
* merge tests of PP/DP/TP combinations into one test file
* fix bug when sync grad for dp in HybridPlugin
* update supported precisions for 3DPlugin/fix bug when shifting tp_degree
* improve the passing of lazy_init
* modify lazy_init/use sync_shared_params