hxwang
b2e9745888
[chore] sync
2024-05-16 04:45:06 +00:00
hxwang
6e38eafebe
[gemini] prefetch chunks
2024-05-15 16:51:44 +08:00
Jianghai
f47f2fbb24
[Inference] Fix API server, test and example ( #5712 )
...
* fix api server
* fix generation config
* fix api server
* fix comments
* fix infer hanging bug
* resolve comments, change backend to free port
2024-05-15 15:47:31 +08:00
Runyu Lu
74c47921fa
[Fix] Llama3 Load/Omit CheckpointIO Temporarily ( #5717 )
...
* Fix Llama3 Load error
* Omit Checkpoint IO Temporarily
2024-05-14 20:17:43 +08:00
Edenzzzz
43995ee436
[Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor ( #5694 )
...
* [feat] Add distributed lamb; minor fixes in DeviceMesh (#5476 )
* init: add dist lamb; add debiasing for lamb
* dist lamb tester mostly done
* all tests passed
* add comments
* all tests passed. Removed debugging statements
* moved setup_distributed inside plugin. Added dist layout caching
* organize better
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [hotfix] Improve tester precision by removing ZeRO on vanilla lamb (#5576 )
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [optim] add distributed came (#5526 )
* test CAME under LowLevelZeroOptimizer wrapper
* test CAME TP row and col pass
* test CAME zero pass
* came zero add master and worker param id convert
* came zero test pass
* came zero test pass
* test distributed came passed
* reform code, Modify some expressions and add comments
* minor fix of test came
* minor fix of dist_came and test
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* minor fix of dist_came and test
* rebase dist-optim
* rebase dist-optim
* fix remaining comments
* add test dist came using booster api
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [optim] Distributed Adafactor (#5484 )
* [feature] solve conflict; update optimizer readme;
* [feature] update optimize readme;
* [fix] fix testcase;
* [feature] Add transformer-bert to testcase;solve a bug related to indivisible shape (induction in use_zero and tp is row parallel);
* [feature] Add transformers_bert model zoo in testcase;
* [feature] add user documentation to docs/source/feature.
* [feature] add API Reference & Sample to optimizer Readme; add state check for bert exam;
* [feature] modify user documentation;
* [fix] fix readme format issue;
* [fix] add zero=0 in testcase; cached augment in dict;
* [fix] fix precision issue;
* [feature] add distributed rms;
* [feature] remove useless comment in testcase;
* [fix] Remove useless test; open zero test; remove fp16 test in bert exam;
* [feature] Extract distributed rms function;
* [feature] add booster + lowlevelzeroPlugin in test;
* [feature] add Start_with_booster_API case in md; add Supporting Information in md;
* [fix] Also remove state movement in base adafactor;
* [feature] extract factor function;
* [feature] add LowLevelZeroPlugin test;
* [fix] add tp=False and zero=True in logic;
* [fix] fix use zero logic;
* [feature] add row residue logic in column parallel factor;
* [feature] add check optim state func;
* [feature] Remove duplicate logic;
* [feature] update optim state check func and precision test bug;
* [fix] update/fix optim state; precision issue still exists;
* [fix] Add use_zero check in _rms; Add plugin support info in Readme; Add Dist Adafactor init Info;
* [feature] removed print & comments in utils;
* [feature] update Readme;
* [feature] add LowLevelZeroPlugin test with Bert model zoo;
* [fix] fix logic in _rms;
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [fix] remove comments in testcase;
* [feature] add zh-Han Readme;
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] refactor dist came; fix precision error; add low level zero test with bert model zoo; (#5676 )
* [feature] daily update;
* [fix] fix dist came;
* [feature] refactor dist came; fix precision error; add low level zero test with bert model zoo;
* [fix] open rms; fix low level zero test; fix dist came test function name;
* [fix] remove redundant test;
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] Add Galore (Adam, Adafactor) and distributed GaloreAdamW8bit (#5570 )
* init: add dist lamb; add debiasing for lamb
* dist lamb tester mostly done
* all tests passed
* add comments
* all tests passed. Removed debugging statements
* moved setup_distributed inside plugin. Added dist layout caching
* organize better
* update comments
* add initial distributed galore
* add initial distributed galore
* add galore set param utils; change setup_distributed interface
* projected grad precision passed
* basic precision tests passed
* tests passed; located svd precision issue in fwd-bwd; banned these tests
* Plugin DP + TP tests passed
* move get_shard_dim to d_tensor
* add comments
* remove useless files
* remove useless files
* fix zero typo
* improve interface
* remove moe changes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix import
* fix deepcopy
* update came & adafactor to main
* fix param map
* fix typo
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Hotfix] Remove one buggy test case from dist_adafactor for now (#5692 )
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: chongqichuizi875 <107315010+chongqichuizi875@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: duanjunwen <54985467+duanjunwen@users.noreply.github.com>
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
2024-05-14 13:52:45 +08:00
Steve Luo
7806842f2d
add paged-attention v2: support seq length split across thread block ( #5707 )
2024-05-14 12:46:54 +08:00
Runyu Lu
18d67d0e8e
[Feat]Inference RPC Server Support ( #5705 )
...
* rpc support source
* kv cache logical/physical disaggregation
* sampler refactor
* colossalai launch built in
* Unit test
* Rpyc support
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-05-14 10:00:55 +08:00
hugo-syn
393c8f5b7f
[hotfix] fix inference typo ( #5438 )
2024-05-13 21:06:44 +08:00
yuehuayingxueluo
de4bf3dedf
[Inference]Adapt repetition_penalty and no_repeat_ngram_size ( #5708 )
...
* Adapt repetition_penalty and no_repeat_ngram_size
* fix no_repeat_ngram_size_logit_process
* remove batch_updated
* fix annotation
* modified codes based on the review feedback.
* rm get_batch_token_ids
2024-05-11 15:13:25 +08:00
Wang Binluo
537f6a3855
[Shardformer]fix the num_heads assert for llama model and qwen model ( #5704 )
...
* fix the num_heads assert
* fix the transformers import
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix the import
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-05-10 15:33:39 +08:00
Wang Binluo
a3cc68ca93
[Shardformer] Support the Qwen2 model ( #5699 )
...
* feat: support qwen2 model
* fix: modify model config and add Qwen2RMSNorm
* fix qwen2 model conflicts
* test: add qwen2 shard test
* to: add qwen2 auto policy
* support qwen model
* fix the conflicts
* add try catch
* add transformers version for qwen2
* add the ColoAttention for the qwen2 model
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* add the unit test version check
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix the test input bug
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix the version check
* fix the version check
---------
Co-authored-by: Wenhao Chen <cwher@outlook.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-05-09 20:04:25 +08:00
傅剑寒
bfad39357b
[Inference/Feat] Add quant kvcache interface ( #5700 )
...
* add quant kvcache interface
* delete unused output
* complete args comments
2024-05-09 18:03:24 +08:00
flybird11111
d4c5ef441e
[gemini]remove registered gradients hooks ( #5696 )
...
* fix gemini
fix gemini
* fix
fix
2024-05-09 10:29:49 +08:00
CjhHa1
bc9063adf1
resolve rebase conflicts on branch feat/online-serving
2024-05-08 15:20:53 +00:00
Jianghai
61a1b2e798
[Inference] Fix bugs and docs for feat/online-server ( #5598 )
...
* fix test bugs
* add do sample test
* del useless lines
* fix comments
* fix tests
* delete version tag
* delete version tag
* add
* del test server
* fix test
* fix
* Revert "add"
This reverts commit b9305fb024.
2024-05-08 15:20:53 +00:00
CjhHa1
7bbb28e48b
[Inference] resolve rebase conflicts
...
fix
2024-05-08 15:20:53 +00:00
Jianghai
c064032865
[Online Server] Chat Api for streaming and not streaming response ( #5470 )
...
* fix bugs
* fix bugs
* fix api server
* fix api server
* add chat api and test
* del request.n
2024-05-08 15:20:53 +00:00
Jianghai
de378cd2ab
[Inference] Finish Online Serving Test, add streaming output api, continuous batching test and example ( #5432 )
...
* finish online test and add examples
* fix test_contionus_batching
* fix some bugs
* fix bash
* fix
* fix inference
* finish revision
* fix typos
* revision
2024-05-08 15:20:52 +00:00
Jianghai
69cd7e069d
[Inference] ADD async and sync Api server using FastAPI ( #5396 )
...
* add api server
* fix
* add
* add completion service and fix bug
* add generation config
* revise shardformer
* fix bugs
* add docstrings and fix some bugs
* fix bugs and add choices for prompt template
2024-05-08 15:18:28 +00:00
yuehuayingxueluo
d482922035
[Inference] Support the logic related to ignoring EOS token ( #5693 )
...
* Adapt temperature processing logic
* add ValueError for top_p and top_k
* add GQA Test
* fix except_msg
* support ignore EOS token
* change variable's name
* fix annotation
2024-05-08 19:59:10 +08:00
yuehuayingxueluo
9c2fe7935f
[Inference]Adapt temperature processing logic ( #5689 )
...
* Adapt temperature processing logic
* add ValueError for top_p and top_k
* add GQA Test
* fix except_msg
2024-05-08 17:58:29 +08:00
Wang Binluo
22297789ab
Merge pull request #5684 from wangbluo/parallel_output
...
[Shardformer] Add Parallel output for shardformer models
2024-05-07 22:59:42 -05:00
Yuanheng Zhao
55cc7f3df7
[Fix] Fix Inference Example, Tests, and Requirements ( #5688 )
...
* clean requirements
* modify example inference struct
* add test ci scripts
* mark test_infer as submodule
* rm deprecated cls & deps
* import of HAS_FLASH_ATTN
* prune inference tests to be run
* prune triton kernel tests
* increment pytest timeout mins
* revert import path in openmoe
2024-05-08 11:30:15 +08:00
Yuanheng Zhao
f9afe0addd
[hotfix] Fix KV Heads Number Assignment in KVCacheManager ( #5695 )
...
- Fix key value heads number assignment in KVCacheManager, as well as the method of accessing it
2024-05-07 23:13:14 +08:00
wangbluo
4e50cce26b
fix the mistral model
2024-05-07 09:17:56 +00:00
wangbluo
a8408b4d31
remove comment code
2024-05-07 07:08:56 +00:00
pre-commit-ci[bot]
ca56b93d83
[pre-commit.ci] auto fixes from pre-commit.com hooks
...
for more information, see https://pre-commit.ci
2024-05-07 07:07:09 +00:00
wangbluo
108ddfb795
add parallel_output for the opt model
2024-05-07 07:05:53 +00:00
pre-commit-ci[bot]
88f057ce7c
[pre-commit.ci] auto fixes from pre-commit.com hooks
...
for more information, see https://pre-commit.ci
2024-05-07 07:03:47 +00:00
flybird11111
77ec773388
[zero]remove registered gradients hooks ( #5687 )
...
* remove registered hooks
fix
fix
fix zero
fix
fix
fix
fix
fix zero
fix zero
fix
fix
fix
* fix
fix
fix
2024-05-07 12:01:38 +08:00
Yuanheng Zhao
8754abae24
[Fix] Fix & Update Inference Tests (compatibility w/ main)
2024-05-05 16:28:56 +00:00
Yuanheng Zhao
56ed09aba5
[sync] resolve conflicts of merging main
2024-05-05 05:14:00 +00:00
Yuanheng Zhao
537a3cbc4d
[kernel] Support New KCache Layout - Triton Kernel ( #5677 )
...
* kvmemcpy triton for new kcache layout
* revise tests for new kcache layout
* naive triton flash decoding - new kcache layout
* rotary triton kernel - new kcache layout
* remove redundancy - triton decoding
* remove redundancy - triton kvcache copy
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-05-03 17:20:45 +08:00
wangbluo
2632916329
remove useless code
2024-05-01 09:23:43 +00:00
yuehuayingxueluo
f79963199c
[inference]Add alibi to flash attn function ( #5678 )
...
* add alibi to flash attn function
* rm redundant modifications
2024-04-30 19:35:05 +08:00
wangbluo
9efc79ef24
add parallel output for mistral model
2024-04-30 08:10:20 +00:00
Steve Luo
5cd75ce4c7
[Inference/Kernel] refactor kvcache manager and rotary_embedding and kvcache_memcpy oper… ( #5663 )
...
* refactor kvcache manager and rotary_embedding and kvcache_memcpy operator
* refactor decode_kv_cache_memcpy
* enable alibi in pagedattention
2024-04-30 15:52:23 +08:00
yuehuayingxueluo
5f00002e43
[Inference] Adapt Baichuan2-13B TP ( #5659 )
...
* adapt to baichuan2 13B
* add baichuan2 13B TP
* update baichuan tp logic
* rm unused code
* Fix TP logic
* fix alibi slopes tp logic
* rm nn.Module
* Polished the code.
* change BAICHUAN_MODEL_NAME_OR_PATH
* Modified the logic for loading Baichuan weights.
* fix typos
2024-04-30 15:47:07 +08:00
Wang Binluo
d3f34ee8cc
[Shardformer] add assert for num of attention heads divisible by tp_size ( #5670 )
...
* add assert for num of attention heads divisible by tp_size
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-29 18:47:47 +08:00
flybird11111
6af6d6fc9f
[shardformer] support bias_gelu_jit_fused for models ( #5647 )
...
* support gelu_bias_fused for gpt2
* support gelu_bias_fused for gpt2
fix
fix
fix
* fix
fix
* fix
2024-04-29 15:33:51 +08:00
Hongxin Liu
7f8b16635b
[misc] refactor launch API and tensor constructor ( #5666 )
...
* [misc] remove config arg from initialize
* [misc] remove old tensor contrusctor
* [plugin] add npu support for ddp
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [devops] fix doc test ci
* [test] fix test launch
* [doc] update launch doc
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-29 10:40:11 +08:00
linsj20
91fa553775
[Feature] qlora support ( #5586 )
...
* [feature] qlora support
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* qlora follow commit
* migrate qutization folder to colossalai/
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* minor fixes
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-28 10:51:27 +08:00
flybird11111
8954a0c2e2
[LowLevelZero] low level zero support lora ( #5153 )
...
* low level zero support lora
low level zero support lora
* add checkpoint test
* add checkpoint test
* fix
* fix
* fix
* fix
fix
fix
fix
* fix
* fix
fix
fix
fix
fix
fix
fix
* fix
* fix
fix
fix
fix
fix
fix
fix
* fix
* test ci
* git # This is a combination of 3 commits.
Update low_level_zero_plugin.py
Update low_level_zero_plugin.py
fix
fix
fix
* fix naming
fix naming
fix naming
fix
2024-04-28 10:51:27 +08:00
Baizhou Zhang
14b0d4c7e5
[lora] add lora APIs for booster, support lora for TorchDDP ( #4981 )
...
* add apis and peft requirement
* add liscense and implement apis
* add checkpointio apis
* add torchddp fwd_bwd test
* add support_lora methods
* add checkpointio test and debug
* delete unneeded codes
* remove peft from LICENSE
* add concrete methods for enable_lora
* simplify enable_lora api
* fix requirements
2024-04-28 10:51:27 +08:00
Yuanheng Zhao
5be590b99e
[kernel] Support new KCache Layout - Context Attention Triton Kernel ( #5658 )
...
* add context attn triton kernel - new kcache layout
* add benchmark triton
* tiny revise
* trivial - code style, comment
2024-04-26 17:51:49 +08:00
flybird11111
8b7d535977
fix gptj ( #5652 )
2024-04-26 11:52:27 +08:00
yuehuayingxueluo
3c91e3f176
[Inference]Adapt to baichuan2 13B ( #5614 )
...
* adapt to baichuan2 13B
* adapt to baichuan2 13B
* change BAICHUAN_MODEL_NAME_OR_PATH
* fix test_decoding_attn.py
* Modifications based on review comments.
* change BAICHUAN_MODEL_NAME_OR_PATH
* mv attn mask processes to test flash decoding
* mv get_alibi_slopes baichuan modeling
* fix bugs in test_baichuan.py
2024-04-25 23:11:30 +08:00
Hongxin Liu
1b387ca9fe
[shardformer] refactor pipeline grad ckpt config ( #5646 )
...
* [shardformer] refactor pipeline grad ckpt config
* [shardformer] refactor pipeline grad ckpt config
* [pipeline] fix stage manager
2024-04-25 15:19:30 +08:00
Season
7ef91606e1
[Fix]: implement thread-safety singleton to avoid deadlock for very large-scale training scenarios ( #5625 )
...
* implement thread-safety singleton
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* refactor singleton implementation
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-25 14:45:52 +08:00
Hongxin Liu
bbb2c21f16
[shardformer] fix chatglm implementation ( #5644 )
...
* [shardformer] fix chatglm policy
* [shardformer] fix chatglm flash attn
* [shardformer] update readme
* [shardformer] fix chatglm init
* [shardformer] fix chatglm test
* [pipeline] fix chatglm merge batch
2024-04-25 14:41:17 +08:00
Steve Luo
a8fd3b0342
[Inference/Kernel] Optimize paged attention: Refactor key cache layout ( #5643 )
...
* optimize flashdecodingattention: refactor code with a different key cache layout (from [num_blocks, num_kv_heads, block_size, head_size] to [num_blocks, num_kv_heads, head_size/x, block_size, x])
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-25 14:24:02 +08:00
flybird11111
5d88ef1aaf
[shardformer] remove useless code ( #5645 )
2024-04-25 13:46:39 +08:00
flybird11111
148506c828
[coloattention]modify coloattention ( #5627 )
...
* modify coloattention
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix
* fix
* fix
fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-25 10:47:14 +08:00
Edenzzzz
7ee569b05f
[hotfix] Fixed fused layernorm bug without apex ( #5609 )
...
* fixed fused layernorm bug without apex
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* same for flash attn
* remove flash attn check
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-24 23:04:06 +08:00
Wang Binluo
0d0a582033
[shardformer] update transformers ( #5583 )
...
* flash_attention forward upgrade
* llama_model_forward
* remove useless comment
* update the requirements.txt
* add the transformers version requirements
* remove the LATEST VERSION try
* [shardformer] update bloom model (#5518 )
* update bloom model
* remove the version restriction
* [shardformer] update_falcon (#5520 )
* [shardformer] update mistral model (#5511 )
* [shardformer] update gpt2 (#5502 )
* [shardformer] update gptj model (#5503 )
* [shardformer] update opt (#5522 )
* [shardformer] update t5 model (#5524 )
* [shardformer] update whisper model (#5529 )
* [shardformer] update vit model (#5530 )
* update vit model
* remove the output_hidden_states
* [shardformer] fix llama modeling
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [zero] support multiple (partial) backward passes (#5596 )
* [zero] support multiple (partial) backward passes
* [misc] update requirements
* [zero] support multiple (partial) backward passes (#5596 )
* [zero] support multiple (partial) backward passes
* [misc] update requirements
* fix conflicts
* [doc] fix ColossalMoE readme (#5599 )
* fix readme
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* merge with main
* merge with main
* llama_model_forward
* remove useless comment
* remove the LATEST VERSION try
* [shardformer] update bloom model (#5518 )
* update bloom model
* remove the version restriction
* [shardformer] update mistral model (#5511 )
* [shardformer] update opt (#5522 )
* [shardformer] update whisper model (#5529 )
* [shardformer] fix llama modeling
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [hotfix] Fix examples no pad token & auto parallel codegen bug; (#5606 )
* fix no pad token bug
* fixed some auto parallel codegen bug, but might not run on torch 2.1
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [shardformer] fix pipeline grad ckpt (#5620 )
* [shardformer] fix pipeline grad ckpt
* [shardformer] fix whisper (#5628 )
* [test] fix llama model test
* fix the opt upgrade (#5634 )
* [shardformer] fix attn replacement (#5636 )
* [shardformer] update flashattention replacement (#5637 )
* update transformers
update transformers
fix
fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [test] fix llama test (#5638 )
* [gemini] fix buffer cast (#5639 )
* Fix shardformer upgrade (#5640 )
* fix llama model
* fix the mistral
* fix the shardformer model
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [shardformer]support pipeline parallelism for mistral. (#5642 )
* [shardformer] fix attn replacement (#5636 )
* [shardformer] update flashattention replacement (#5637 )
* update transformers
update transformers
fix
fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] Support LLaMA-3 CPT and ST (#5619 )
* support LLaMA-3
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Run pre-commit
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [example] update llama example (#5626 )
* [plugin] support dp inside for hybrid parallel
* [example] update llama benchmark
* [example] update llama benchmark
* [example] update llama readme
* [example] update llama readme
* [example] llama3 (#5631 )
* release llama3
* [release] llama3
* [release] llama3
* [release] llama3
* [release] llama3
* [test] fix llama test (#5638 )
* [gemini] fix buffer cast (#5639 )
* support pp for mistral
* fix
* fix
fix
fix
* fix
---------
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
---------
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
Co-authored-by: Edenzzzz <wenxuan.tan@wisc.edu>
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
2024-04-24 22:51:50 +08:00
Yuanheng Zhao
04863a9b14
[example] Update Llama Inference example ( #5629 )
...
* [example] add inference benchmark llama3
* revise inference config - arg
* remove unused args
* add llama generation demo script
* fix init rope in llama policy
* add benchmark-llama3 - cleanup
2024-04-23 22:23:07 +08:00
Hongxin Liu
4de4e31818
[example] update llama example ( #5626 )
...
* [plugin] support dp inside for hybrid parallel
* [example] update llama benchmark
* [example] update llama benchmark
* [example] update llama readme
* [example] update llama readme
2024-04-23 14:12:20 +08:00
Yuanheng Zhao
5d4c1fe8f5
[Fix/Inference] Fix GQA Triton and Support Llama3 ( #5624 )
...
* [fix] GQA calling of flash decoding triton
* fix kv cache alloc shape
* fix rotary triton - GQA
* fix sequence max length assigning
* Sequence max length logic
* fix scheduling and spec-dec
* skip without import error
* fix pytest - skip without ImportError
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-23 13:09:55 +08:00
Hongxin Liu
e094933da1
[shardformer] fix pipeline grad ckpt ( #5620 )
...
* [shardformer] fix pipeline grad ckpt
2024-04-22 11:25:39 +08:00
Edenzzzz
d83c633ca6
[hotfix] Fix examples no pad token & auto parallel codegen bug; ( #5606 )
...
* fix no pad token bug
* fixed some auto parallel codegen bug, but might not run on torch 2.1
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-04-18 18:15:50 +08:00
Runyu Lu
e37ee2fb65
[Feat]Tensor Model Parallel Support For Inference ( #5563 )
...
* tensor parallel support naive source
* [fix]precision, model load and refactor the framework
* add tp unit test
* docstring
* fix do_sample
2024-04-18 16:56:46 +08:00
Steve Luo
be396ad6cc
[Inference/Kernel] Add Paged Decoding kernel, sequence split within the same thread block ( #5531 )
...
* feat flash decoding for paged attention
* refactor flashdecodingattention
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-18 16:45:07 +08:00
flybird11111
a0ad587c24
[shardformer] refactor embedding resize ( #5603 )
...
* [branch rebase] rebase main to Feature/resize_embedding (#5554 )
* fix
* [release] update version (#5411 )
* [hotfix] fix typo s/keywrods/keywords etc. (#5429 )
* [devops] fix compatibility (#5444 )
* [devops] fix compatibility
* [hotfix] update compatibility test on pr
* [devops] fix compatibility
* [devops] record duration during comp test
* [test] decrease test duration
* fix falcon
* [shardformer] fix gathering output when using tensor parallelism (#5431 )
* fix
* padding vocab_size when using pipeline parallelism
padding vocab_size when using pipeline parallelism
fix
fix
* fix
* fix
fix
fix
* fix gather output
* fix
* fix
* fix
fix resize embedding
fix resize embedding
* fix resize embedding
fix
* revert
* revert
* revert
* [doc] release Open-Sora 1.0 with model weights (#5468 )
* [doc] release Open-Sora 1.0 with model weights
* [doc] release Open-Sora 1.0 with model weights
* [doc] release Open-Sora 1.0 with model weights
* [doc] update open-sora demo (#5479 )
* [doc] update open-sora demo
* [doc] update open-sora demo
* [doc] update open-sora demo
* [example] add grok-1 inference (#5485 )
* [misc] add submodule
* remove submodule
* [example] support grok-1 tp inference
* [example] add grok-1 inference script
* [example] refactor code
* [example] add grok-1 readme
* [example] add test ci
* [example] update readme
---------
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: digger yu <digger-yu@outlook.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
* [CI] run pre-commit (#5577 )
* fix
* [release] update version (#5411 )
* [hotfix] fix typo s/keywrods/keywords etc. (#5429 )
* [devops] fix compatibility (#5444 )
* [devops] fix compatibility
* [hotfix] update compatibility test on pr
* [devops] fix compatibility
* [devops] record duration during comp test
* [test] decrease test duration
* fix falcon
* [shardformer] fix gathering output when using tensor parallelism (#5431 )
* fix
* padding vocab_size when using pipeline parallelism
padding vocab_size when using pipeline parallelism
fix
fix
* fix
* fix
fix
fix
* fix gather output
* fix
* fix
* fix
fix resize embedding
fix resize embedding
* fix resize embedding
fix
* revert
* revert
* revert
* [doc] release Open-Sora 1.0 with model weights (#5468 )
* [doc] release Open-Sora 1.0 with model weights
* [doc] release Open-Sora 1.0 with model weights
* [doc] release Open-Sora 1.0 with model weights
* [doc] update open-sora demo (#5479 )
* [doc] update open-sora demo
* [doc] update open-sora demo
* [doc] update open-sora demo
* [example] add grok-1 inference (#5485 )
* [misc] add submodule
* remove submodule
* [example] support grok-1 tp inference
* [example] add grok-1 inference script
* [example] refactor code
* [example] add grok-1 readme
* [example] add test ci
* [example] update readme
* run pre-commit
---------
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: digger yu <digger-yu@outlook.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
* [rebase] rebase main to resize-embedding (#5581 )
* [release] grok-1 314b inference (#5490 )
* [release] grok-1 inference
* [release] grok-1 inference
* [release] grok-1 inference
* [example] update Grok-1 inference (#5495 )
* revise grok-1 example
* remove unused arg in scripts
* prevent re-installing torch
* update readme
* revert modifying colossalai requirements
* add perf
* trivial
* add tokenizer url
* [hotfix] set return_outputs=False in examples and polish code (#5404 )
* fix: simplify merge_batch
* fix: use return_outputs=False to eliminate extra memory consumption
* feat: add return_outputs warning
* style: remove `return_outputs=False` as it is the default value
* [release] grok-1 inference benchmark (#5500 )
* [release] grok-1 inference benchmark
* [release] grok-1 inference benchmark
* [release] grok-1 inference benchmark
* [release] grok-1 inference benchmark
* [release] grok-1 inference benchmark
* [shardformer]Fix lm parallel. (#5480 )
* fix
* padding vocab_size when using pipeline parallelism
padding vocab_size when using pipeline parallelism
fix
fix
* fix
* fix
fix
fix
* fix gather output
* fix
* fix
* fix
fix resize embedding
fix resize embedding
* fix resize embedding
fix
* revert
* revert
* revert
* fix lm forward distribution
* fix
* test ci
* fix
* [fix] fix grok-1 example typo (#5506 )
* [devops] fix example test ci (#5504 )
* Fix ColoTensorSpec for py11 (#5440 )
* fixed layout converter caching and updated tester
* Empty-Commit
* [shardformer] update colo attention to support custom mask (#5510 )
* [feature] refactor colo attention (#5462 )
* [extension] update api
* [feature] add colo attention
* [feature] update sdpa
* [feature] update npu attention
* [feature] update flash-attn
* [test] add flash attn test
* [test] update flash attn test
* [shardformer] update modeling to fit colo attention (#5465 )
* [misc] refactor folder structure
* [shardformer] update llama flash-attn
* [shardformer] fix llama policy
* [devops] update tensornvme install
* [test] update llama test
* [shardformer] update colo attn kernel dispatch
* [shardformer] update blip2
* [shardformer] update chatglm
* [shardformer] update gpt2
* [shardformer] update gptj
* [shardformer] update opt
* [shardformer] update vit
* [shardformer] update colo attention mask prep
* [shardformer] update whisper
* [test] fix shardformer tests (#5514 )
* [test] fix shardformer tests
* [test] fix shardformer tests
* [format] applied code formatting on changed files in pull request 5510 (#5517 )
Co-authored-by: github-actions <github-actions@github.com>
* [shardformer] fix pipeline forward error if custom layer distribution is used (#5189 )
* Use self.[distribute_layers|get_stage_index] to exploit custom layer distribution
* Change static methods for t5 layer distribution to member functions
* Change static methods for whisper layer distribution to member functions
* Replace whisper policy usage with self one
* Fix test case to use non-static layer distribution methods
* fix: fix typo
---------
Co-authored-by: Wenhao Chen <cwher@outlook.com>
* [Fix] Grok-1 use tokenizer from the same pretrained path (#5532 )
* [fix] use tokenizer from the same pretrained path
* trust remote code
* [ColossalChat] Update RLHF V2 (#5286 )
* Add dpo. Fix sft, ppo, lora. Refactor all
* fix and tested ppo
* 2nd round refactor
* add ci tests
* fix ci
* fix ci
* fix readme, style
* fix readme style
* fix style, fix benchmark
* reproduce benchmark result, remove useless files
* rename to ColossalChat
* use new image
* fix ci workflow
* fix ci
* use local model/tokenizer for ci tests
* fix ci
* fix ci
* fix ci
* fix ci timeout
* fix rm progress bar. fix ci timeout
* fix ci
* fix ci typo
* remove 3d plugin from ci temporary
* test environment
* cannot save optimizer
* support chat template
* fix readme
* fix path
* test ci locally
* restore build_or_pr
* fix ci data path
* fix benchmark
* fix ci, move ci tests to 3080, disable fast tokenizer
* move ci to 85
* support flash attention 2
* add all-in-one data preparation script. Fix colossal-llama2-chat chat template
* add hardware requirements
* move ci test data
* fix save_model, add unwrap
* fix missing bos
* fix missing bos; support grad accumulation with gemini
* fix ci
* fix ci
* fix ci
* fix llama2 chat template config
* debug sft
* debug sft
* fix colossalai version requirement
* fix ci
* add sanity check to prevent NaN loss
* fix requirements
* add dummy data generation script
* add dummy data generation script
* add dummy data generation script
* add dummy data generation script
* update readme
* update readme
* update readme and ignore
* fix logger bug
* support parallel_output
* modify data preparation logic
* fix tokenization
* update lr
* fix inference
* run pre-commit
---------
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
* [shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama (#5508 )
* feat: add `GradientCheckpointConfig` and `PipelineGradientCheckpointConfig`
* feat: apply `GradientCheckpointConfig` to policy and llama_forward
* feat: move `distribute_layer` and `get_stage_index` to PipelineStageManager
* fix: add optional args for `distribute_layer` and `get_stage_index`
* fix: fix changed API calls
* test: update llama tests
* style: polish `GradientCheckpointConfig`
* fix: fix pipeline utils tests
* fix incorrect sharding without zero (#5545 )
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [shardformer] Sequence Parallelism Optimization (#5533 )
* sequence parallel optimization
* validate sequence parallel in llama (code to be polished)
* shardformer api writing
* integrate sequence parallel in ShardFormer
* fix pp bugs and sp bugs for LlaMa model
* integrating ring-based sequence parallelism into ShardFormer
* [sequence parallelism]: Add fused megatron function
* integrating ring-based sequence parallelism into ShardFormer
---------
Co-authored-by: linsj20 <linsj20@mails.tsinghua.edu.cn>
* fix bugs when using sp and flashattention together
* fix operation function name
* support flash attention for ulysses-style sp
* clarify sp process group
* fix compatibility bugs in moe plugin
* fix fused linear bugs
* fix linear layer test
* support gpt model all-to-all sp
* modify shard data dimension (meant to be dim=-1)
* support megatron-style sp and distributed attn for llama model
* [shardformer] add megatron sp to llama
* support llama7B 128k with distributed attention
* [shardformer] robustness enhancement
* add block attn
* sp mode 1: keep input as a complete sequence
* fix sp compatibility
* finish sp mode 3 support for gpt
* using all_to_all_single when batch size is 1
* support mode 2 sp in gpt2 (#5 )
* [shardformer] add megatron sp to llama
* support llama7B 128k with distributed attention
* [shardformer] robustness enhancement
* add block attn
* sp mode 1: keep input as a complete sequence
* fix sp compatibility
* refactor ring implementation
* support mode 2 sp in gpt2
* polish code
* enable distributed attn mask when using sp mode 2 and 3 in llama
* automatically enable flash attn when using sp mode 2 and 3 in llama
* inplace attn mask
* add zero2 support for sequence parallel
* polish code
* fix bugs
* fix gemini checkpoint io
* loose tensor checking atol and rtol
* add comment
* fix llama layernorm grad
* fix zero grad
* fix zero grad
* fix conflict
* update split and gather auto grad func
* sequence parallel: inside text split (#6 )
* polish code (part 1)
* polish code (part 2)
* polish code (part 2.5)
* polish code (part 3)
* sequence parallel: inside text split
* miscellaneous minor fixes
* polish code
* fix ulysses style ZeRO
* sequence parallel: inside text split
* miscellaneous minor fixes
* disaggregate sp group and dp group for sp
* fix llama and gpt sp
* polish code
* move ulysses grad sync to ddp (#9 )
* remove zero_stage and unbind the grad sync for alltoall sp
* add 2d group creation test
* move ulysses grad sync to ddp
* add 2d group creation test
* remove useless code
* change shard config not to enable sp when enable_all_optimizations
* add sp warnings for several model
* remove useless code
---------
Co-authored-by: linsj20 <linsj20@mails.tsinghua.edu.cn>
* [hotfix] quick fixes to make legacy tutorials runnable (#5559 )
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [fix] fix typo s/muiti-node /multi-node etc. (#5448 )
* [hotfix] fix typo s/get_defualt_parser /get_default_parser (#5548 )
* [devops] remove post commit ci (#5566 )
* [devops] remove post commit ci
* [misc] run pre-commit on all files
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---------
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: Yuanheng Zhao <54058983+yuanheng-zhao@users.noreply.github.com>
Co-authored-by: Wenhao Chen <cwher@outlook.com>
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: Rocky Duan <dementrock@users.noreply.github.com>
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: Edenzzzz <wenxuan.tan@wisc.edu>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Insu Jang <insujang@umich.edu>
Co-authored-by: YeAnbang <44796419+YeAnbang@users.noreply.github.com>
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
Co-authored-by: linsj20 <linsj20@mails.tsinghua.edu.cn>
Co-authored-by: digger yu <digger-yu@outlook.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [shardformer]enable padding vocabulary size. (#5489 )
* padding vocab_size when using pipeline parallelism
padding vocab_size when using pipeline parallelism
fix
fix
* fix
* fix
fix
fix
* fix gather output
* fix
* fix
* fix
fix resize embedding
fix resize embedding
* fix resize embedding
fix
* revert
* revert
* revert
* padding vocab
* padding vocab
* fix
* fix
* fix
* test ci
* fix
fix
fix
fix
* fix
fix
* fix
* fix
* Update hybrid_parallel_plugin.py
fix
fix
fix
* fix
fix
* fix
fix
* fix
* resolve super init
resolve super init
resolve super init
resolve super init
* resolve comments
* fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* vocab checkpointio
* padding vocab_size when using pipeline parallelism
padding vocab_size when using pipeline parallelism
fix
fix
* fix
fix
fix
* fix
* fix
fix resize embedding
fix resize embedding
* fix resize embedding
fix
* revert
* revert
* padding vocab
* fix
* fix
fix
* fix
fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix ci
* fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix
* cherry-pick
* revert moe modify
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix
fix
fix
fix
fix
fix
fix
fix
* resolve comments
resolve comments
resolve comments
resolve comments
resolve comments
* ptensor
ptensor
resolve comments
fix
fix
fix
fix
fix
resolve comments
resolve comments
resolve comments
resolve comments
resolve comments
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix rebase
* fix rebase
---------
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: digger yu <digger-yu@outlook.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: Yuanheng Zhao <54058983+yuanheng-zhao@users.noreply.github.com>
Co-authored-by: Wenhao Chen <cwher@outlook.com>
Co-authored-by: Rocky Duan <dementrock@users.noreply.github.com>
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: Edenzzzz <wenxuan.tan@wisc.edu>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Insu Jang <insujang@umich.edu>
Co-authored-by: YeAnbang <44796419+YeAnbang@users.noreply.github.com>
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
Co-authored-by: linsj20 <linsj20@mails.tsinghua.edu.cn>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-18 16:10:18 +08:00
Hongxin Liu
3788fefc7a
[zero] support multiple (partial) backward passes ( #5596 )
...
* [zero] support multiple (partial) backward passes
* [misc] update requirements
2024-04-16 17:49:21 +08:00
yuehuayingxueluo
56b222eff8
[inference/model]Adapted to the baichuan2-7B model ( #5591 )
...
* Adapted to the baichuan2-7B model
* modified according to the review comments.
* Modified the method of obtaining random weights.
* modified according to the review comments.
* change mlp layer 'NOTE'
2024-04-15 16:53:02 +08:00
Yuanheng
f8598e3ec5
[Fix] Llama Modeling Control with Spec-Dec ( #5580 )
...
- fix ref before assignment
- fall back to use triton kernels when using spec-dec
2024-04-10 18:19:44 +08:00
Yuanheng Zhao
e60d430cf5
[Fix] resolve conflicts of rebasing feat/speculative-decoding ( #5557 )
...
- resolve conflicts of rebasing feat/speculative-decoding
2024-04-10 18:13:49 +08:00
Yuanheng Zhao
e1acb58423
[doc] Add inference/speculative-decoding README ( #5552 )
...
* add README for spec-dec
* update roadmap
2024-04-10 11:07:52 +08:00
Yuanheng Zhao
d85d91435a
[Inference/SpecDec] Support GLIDE Drafter Model ( #5455 )
...
* add glide-llama policy and modeling
* update glide modeling, compatible with transformers 4.36.2
* revise glide llama modeling/usage
* fix issues of glimpsing large kv
* revise the way re-loading params for glide drafter
* fix drafter and engine tests
* enable convert to glide strict=False
* revise glide llama modeling
* revise vicuna prompt template
* revise drafter and tests
* apply usage of glide model in engine
2024-04-10 11:07:52 +08:00
Yuanheng Zhao
912e24b2aa
[SpecDec] Fix inputs for speculation and revise past KV trimming ( #5449 )
...
* fix drafter pastkv and usage of batch bucket
2024-04-10 11:07:52 +08:00
Yuanheng Zhao
a37f82629d
[Inference/SpecDec] Add Speculative Decoding Implementation ( #5423 )
...
* fix flash decoding mask during verification
* add spec-dec
* add test for spec-dec
* revise drafter init
* remove drafter sampling
* retire past kv in drafter
* (trivial) rename attrs
* (trivial) rename arg
* revise how we enable/disable spec-dec
2024-04-10 11:07:52 +08:00
Yuanheng Zhao
5a9b05f7b2
[Inference/SpecDec] Add Basic Drafter Model Container ( #5405 )
...
* [Infer/Fix] Fix Dependency in test - RMSNorm kernel (#5399 )
fix dependency in pytest
* add drafter model container (basic ver)
2024-04-10 11:07:51 +08:00
Yuanheng Zhao
d63c469f45
[Infer] Revise and Adapt Triton Kernels for Spec-Dec ( #5401 )
...
* [Infer/Fix] Fix Dependency in test - RMSNorm kernel (#5399 )
fix dependency in pytest
* resolve conflicts for revising flash-attn
* adapt kv cache copy kernel for spec-dec
* fix seqlen-n kvcache copy kernel/tests
* test kvcache copy - use torch.equal
* add assertions
* (trivial) comment out
2024-04-10 11:07:51 +08:00
Yuanheng
7ca1d1c545
remove outdated triton test
2024-04-08 17:00:55 +08:00
Yuanheng
ce9401ad52
remove unused triton kernels
2024-04-08 16:25:12 +08:00
Yuanheng
ed5ebd1735
[Fix] resolve conflicts of merging main
2024-04-08 16:21:47 +08:00
Hongxin Liu
641b1ee71a
[devops] remove post commit ci ( #5566 )
...
* [devops] remove post commit ci
* [misc] run pre-commit on all files
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-08 15:09:40 +08:00
Edenzzzz
15055f9a36
[hotfix] quick fixes to make legacy tutorials runnable ( #5559 )
...
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-04-07 12:06:27 +08:00
Zhongkai Zhao
8e412a548e
[shardformer] Sequence Parallelism Optimization ( #5533 )
...
* sequence parallel optimization
* validate sequence parallel in llama (code to be polished)
* shardformer api writing
* integrate sequence parallel in ShardFormer
* fix pp bugs and sp bugs for LlaMa model
* integrating ring-based sequence parallelism into ShardFormer
* [sequence parallelism]: Add fused megatron function
* integrating ring-based sequence parallelism into ShardFormer
---------
Co-authored-by: linsj20 <linsj20@mails.tsinghua.edu.cn>
* fix bugs when using sp and flashattention together
* fix operation function name
* support flash attention for ulysses-style sp
* clarify sp process group
* fix compatibility bugs in moe plugin
* fix fused linear bugs
* fix linear layer test
* support gpt model all-to-all sp
* modify shard data dimension (meant to be dim=-1)
* support megatron-style sp and distributed attn for llama model
* [shardformer] add megatron sp to llama
* support llama7B 128k with distributed attention
* [shardformer] robustness enhancement
* add block attn
* sp mode 1: keep input as a complete sequence
* fix sp compatibility
* finish sp mode 3 support for gpt
* using all_to_all_single when batch size is 1
* support mode 2 sp in gpt2 (#5 )
* [shardformer] add megatron sp to llama
* support llama7B 128k with distributed attention
* [shardformer] robustness enhancement
* add block attn
* sp mode 1: keep input as a complete sequence
* fix sp compatibility
* refactor ring implementation
* support mode 2 sp in gpt2
* polish code
* enable distributed attn mask when using sp mode 2 and 3 in llama
* automatically enable flash attn when using sp mode 2 and 3 in llama
* inplace attn mask
* add zero2 support for sequence parallel
* polish code
* fix bugs
* fix gemini checkpoint io
* loose tensor checking atol and rtol
* add comment
* fix llama layernorm grad
* fix zero grad
* fix zero grad
* fix conflict
* update split and gather auto grad func
* sequence parallel: inside text split (#6 )
* polish code (part 1)
* polish code (part 2)
* polish code (part 2.5)
* polish code (part 3)
* sequence parallel: inside text split
* miscellaneous minor fixes
* polish code
* fix ulysses style ZeRO
* sequence parallel: inside text split
* miscellaneous minor fixes
* disaggregate sp group and dp group for sp
* fix llama and gpt sp
* polish code
* move ulysses grad sync to ddp (#9 )
* remove zero_stage and unbind the grad sync for alltoall sp
* add 2d group creation test
* move ulysses grad sync to ddp
* add 2d group creation test
* remove useless code
* change shard config not to enable sp when enable_all_optimizations
* add sp warnings for several model
* remove useless code
---------
Co-authored-by: linsj20 <linsj20@mails.tsinghua.edu.cn>
2024-04-03 17:15:47 +08:00
Edenzzzz
7e0ec5a85c
fix incorrect sharding without zero ( #5545 )
...
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-04-02 20:11:18 +08:00
Yuanheng Zhao
4bb5d8923a
[Fix/Inference] Remove unused and non-functional functions ( #5543 )
...
* [fix] remove unused func
* rm non-functional partial
2024-04-02 14:16:59 +08:00
yuehuayingxueluo
04aca9e55b
[Inference/Kernel]Add get_cos_and_sin Kernel ( #5528 )
...
* Add get_cos_and_sin kernel
* fix code comments
* fix code typos
* merge common codes of get_cos_and_sin kernel.
* Fixed a typo
* Changed 'asset allclose' to 'assert equal'.
2024-04-01 13:47:14 +08:00
Wenhao Chen
e614aa34f3
[shardformer, pipeline] add `gradient_checkpointing_ratio` and heterogenous shard policy for llama ( #5508 )
...
* feat: add `GradientCheckpointConfig` and `PipelineGradientCheckpointConfig`
* feat: apply `GradientCheckpointConfig` to policy and llama_forward
* feat: move `distribute_layer` and `get_stage_index` to PipelineStageManager
* fix: add optional args for `distribute_layer` and `get_stage_index`
* fix: fix changed API calls
* test: update llama tests
* style: polish `GradientCheckpointConfig`
* fix: fix pipeline utils tests
2024-04-01 11:34:58 +08:00
Insu Jang
00525f7772
[shardformer] fix pipeline forward error if custom layer distribution is used ( #5189 )
...
* Use self.[distribute_layers|get_stage_index] to exploit custom layer distribution
* Change static methods for t5 layer distribution to member functions
* Change static methods for whisper layer distribution to member functions
* Replace whisper policy usage with self one
* Fix test case to use non-static layer distribution methods
* fix: fix typo
---------
Co-authored-by: Wenhao Chen <cwher@outlook.com>
2024-03-27 13:57:00 +08:00
github-actions[bot]
e6707a6e8d
[format] applied code formatting on changed files in pull request 5510 ( #5517 )
...
Co-authored-by: github-actions <github-actions@github.com>
2024-03-27 11:21:03 +08:00
Hongxin Liu
19e1a5cf16
[shardformer] update colo attention to support custom mask ( #5510 )
...
* [feature] refactor colo attention (#5462 )
* [extension] update api
* [feature] add colo attention
* [feature] update sdpa
* [feature] update npu attention
* [feature] update flash-attn
* [test] add flash attn test
* [test] update flash attn test
* [shardformer] update modeling to fit colo attention (#5465 )
* [misc] refactor folder structure
* [shardformer] update llama flash-attn
* [shardformer] fix llama policy
* [devops] update tensornvme install
* [test] update llama test
* [shardformer] update colo attn kernel dispatch
* [shardformer] update blip2
* [shardformer] update chatglm
* [shardformer] update gpt2
* [shardformer] update gptj
* [shardformer] update opt
* [shardformer] update vit
* [shardformer] update colo attention mask prep
* [shardformer] update whisper
* [test] fix shardformer tests (#5514 )
* [test] fix shardformer tests
* [test] fix shardformer tests
2024-03-27 11:19:32 +08:00
Edenzzzz
9a3321e9f4
Merge pull request #5515 from Edenzzzz/fix_layout_convert
...
Fix layout convertor caching
2024-03-26 19:51:02 +08:00
Edenzzzz
61da3fbc52
fixed layout converter caching and updated tester
2024-03-26 17:22:27 +08:00
傅剑寒
e6496dd371
[Inference] Optimize request handler of llama ( #5512 )
...
* optimize request_handler
* fix ways of writing
2024-03-26 16:37:14 +08:00
Rocky Duan
cbe34c557c
Fix ColoTensorSpec for py11 ( #5440 )
2024-03-26 15:56:49 +08:00
flybird11111
0688d92e2d
[shardformer]Fix lm parallel. ( #5480 )
...
* fix
* padding vocab_size when using pipeline parallelism
padding vocab_size when using pipeline parallelism
fix
fix
* fix
* fix
fix
fix
* fix gather output
* fix
* fix
* fix
fix resize embedding
fix resize embedding
* fix resize embedding
fix
* revert
* revert
* revert
* fix lm forward distribution
* fix
* test ci
* fix
2024-03-25 17:21:51 +08:00
Runyu Lu
6251d68dc9
[fix] PR #5354 ( #5501 )
...
* [fix]
* [fix]
* Update config.py docstring
* [fix] docstring align
* [fix] docstring align
* [fix] docstring align
2024-03-25 15:24:17 +08:00
Runyu Lu
68e9396bc0
[fix] merge conflicts
2024-03-25 14:48:28 +08:00
yuehuayingxueluo
87079cffe8
[Inference]Support FP16/BF16 Flash Attention 2 And Add high_precision Flag To Rotary Embedding ( #5461 )
...
* Support FP16/BF16 Flash Attention 2
* fix bugs in test_kv_cache_memcpy.py
* add context_kv_cache_memcpy_kernel.cu
* rm typename MT
* add tail process
* add high_precision
* add high_precision to config.py
* rm unused code
* change the comment for the high_precision parameter
* update test_rotary_embdding_unpad.py
* fix vector_copy_utils.h
* add comment for self.high_precision when using float32
2024-03-25 13:40:34 +08:00
Wenhao Chen
bb0a668fee
[hotfix] set return_outputs=False in examples and polish code ( #5404 )
...
* fix: simplify merge_batch
* fix: use return_outputs=False to eliminate extra memory consumption
* feat: add return_outputs warning
* style: remove `return_outputs=False` as it is the default value
2024-03-25 12:31:09 +08:00
Runyu Lu
ff4998c6f3
[fix] remove unused comment
2024-03-25 12:00:57 +08:00
Runyu Lu
5b017d6324
[fix]
2024-03-21 15:55:25 +08:00
Runyu Lu
4eafe0c814
[fix] unused option
2024-03-21 11:28:42 +08:00
Runyu Lu
aabc9fb6aa
[feat] add use_cuda_kernel option
2024-03-19 13:24:25 +08:00
flybird11111
5e16bf7980
[shardformer] fix gathering output when using tensor parallelism ( #5431 )
...
* fix
* padding vocab_size when using pipeline parallelism
padding vocab_size when using pipeline parallelism
fix
fix
* fix
* fix
fix
fix
* fix gather output
* fix
* fix
* fix
fix resize embedding
fix resize embedding
* fix resize embedding
fix
* revert
* revert
* revert
2024-03-18 15:55:11 +08:00
Runyu Lu
6e30248683
[fix] tmp for test
2024-03-14 16:13:00 +08:00
Runyu Lu
d02e257abd
Merge branch 'feature/colossal-infer' into colossal-infer-cuda-graph
2024-03-14 10:37:05 +08:00
Runyu Lu
ae24b4f025
diverse tests
2024-03-14 10:35:08 +08:00
Runyu Lu
1821a6dab0
[fix] pytest and fix dyn grid bug
2024-03-13 17:28:32 +08:00
yuehuayingxueluo
f366a5ea1f
[Inference/kernel]Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel ( #5418 )
...
* add rotary embedding kernel
* add rotary_embedding_kernel
* add fused rotary_emb and kvcache memcopy
* add fused_rotary_emb_and_cache_kernel.cu
* add fused_rotary_emb_and_memcopy
* fix bugs in fused_rotary_emb_and_cache_kernel.cu
* fix ci bugs
* use vec memcopy and opt the global memory access
* fix code style
* fix test_rotary_embdding_unpad.py
* codes revised based on the review comments
* fix bugs about include path
* rm inline
2024-03-13 17:20:03 +08:00
Hongxin Liu
f2e8b9ef9f
[devops] fix compatibility ( #5444 )
...
* [devops] fix compatibility
* [hotfix] update compatibility test on pr
* [devops] fix compatibility
* [devops] record duration during comp test
* [test] decrease test duration
* fix falcon
2024-03-13 15:24:13 +08:00
digger yu
385e85afd4
[hotfix] fix typo s/keywrods/keywords etc. ( #5429 )
2024-03-12 11:25:16 +08:00
Runyu Lu
633e95b301
[doc] add doc
2024-03-11 10:56:51 +08:00
Runyu Lu
9dec66fad6
[fix] multi graphs capture error
2024-03-11 10:51:16 +08:00
Runyu Lu
b2c0d9ff2b
[fix] multi graphs capture error
2024-03-11 10:49:31 +08:00
Steve Luo
f7aecc0c6b
feat rmsnorm cuda kernel and add unittest, benchmark script ( #5417 )
2024-03-08 16:21:12 +08:00
Runyu Lu
cefaeb5fdd
[feat] cuda graph support and refactor non-functional api
2024-03-08 14:19:35 +08:00
digger yu
5e1c93d732
[hotfix] fix typo change MoECheckpintIO to MoECheckpointIO ( #5335 )
...
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
2024-03-05 21:52:30 +08:00
digger yu
049121d19d
[hotfix] fix typo change enabel to enable under colossalai/shardformer/ ( #5317 )
2024-03-05 21:48:46 +08:00
digger yu
16c96d4d8c
[hotfix] fix typo change _descrption to _description ( #5331 )
2024-03-05 21:47:48 +08:00
Hongxin Liu
070df689e6
[devops] fix extension building ( #5427 )
2024-03-05 15:35:54 +08:00
flybird11111
29695cf70c
[example]add gpt2 benchmark example script. ( #5295 )
...
* benchmark gpt2
* fix
fix
fix
fix
* [doc] fix typo in Colossal-LLaMA-2/README.md (#5247 )
* [workflow] fixed build CI (#5240 )
* [workflow] fixed build CI
* polish
* polish
* polish
* polish
* polish
* [ci] fixed booster test (#5251 )
* [ci] fixed booster test
* [ci] fixed booster test
* [ci] fixed booster test
* [ci] fixed ddp test (#5254 )
* [ci] fixed ddp test
* polish
* fix typo in applications/ColossalEval/README.md (#5250 )
* [ci] fix shardformer tests. (#5255 )
* fix ci
fix
* revert: revert p2p
* feat: add enable_metadata_cache option
* revert: enable t5 tests
---------
Co-authored-by: Wenhao Chen <cwher@outlook.com>
* [doc] fix doc typo (#5256 )
* [doc] fix annotation display
* [doc] fix llama2 doc
* [hotfix]: add pp sanity check and fix mbs arg (#5268 )
* fix: fix misleading mbs arg
* feat: add pp sanity check
* fix: fix 1f1b sanity check
* [workflow] fixed incomplete bash command (#5272 )
* [workflow] fixed oom tests (#5275 )
* [workflow] fixed oom tests
* polish
* polish
* polish
* [ci] fix test_hybrid_parallel_plugin_checkpoint_io.py (#5276 )
* fix ci
fix
* fix test
* revert: revert p2p
* feat: add enable_metadata_cache option
* revert: enable t5 tests
* fix
---------
Co-authored-by: Wenhao Chen <cwher@outlook.com>
* [shardformer] hybridparallelplugin support gradients accumulation. (#5246 )
* support gradients acc
fix
fix
fix
fix
fix
fix
fix
fix
fix
fix
fix
fix
fix
* fix
fix
* fix
fix
fix
* [hotfix] Fix ShardFormer test execution path when using sequence parallelism (#5230 )
* fix auto loading gpt2 tokenizer (#5279 )
* [doc] add llama2-13B display (#5285 )
* Update README.md
* fix 13b typo
---------
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
* fix llama pretrain (#5287 )
* fix
* fix
* fix
fix
* fix
fix
fix
* fix
fix
* benchmark gpt2
* fix
fix
fix
fix
* [workflow] fixed build CI (#5240 )
* [workflow] fixed build CI
* polish
* polish
* polish
* polish
* polish
* [ci] fixed booster test (#5251 )
* [ci] fixed booster test
* [ci] fixed booster test
* [ci] fixed booster test
* fix
fix
* fix
fix
fix
* fix
* fix
fix
fix
fix
fix
* fix
* Update shardformer.py
---------
Co-authored-by: digger yu <digger-yu@outlook.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: Wenhao Chen <cwher@outlook.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
Co-authored-by: Michelle <97082656+MichelleMa8@users.noreply.github.com>
Co-authored-by: Desperado-Jia <502205863@qq.com>
2024-03-04 16:18:13 +08:00
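The squashed commit above adds a GPT-2 benchmark example. For orientation, a bare single-GPU throughput measurement (no ColossalAI plugin; model and batch settings are illustrative) looks roughly like:

```python
import time
import torch
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config()  # 124M-parameter GPT-2 small
model = GPT2LMHeadModel(config).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

batch_size, seq_len, steps = 4, 512, 10
input_ids = torch.randint(0, config.vocab_size, (batch_size, seq_len), device="cuda")

torch.cuda.synchronize()
start = time.time()
for _ in range(steps):
    loss = model(input_ids=input_ids, labels=input_ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
torch.cuda.synchronize()
elapsed = time.time() - start
print(f"throughput: {steps * batch_size * seq_len / elapsed:.0f} tokens/s")
```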
FrankLeeeee
0310b76e9d
Merge branch 'main' into sync/main
2024-03-04 10:09:36 +08:00
yuehuayingxueluo
600881a8ea
[Inference]Add CUDA KVCache Kernel ( #5406 )
...
* add cuda KVCache kernel
* annotation benchmark_kvcache_copy
* add use cuda
* fix import path
* move benchmark scripts to example/
* rm benchmark codes in test_kv_cache_memcpy.py
* rm redundancy codes
* rm redundancy codes
* pr was modified according to the review
2024-02-28 14:36:50 +08:00
flybird11111
0a25e16e46
[shardformer]gather llama logits ( #5398 )
...
* gather llama logits
* fix
2024-02-27 22:44:07 +08:00
QinLuo
bf34c6fef6
[fsdp] impl save/load shard model/optimizer ( #5357 )
2024-02-27 13:51:14 +08:00
yuehuayingxueluo
bc1da87366
[Fix/Inference] Fix format of input prompts and input model in inference engine ( #5395 )
...
* Fix bugs in inference_engine
* fix bugs in engine.py
* rm CUDA_VISIBLE_DEVICES
* add request_ids in generate
* fix bug in engine.py
* add logger.debug for BatchBucket
2024-02-23 10:51:35 +08:00
yuehuayingxueluo
2a718c8be8
Optimized the execution interval between CUDA kernels caused by view and memcopy ( #5390 )
...
* opt_view_and_memcopy
* fix bugs in ci
* fix ci bugs
* update benchmark scripts
* fix ci bugs
2024-02-21 13:23:57 +08:00
Jianghai
730103819d
[Inference]Fused kv copy into rotary calculation ( #5383 )
...
* revise rotary embedding
* remove useless print
* adapt
* fix
* add
* fix
* modeling
* fix
* fix
* fix
* fused kv copy
* fused copy
* colossalai/kernel/triton/no_pad_rotary_embedding.py
* del padding llama
* del
2024-02-21 11:31:48 +08:00
Stephan Kölker
5d380a1a21
[hotfix] Fix wrong import in meta_registry ( #5392 )
2024-02-20 19:24:43 +08:00
Yuanheng Zhao
b21aac5bae
[Inference] Optimize and Refactor Inference Batching/Scheduling ( #5367 )
...
* add kvcache manager funcs for batching
* add batch bucket for batching
* revise RunningList struct in handler
* add kvcache/batch funcs for compatibility
* use new batching methods
* fix indexing bugs
* revise abort logic
* use cpu seq lengths/block tables
* rm unused attr in Sequence
* fix type conversion/default arg
* add and revise pytests
* revise pytests, rm unused tests
* rm unused statements
* fix pop finished indexing issue
* fix: use index in batch when retrieving inputs/update seqs
* use dict instead of odict in batch struct
* arg type hinting
* fix make compress
* refine comments
* fix: pop_n_seqs to pop the first n seqs
* add check in request handler
* remove redundant conversion
* fix test for request handler
* fix pop method in batch bucket
* fix prefill adding
2024-02-19 17:18:20 +08:00
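The refactor above centres on a batch-bucket container: a fixed-capacity batch whose per-sequence lengths and block tables are kept on the CPU for cheap bookkeeping. A heavily simplified, hypothetical sketch of that idea (not the project's actual `BatchBucket` API):

```python
import torch

class MiniBatchBucket:
    """Illustrative fixed-capacity batch container for decoding."""

    def __init__(self, max_batch_size: int, max_blocks_per_seq: int):
        self.seq_ids = [None] * max_batch_size
        self.seq_lengths = torch.zeros(max_batch_size, dtype=torch.int32)
        self.block_tables = torch.full(
            (max_batch_size, max_blocks_per_seq), -1, dtype=torch.int32
        )

    def add_seq(self, seq_id: int, prompt_len: int, blocks: list) -> int:
        slot = self.seq_ids.index(None)  # first free row; caller checks capacity
        self.seq_ids[slot] = seq_id
        self.seq_lengths[slot] = prompt_len
        self.block_tables[slot, : len(blocks)] = torch.tensor(blocks, dtype=torch.int32)
        return slot

    def pop_finished(self, slot: int) -> None:
        self.seq_ids[slot] = None
        self.seq_lengths[slot] = 0
        self.block_tables[slot].fill_(-1)
```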
Hongxin Liu
7303801854
[llama] fix training and inference scripts ( #5384 )
...
* [llama] refactor inference example to fit sft
* [llama] fix training script to fit gemini
* [llama] fix inference script
2024-02-19 16:41:04 +08:00
Frank Lee
efef43b53c
Merge pull request #5372 from hpcaitech/exp/mixtral
2024-02-08 16:30:05 +08:00
yuehuayingxueluo
8c69debdc7
[Inference]Support vllm testing in benchmark scripts ( #5379 )
...
* add vllm benchmark scripts
* fix code style
* update run_benchmark.sh
* fix code style
2024-02-08 15:27:26 +08:00
Frank Lee
4c03347fc7
Merge pull request #5377 from hpcaitech/example/llama-npu
...
[llama] support npu for Colossal-LLaMA-2
2024-02-08 14:12:11 +08:00
Frank Lee
9afa52061f
[inference] refactored config ( #5376 )
2024-02-08 14:04:14 +08:00
ver217
06db94fbc9
[moe] fix tests
2024-02-08 12:46:37 +08:00
Hongxin Liu
da39d21b71
[moe] support mixtral ( #5309 )
...
* [moe] add mixtral block for single expert
* [moe] mixtral block fwd support uneven ep
* [moe] mixtral block bwd support uneven ep
* [moe] add mixtral moe layer
* [moe] simplify replace
* [moe] support save sharded mixtral
* [moe] support load sharded mixtral
* [moe] support save sharded optim
* [moe] integrate moe manager into plugin
* [moe] fix optimizer load
* [moe] fix mixtral layer
2024-02-07 19:21:02 +08:00
Hongxin Liu
c904d2ae99
[moe] update capacity computing ( #5253 )
...
* [moe] top2 allow uneven input
* [moe] update capacity computing
* [moe] remove debug info
* [moe] update capacity computing
* [moe] update capacity computing
2024-02-07 19:21:02 +08:00
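Capacity computing bounds how many tokens each expert may accept under top-k routing before the remainder is dropped or rerouted. A common formulation, shown as a sketch (the exact rounding and minimum used by the commit may differ):

```python
import math

def expert_capacity(num_tokens: int, num_experts: int, top_k: int = 2,
                    capacity_factor: float = 1.25, min_capacity: int = 4) -> int:
    """Tokens per expert = ceil(num_tokens * top_k / num_experts * capacity_factor)."""
    cap = math.ceil(num_tokens * top_k / num_experts * capacity_factor)
    return max(cap, min_capacity)

# e.g. 4096 tokens routed top-2 over 8 experts -> 1280 slots per expert
print(expert_capacity(4096, 8))
```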
Xuanlei Zhao
7d8e0338a4
[moe] init mixtral impl
2024-02-07 19:21:02 +08:00
Jianghai
1f8c7e7046
[Inference] User Experience: update the logic of default tokenizer and generation config. ( #5337 )
...
* add
* fix
* fix
* pause
* fix
* fix pytest
* align
* fix
* license
* fix
* fix
* fix readme
* fix some bugs
* remove tokenizer config
2024-02-07 17:55:48 +08:00
yuehuayingxueluo
6fb4bcbb24
[Inference/opt] Fused KVCache Memcopy ( #5374 )
...
* fused kv memcopy
* add TODO in test_kvcache_copy.py
2024-02-07 17:15:42 +08:00
Frank Lee
58740b5f68
[inference] added inference template ( #5375 )
2024-02-07 17:11:43 +08:00
Frank Lee
8106ede07f
Revert "[Inference] Adapt to Fused rotary ( #5348 )" ( #5373 )
...
This reverts commit 9f4ab2eb92.
2024-02-07 14:27:04 +08:00
Jianghai
9f4ab2eb92
[Inference] Adapt to Fused rotary ( #5348 )
...
* revise rotary embedding
* remove useless print
* adapt
* fix
* add
* fix
* modeling
* fix
* fix
* fix
2024-02-07 11:36:04 +08:00
yuehuayingxueluo
35382a7fbf
[Inference]Fused the gate and up proj in mlp, and optimized the autograd process. ( #5365 )
...
* fused the gate and up proj in mlp
* fix code styles
* opt auto_grad
* rollback test_inference_engine.py
* modifications based on the review feedback.
* fix bugs in flash attn
* Change reshape to view
* fix test_rmsnorm_triton.py
2024-02-06 19:38:25 +08:00
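The fusion above merges the two input projections of a SwiGLU/LLaMA-style MLP into a single GEMM, so the gate and up projections run as one kernel launch. A minimal sketch of the idea (module and attribute names illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedSwiGLUMLP(nn.Module):
    """LLaMA-style MLP with gate_proj and up_proj fused into one linear layer."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        # One GEMM produces both the gate and the up projection.
        self.gate_up_proj = nn.Linear(hidden_size, 2 * intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate, up = self.gate_up_proj(x).chunk(2, dim=-1)
        return self.down_proj(F.silu(gate) * up)
```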
Yuanheng Zhao
1dedb57747
[Fix/Infer] Remove unused deps and revise requirements ( #5341 )
...
* remove flash-attn dep
* rm padding llama
* revise infer requirements
* move requirements out of module
2024-02-06 17:27:45 +08:00
Hongxin Liu
c53ddda88f
[lr-scheduler] fix load state dict and add test ( #5369 )
2024-02-06 14:23:32 +08:00
Hongxin Liu
eb4f2d90f9
[llama] polish training script and fix optim ckpt ( #5368 )
2024-02-06 11:52:17 +08:00
Hongxin Liu
6c0fa7b9a8
[llama] fix dataloader for hybrid parallel ( #5358 )
...
* [plugin] refactor prepare dataloader
* [plugin] update train script
2024-02-05 15:14:56 +08:00
Hongxin Liu
2dd01e3a14
[gemini] fix param op hook when output is tuple ( #5355 )
...
* [gemini] fix param op hook when output is tuple
* [gemini] fix param op hook
2024-02-04 11:58:26 +08:00
yuehuayingxueluo
631862f339
[Inference]Optimize generation process of inference engine ( #5356 )
...
* opt inference engine
* fix run_benchmark.sh
* fix generate in engine.py
* rollback test_inference_engine.py
2024-02-02 15:38:21 +08:00
yuehuayingxueluo
21ad4a27f9
[Inference/opt]Optimize the mid tensor of RMS Norm ( #5350 )
...
* opt rms_norm
* fix bugs in rms_layernorm
2024-02-02 15:06:01 +08:00
Wenhao Chen
1c790c0877
[fix] remove unnecessary dp_size assert ( #5351 )
...
* fix: remove unnecessary assert
* test: add more 3d plugin tests
* fix: add warning
2024-02-02 14:40:20 +08:00
Frank Lee
027aa1043f
[doc] updated inference readme ( #5343 )
2024-02-02 14:31:10 +08:00