Yuanheng Zhao
f9afe0addd
[hotfix] Fix KV Heads Number Assignment in KVCacheManager ( #5695 )
...
- Fix the KV heads number assignment in KVCacheManager, as well as the method of accessing it
7 months ago
傅剑寒
1ace1065e6
[Inference/Feat] Add quant kvcache support for decode_kv_cache_memcpy ( #5686 )
7 months ago
Yuanheng Zhao
db7b3051f4
[Sync] Update from main to feature/colossal-infer (Merge pull request #5685 )
...
[Sync] Update from main to feature/colossal-infer
- Merge pull request #5685 from yuanheng-zhao/inference/merge/main
7 months ago
Steve Luo
725fbd2ed0
[Inference] Remove unnecessary float4_ and rename float8_ to float8 ( #5679 )
7 months ago
Yuanheng Zhao
8754abae24
[Fix] Fix & Update Inference Tests (compatibility w/ main)
7 months ago
Yuanheng Zhao
56ed09aba5
[sync] resolve conflicts of merging main
7 months ago
Yuanheng Zhao
537a3cbc4d
[kernel] Support New KCache Layout - Triton Kernel ( #5677 )
...
* kvmemcpy triton for new kcache layout
* revise tests for new kcache layout
* naive triton flash decoding - new kcache layout
* rotary triton kernel - new kcache layout
* remove redundancy - triton decoding
* remove redundancy - triton kvcache copy
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
傅剑寒
9df016fc45
[Inference] Fix quant bits order ( #5681 )
7 months ago
yuehuayingxueluo
f79963199c
[inference] Add alibi to flash attn function ( #5678 )
...
* add alibi to flash attn function
* rm redundant modifications
7 months ago
傅剑寒
ef8e4ffe31
[Inference/Feat] Add kvcache quant support for fused_rotary_embedding_cache_copy ( #5680 )
7 months ago
Steve Luo
5cd75ce4c7
[Inference/Kernel] refactor kvcache manager and rotary_embedding and kvcache_memcpy operator ( #5663 )
...
* refactor kvcache manager and rotary_embedding and kvcache_memcpy operator
* refactor decode_kv_cache_memcpy
* enable alibi in pagedattention
7 months ago
yuehuayingxueluo
5f00002e43
[Inference] Adapt Baichuan2-13B TP ( #5659 )
...
* adapt to baichuan2 13B
* add baichuan2 13B TP
* update baichuan tp logic
* rm unused code
* Fix TP logic
* fix alibi slopes tp logic
* rm nn.Module
* Polished the code.
* change BAICHUAN_MODEL_NAME_OR_PATH
* Modified the logic for loading Baichuan weights.
* fix typos
7 months ago
傅剑寒
808ee6e4ad
[Inference/Feat] Feat quant kvcache step2 ( #5674 )
7 months ago
Wang Binluo
d3f34ee8cc
[Shardformer] add assert for num of attention heads divisible by tp_size ( #5670 )
...
* add assert for num of attention heads divisible by tp_size
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
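The assertion this commit adds guards a simple invariant; a standalone sketch of the check (function name hypothetical, not the actual shardformer API):

```python
def check_tp_divisibility(num_attention_heads: int, tp_size: int) -> None:
    # Hypothetical standalone version of the check: each tensor-parallel
    # rank must own a whole number of attention heads, otherwise the
    # Q/K/V projections cannot be split evenly across ranks.
    assert num_attention_heads % tp_size == 0, (
        f"num_attention_heads ({num_attention_heads}) must be divisible "
        f"by tp_size ({tp_size})"
    )

check_tp_divisibility(32, 4)  # 8 heads per rank, passes
```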
flybird11111
6af6d6fc9f
[shardformer] support bias_gelu_jit_fused for models ( #5647 )
...
* support gelu_bias_fused for gpt2
* fix
7 months ago
Hongxin Liu
7f8b16635b
[misc] refactor launch API and tensor constructor ( #5666 )
...
* [misc] remove config arg from initialize
* [misc] remove old tensor constructor
* [plugin] add npu support for ddp
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [devops] fix doc test ci
* [test] fix test launch
* [doc] update launch doc
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
linsj20
91fa553775
[Feature] qlora support ( #5586 )
...
* [feature] qlora support
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* qlora follow commit
* migrate quantization folder to colossalai/
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* minor fixes
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
flybird11111
8954a0c2e2
[LowLevelZero] low level zero support lora ( #5153 )
...
* low level zero support lora
* add checkpoint test
* fix
* test ci
* update low_level_zero_plugin.py (combination of 3 commits)
* fix naming
7 months ago
Baizhou Zhang
14b0d4c7e5
[lora] add lora APIs for booster, support lora for TorchDDP ( #4981 )
...
* add apis and peft requirement
* add license and implement apis
* add checkpointio apis
* add torchddp fwd_bwd test
* add support_lora methods
* add checkpointio test and debug
* delete unneeded codes
* remove peft from LICENSE
* add concrete methods for enable_lora
* simplify enable_lora api
* fix requirements
7 months ago
Hongxin Liu
c1594e4bad
[devops] fix release docker ci ( #5665 )
7 months ago
Hongxin Liu
4cfbf30a5e
[release] update version ( #5654 )
7 months ago
Tong Li
68ec99e946
[hotfix] add soft link to support required files ( #5661 )
7 months ago
傅剑寒
8ccb6714e7
[Inference/Feat] Add kvcache quantization support for FlashDecoding ( #5656 )
7 months ago
Yuanheng Zhao
5be590b99e
[kernel] Support new KCache Layout - Context Attention Triton Kernel ( #5658 )
...
* add context attn triton kernel - new kcache layout
* add benchmark triton
* tiny revise
* trivial - code style, comment
7 months ago
binmakeswell
b8a711aa2d
[news] llama3 and open-sora v1.1 ( #5655 )
...
* [news] llama3 and open-sora v1.1
7 months ago
Hongxin Liu
2082852f3f
[lazyinit] skip whisper test ( #5653 )
7 months ago
flybird11111
8b7d535977
fix gptj ( #5652 )
7 months ago
yuehuayingxueluo
3c91e3f176
[Inference] Adapt to baichuan2 13B ( #5614 )
...
* adapt to baichuan2 13B
* change BAICHUAN_MODEL_NAME_OR_PATH
* fix test_decoding_attn.py
* Modifications based on review comments.
* change BAICHUAN_MODEL_NAME_OR_PATH
* mv attn mask processes to test flash decoding
* mv get_alibi_slopes baichuan modeling
* fix bugs in test_baichuan.py
7 months ago
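The `get_alibi_slopes` helper moved into the Baichuan modeling code typically follows the standard ALiBi slope schedule; a hedged sketch under that assumption (the actual implementation may differ in details):

```python
import math

def get_alibi_slopes(num_heads: int) -> list:
    # Standard ALiBi schedule: for a power-of-two head count n, slopes
    # form a geometric sequence starting at 2^(-8/n).
    closest_pow2 = 2 ** math.floor(math.log2(num_heads))
    base = 2.0 ** (-(2.0 ** -(math.log2(closest_pow2) - 3)))
    slopes = [base ** (i + 1) for i in range(closest_pow2)]
    if closest_pow2 != num_heads:
        # Non-power-of-two head counts interleave slopes from the next
        # power of two, taking every other value.
        extra_base = 2.0 ** (-(2.0 ** -(math.log2(2 * closest_pow2) - 3)))
        slopes += [extra_base ** (2 * i + 1)
                   for i in range(num_heads - closest_pow2)]
    return slopes

print(get_alibi_slopes(8))  # geometric sequence 1/2, 1/4, ..., 1/256
```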
Yuanheng Zhao
f342a93871
[Fix] Remove obsolete files - inference ( #5650 )
7 months ago
Hongxin Liu
1b387ca9fe
[shardformer] refactor pipeline grad ckpt config ( #5646 )
...
* [shardformer] refactor pipeline grad ckpt config
* [pipeline] fix stage manager
7 months ago
Season
7ef91606e1
[Fix]: implement thread-safety singleton to avoid deadlock for very large-scale training scenarios ( #5625 )
...
* implement thread-safety singleton
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* refactor singleton implementation
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
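The thread-safety fix above amounts to double-checked locking around singleton creation; a minimal sketch of the pattern (class names hypothetical, not the actual ColossalAI code):

```python
import threading

class SingletonMeta(type):
    """Thread-safe singleton metaclass using double-checked locking."""
    _instances = {}
    _lock = threading.Lock()

    def __call__(cls, *args, **kwargs):
        # Fast path: no lock once the instance exists, so later accesses
        # never contend on (or deadlock against) the creation lock.
        if cls not in cls._instances:
            with cls._lock:
                # Re-check under the lock: another thread may have
                # created the instance while we waited.
                if cls not in cls._instances:
                    cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class Registry(metaclass=SingletonMeta):  # hypothetical user of the pattern
    def __init__(self):
        self.items = []
```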
Hongxin Liu
bbb2c21f16
[shardformer] fix chatglm implementation ( #5644 )
...
* [shardformer] fix chatglm policy
* [shardformer] fix chatglm flash attn
* [shardformer] update readme
* [shardformer] fix chatglm init
* [shardformer] fix chatglm test
* [pipeline] fix chatglm merge batch
7 months ago
Steve Luo
a8fd3b0342
[Inference/Kernel] Optimize paged attention: Refactor key cache layout ( #5643 )
...
* optimize flashdecodingattention: refactor code with a different key cache layout (from [num_blocks, num_kv_heads, block_size, head_size] to [num_blocks, num_kv_heads, head_size/x, block_size, x])
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
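The layout change described above can be reproduced with a plain reshape/transpose; a sketch in NumPy (`x` here is an assumed vector width — in the real kernel it is derived from the element size):

```python
import numpy as np

num_blocks, num_kv_heads, block_size, head_size = 4, 2, 16, 64
x = 8  # assumed vector width; the kernel picks it from the dtype size

# Old layout: [num_blocks, num_kv_heads, block_size, head_size]
k_cache = np.arange(num_blocks * num_kv_heads * block_size * head_size,
                    dtype=np.float32).reshape(num_blocks, num_kv_heads,
                                              block_size, head_size)

# New layout: [num_blocks, num_kv_heads, head_size // x, block_size, x].
# head_size is split into chunks of x so a thread can load x contiguous
# elements of one head per block position.
k_new = (k_cache
         .reshape(num_blocks, num_kv_heads, block_size, head_size // x, x)
         .transpose(0, 1, 3, 2, 4)
         .copy())

assert k_new.shape == (num_blocks, num_kv_heads, head_size // x, block_size, x)
```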
flybird11111
5d88ef1aaf
[shardformer] remove useless code ( #5645 )
7 months ago
flybird11111
148506c828
[coloattention] modify coloattention ( #5627 )
...
* modify coloattention
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix
* fix
* fix
fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
Edenzzzz
7ee569b05f
[hotfix] Fixed fused layernorm bug without apex ( #5609 )
...
* fixed fused layernorm bug without apex
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* same for flash attn
* remove flash attn check
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
Wang Binluo
0d0a582033
[shardformer] update transformers ( #5583 )
...
* flash_attention forward upgrade
* llama_model_forward
* remove useless comment
* update the requirements.txt
* add the transformers version requirements
* remove the LATEST VERSION try
* [shardformer] update bloom model (#5518 )
* update bloom model
* remove the version restriction
* [shardformer] update_falcon (#5520 )
* [shardformer] update mistral model (#5511 )
* [shardformer] update gpt2 (#5502 )
* [shardformer] update gptj model (#5503 )
* [shardformer] update opt (#5522 )
* [shardformer] update t5 model (#5524 )
* [shardformer] update whisper model (#5529 )
* [shardformer] update vit model (#5530 )
* update vit model
* remove the output_hidden_states
* [shardformer] fix llama modeling
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [zero] support multiple (partial) backward passes (#5596 )
* [zero] support multiple (partial) backward passes
* [misc] update requirements
* [zero] support multiple (partial) backward passes (#5596 )
* [zero] support multiple (partial) backward passes
* [misc] update requirements
* fix conflicts
* [doc] fix ColossalMoE readme (#5599 )
* fix readme
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* merge with main
* merge with main
* llama_model_forward
* remove useless comment
* remove the LATEST VERSION try
* [shardformer] update bloom model (#5518 )
* update bloom model
* remove the version restriction
* [shardformer] update mistral model (#5511 )
* [shardformer] update opt (#5522 )
* [shardformer] update whisper model (#5529 )
* [shardformer] fix llama modeling
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [hotfix] Fix examples no pad token & auto parallel codegen bug; (#5606 )
* fix no pad token bug
* fixed some auto parallel codegen bug, but might not run on torch 2.1
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [shardformer] fix pipeline grad ckpt (#5620 )
* [shardformer] fix pipeline grad ckpt
* [shardformer] fix whisper (#5628 )
* [test] fix llama model test
* fix the opt upgrade (#5634 )
* [shardformer] fix attn replacement (#5636 )
* [shardformer] update flashattention replacement (#5637 )
* update transformers
update transformers
fix
fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [test] fix llama test (#5638 )
* [gemini] fix buffer cast (#5639 )
* Fix shardformer upgrade (#5640 )
* fix llama model
* fix the mistral
* fix the shardformer model
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [shardformer] support pipeline parallelism for mistral (#5642 )
* [shardformer] fix attn replacement (#5636 )
* [shardformer] update flashattention replacement (#5637 )
* update transformers
update transformers
fix
fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] Support LLaMA-3 CPT and ST (#5619 )
* support LLaMA-3
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Run pre-commit
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [example] update llama example (#5626 )
* [plugin] support dp inside for hybrid parallel
* [example] update llama benchmark
* [example] update llama benchmark
* [example] update llama readme
* [example] update llama readme
* [example] llama3 (#5631 )
* release llama3
* [release] llama3
* [test] fix llama test (#5638 )
* [gemini] fix buffer cast (#5639 )
* support pp for mistral
* fix
* fix
fix
fix
* fix
---------
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
---------
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Camille Zhong <44392324+Camille7777@users.noreply.github.com>
Co-authored-by: Edenzzzz <wenxuan.tan@wisc.edu>
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: flybird11111 <1829166702@qq.com>
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
7 months ago
yuehuayingxueluo
90cd5227a3
[Fix/Inference] Fix vllm benchmark ( #5630 )
...
* Fix bugs about OOM when running vllm-0.4.0
* rm unused params
* change generation_config
* change benchmark log file name
7 months ago
傅剑寒
279300dc5f
[Inference/Refactor] Refactor compilation mechanism and unified multi hw ( #5613 )
...
* refactor compilation mechanism and unified multi hw
* fix file path bug
* add init.py to make pybind a module to avoid relative path error caused by softlink
* delete duplicated macros
* fix macro bug in gcc
7 months ago
Yuanheng Zhao
04863a9b14
[example] Update Llama Inference example ( #5629 )
...
* [example] add inference benchmark llama3
* revise inference config - arg
* remove unused args
* add llama generation demo script
* fix init rope in llama policy
* add benchmark-llama3 - cleanup
7 months ago
binmakeswell
f4c5aafe29
[example] llama3 ( #5631 )
...
* release llama3
* [release] llama3
7 months ago
Hongxin Liu
4de4e31818
[example] update llama example ( #5626 )
...
* [plugin] support dp inside for hybrid parallel
* [example] update llama benchmark
* [example] update llama readme
7 months ago
Tong Li
862fbaaa62
[Feature] Support LLaMA-3 CPT and ST ( #5619 )
...
* support LLaMA-3
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Run pre-commit
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
yuehuayingxueluo
12f10d5b0b
[Fix/Inference] Fix CUDA Rotary Embedding GQA ( #5623 )
...
* fix rotary embedding GQA
* change test_rotary_embdding_unpad.py KH
7 months ago
Yuanheng Zhao
5d4c1fe8f5
[Fix/Inference] Fix GQA Triton and Support Llama3 ( #5624 )
...
* [fix] GQA calling of flash decoding triton
* fix kv cache alloc shape
* fix rotary triton - GQA
* fix sequence max length assigning
* Sequence max length logic
* fix scheduling and spec-dec
* skip without import error
* fix pytest - skip without ImportError
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago
Hongxin Liu
e094933da1
[shardformer] fix pipeline grad ckpt ( #5620 )
...
* [shardformer] fix pipeline grad ckpt
7 months ago
Steve Luo
ccf72797e3
feat baichuan2 rmsnorm whose hidden size equals 5120 ( #5611 )
7 months ago
Edenzzzz
d83c633ca6
[hotfix] Fix examples no pad token & auto parallel codegen bug; ( #5606 )
...
* fix no pad token bug
* fixed some auto parallel codegen bug, but might not run on torch 2.1
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
7 months ago
Runyu Lu
e37ee2fb65
[Feat] Tensor Model Parallel Support For Inference ( #5563 )
...
* tensor parallel support naive source
* [fix] precision, model load and refactor the framework
* add tp unit test
* docstring
* fix do_sample
7 months ago
Steve Luo
be396ad6cc
[Inference/Kernel] Add Paged Decoding kernel, sequence split within the same thread block ( #5531 )
...
* feat flash decoding for paged attention
* refactor flashdecodingattention
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
7 months ago