ColossalAI/examples/inference/benchmark_ops
Steve Luo a8fd3b0342
[Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643)
* optimize FlashDecodingAttention: refactor the code with a different key cache layout (from [num_blocks, num_kv_heads, block_size, head_size] to [num_blocks, num_kv_heads, head_size/x, block_size, x])

2024-04-25 14:24:02 +08:00
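
In the new layout, the head_size dimension is split into head_size/x vectors of width x, and the block_size dimension is moved inside, so the x head-dim elements read per key position sit contiguously in memory (an inference from the layout itself; the commit message does not state the motivation). A minimal PyTorch sketch of the mapping, where the sizes, the value of x, and the tensor names are illustrative assumptions, not taken from the kernel code:

```python
import torch

# Illustrative sizes only; the benchmark scripts choose their own.
num_blocks, num_kv_heads, block_size, head_size = 16, 8, 32, 128
x = 8  # vector width: number of key elements packed contiguously

# Old layout: [num_blocks, num_kv_heads, block_size, head_size]
k_cache_old = torch.randn(num_blocks, num_kv_heads, block_size, head_size)

# New layout: [num_blocks, num_kv_heads, head_size // x, block_size, x]
# Split head_size into (head_size // x, x), then swap with block_size.
k_cache_new = (
    k_cache_old.view(num_blocks, num_kv_heads, block_size, head_size // x, x)
    .permute(0, 1, 3, 2, 4)
    .contiguous()
)

# Element (blk, h, pos, d) of the old layout lives at
# (blk, h, d // x, pos, d % x) in the new one.
blk, h, pos, d = 3, 5, 17, 42
assert torch.equal(
    k_cache_old[blk, h, pos, d], k_cache_new[blk, h, d // x, pos, d % x]
)
```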
benchmark_context_attn_unpad.py [Inference] Move benchmark-related code to the example directory. (#5408) 2024-02-28 16:46:03 +08:00
benchmark_decoding_attn.py [Inference/Kernel] Add Paged Decoding kernel, sequence split within the same thread block (#5531) 2024-04-18 16:45:07 +08:00
benchmark_flash_decoding_attention.py [Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643) 2024-04-25 14:24:02 +08:00
benchmark_fused_rotary_embdding_unpad.py [Inference/kernel] Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) 2024-03-13 17:20:03 +08:00
benchmark_kv_cache_memcopy.py [Inference] Add CUDA KVCache Kernel (#5406) 2024-02-28 14:36:50 +08:00
benchmark_rmsnorm.py feat: Baichuan2 RMSNorm whose hidden size equals 5120 (#5611) 2024-04-19 15:34:53 +08:00
benchmark_rotary_embedding.py [Inference/kernel] Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) 2024-03-13 17:20:03 +08:00
benchmark_xine_copy.py [Inference/kernel] Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) 2024-03-13 17:20:03 +08:00
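
Each script above is a standalone micro-benchmark for one inference kernel. As a rough illustration of the CUDA-event timing pattern such GPU micro-benchmarks typically rely on (a generic sketch only; the actual scripts use their own harnesses, which may differ):

```python
import torch

def benchmark_kernel(fn, warmup: int = 10, iters: int = 100) -> float:
    """Return the average latency of `fn()` in milliseconds.

    Warmup runs amortize one-time costs (compilation, caching); CUDA
    events measure GPU time without per-call host synchronization.
    """
    for _ in range(warmup):
        fn()
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters
```

A caller would then report, say, `benchmark_kernel(lambda: my_kernel(*args))` for each input shape under test (`my_kernel` here is a hypothetical placeholder, not a function from these scripts).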