ColossalAI/extensions/csrc/kernel/cuda
Latest commit a8fd3b0342 by Steve Luo
[Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643)
* Optimize FlashDecodingAttention: refactor the kernel around a new key cache layout, changing it from [num_blocks, num_kv_heads, block_size, head_size] to [num_blocks, num_kv_heads, head_size/x, block_size, x] (see the indexing sketch below)

* [pre-commit.ci] auto fixes from pre-commit.com hooks

For more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-25 14:24:02 +08:00
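
For orientation, here is a minimal, hypothetical sketch (not code from flash_decoding_attention_kernel.cu) of what the layout change means for addressing: it computes the flat offset of one key element K[block, kv_head, token_in_block, dim] under the old layout and under the new vectorized layout, assuming x = 8 (fp16 keys packed into 16-byte vectors). The helper names old_key_offset/new_key_offset, the constant VEC_SIZE_X, and the toy shapes are illustrative assumptions, not identifiers from the ColossalAI kernels.

```cuda
// Sketch only: flat offsets of one key element under the old and new layouts.
#include <cstdio>

// Assumption: x is chosen so x elements fill a 16-byte load; for fp16, x = 8.
constexpr int VEC_SIZE_X = 8;

__host__ __device__ inline long old_key_offset(int block, int kv_head, int token,
                                               int dim, int num_kv_heads,
                                               int block_size, int head_size) {
  // Old layout [num_blocks, num_kv_heads, block_size, head_size]:
  // consecutive head dims of one token are contiguous.
  return (((long)block * num_kv_heads + kv_head) * block_size + token) * head_size + dim;
}

__host__ __device__ inline long new_key_offset(int block, int kv_head, int token,
                                               int dim, int num_kv_heads,
                                               int block_size, int head_size) {
  // New layout [num_blocks, num_kv_heads, head_size/x, block_size, x]:
  // for a fixed x-sized slice of the head dimension, consecutive tokens
  // of the block are contiguous in memory.
  int chunk = dim / VEC_SIZE_X;  // which x-sized slice of head_size
  int lane  = dim % VEC_SIZE_X;  // position inside that slice
  return ((((long)block * num_kv_heads + kv_head) * (head_size / VEC_SIZE_X) + chunk)
              * block_size + token) * VEC_SIZE_X + lane;
}

int main() {
  // Toy shape: 2 KV heads, block_size 16, head_size 128.
  const int num_kv_heads = 2, block_size = 16, head_size = 128;
  // Dims 0..7 of token 0 and token 1: contiguous per token in the old layout,
  // while in the new layout the two tokens' x-chunks sit back to back.
  for (int token = 0; token < 2; ++token) {
    printf("token %d, dim 0..7: old offsets %ld..%ld | new offsets %ld..%ld\n", token,
           old_key_offset(0, 0, token, 0, num_kv_heads, block_size, head_size),
           old_key_offset(0, 0, token, 7, num_kv_heads, block_size, head_size),
           new_key_offset(0, 0, token, 0, num_kv_heads, block_size, head_size),
           new_key_offset(0, 0, token, 7, num_kv_heads, block_size, head_size));
  }
  return 0;
}
```

The point of the new layout is that, for a fixed x-wide slice of the head dimension, the keys of neighboring tokens in a block are adjacent in memory, so threads covering consecutive tokens can read the key cache with coalesced 16-byte vector loads during decoding instead of strided per-token accesses.
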
attention [Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643) 2024-04-25 14:24:02 +08:00
utils [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
activation_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
context_kv_cache_memcpy_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
decode_kv_cache_memcpy_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
flash_decoding_attention_kernel.cu [Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643) 2024-04-25 14:24:02 +08:00
fused_rotary_emb_and_cache_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
get_cos_and_sin_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
layer_norm_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
moe_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
multi_tensor_adam_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
multi_tensor_apply.cuh [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
multi_tensor_l2norm_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
multi_tensor_lamb_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
multi_tensor_scale_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
multi_tensor_sgd_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
rms_layernorm_kernel.cu [Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643) 2024-04-25 14:24:02 +08:00
scaled_masked_softmax_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00
scaled_upper_triang_masked_softmax_kernel.cu [Inference/Refactor] Refactor compilation mechanism and unified multi hw (#5613) 2024-04-24 14:17:54 +08:00