ColossalAI/colossalai/kernel/triton
Latest commit: 5be590b99e by Yuanheng Zhao, 2024-04-26 17:51:49 +08:00
[kernel] Support new KCache Layout - Context Attention Triton Kernel (#5658)
* add context attention Triton kernel for the new KCache layout
* add Triton benchmark
* minor revision
* trivial: code style and comments
File                           Last updated                 Latest commit
__init__.py                    2024-04-10 11:07:51 +08:00   [Infer] Revise and Adapt Triton Kernels for Spec-Dec (#5401)
context_attn_unpad.py          2024-04-26 17:51:49 +08:00   [kernel] Support new KCache Layout - Context Attention Triton Kernel (#5658)
flash_decoding.py              2024-04-25 23:11:30 +08:00   [Inference] Adapt to Baichuan2 13B (#5614)
fused_rotary_embedding.py      2024-02-06 19:38:25 +08:00   [Inference] Fused the gate and up proj in MLP, and optimized the autograd process (#5365)
kvcache_copy.py                2024-04-10 11:07:51 +08:00   [Infer] Revise and Adapt Triton Kernels for Spec-Dec (#5401)
llama_act_combine_kernel.py    2024-04-08 15:09:40 +08:00   [devops] remove post-commit CI (#5566)
no_pad_rotary_embedding.py     2024-04-23 13:09:55 +08:00   [Fix/Inference] Fix GQA Triton and Support Llama3 (#5624)
qkv_matmul_kernel.py           2023-09-19 14:20:26 +08:00   [misc] update pre-commit and run all files (#4752)
rms_layernorm.py               2024-03-11 10:49:31 +08:00   [fix] multi-graph capture error
rotary_cache_copy.py           2024-02-06 19:38:25 +08:00   [Inference] Fused the gate and up proj in MLP, and optimized the autograd process (#5365)
softmax.py                     2023-09-19 14:20:26 +08:00   [misc] update pre-commit and run all files (#4752)
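For readers unfamiliar with what kernels such as kvcache_copy.py and context_attn_unpad.py operate on, the sketch below shows one plausible way a Triton kernel scatters newly computed key vectors into a block-structured ("paged") K cache. The [num_blocks, num_heads, block_size, head_dim] layout, the tensor and function names, and the launch grid are illustrative assumptions only; they are not taken from this directory and may differ from the actual signatures, including the new KCache layout introduced in #5658.

```python
# Minimal illustrative sketch (NOT the ColossalAI implementation): scatter new
# key vectors into a block-structured K cache. Layout and names are assumptions.
import torch
import triton
import triton.language as tl


@triton.jit
def copy_k_to_blocked_cache_sketch(
    K,            # new keys, shape [num_tokens, num_heads, head_dim]
    KCache,       # assumed cache layout [num_blocks, num_heads, block_size, head_dim]
    slot_ids,     # [num_tokens], flat slot = block_id * block_size + offset_in_block
    stride_kt, stride_kh, stride_kd,
    stride_cb, stride_ch, stride_cs, stride_cd,
    BLOCK_SIZE: tl.constexpr,
    HEAD_DIM: tl.constexpr,  # assumed to be a power of two for tl.arange
):
    token_id = tl.program_id(0)
    head_id = tl.program_id(1)

    slot = tl.load(slot_ids + token_id)
    block_id = slot // BLOCK_SIZE    # which cache block the token lands in
    block_off = slot % BLOCK_SIZE    # position inside that block

    offs_d = tl.arange(0, HEAD_DIM)
    k = tl.load(K + token_id * stride_kt + head_id * stride_kh + offs_d * stride_kd)
    tl.store(
        KCache
        + block_id * stride_cb
        + head_id * stride_ch
        + block_off * stride_cs
        + offs_d * stride_cd,
        k,
    )


def copy_k_to_blocked_cache(k: torch.Tensor, k_cache: torch.Tensor, slot_ids: torch.Tensor):
    """Host-side launcher for the sketch above (hypothetical helper, not a ColossalAI API)."""
    num_tokens, num_heads, head_dim = k.shape
    block_size = k_cache.shape[2]
    grid = (num_tokens, num_heads)   # one program per (token, head)
    copy_k_to_blocked_cache_sketch[grid](
        k, k_cache, slot_ids,
        k.stride(0), k.stride(1), k.stride(2),
        k_cache.stride(0), k_cache.stride(1), k_cache.stride(2), k_cache.stride(3),
        BLOCK_SIZE=block_size,
        HEAD_DIM=head_dim,
    )
```

A blocked layout of this kind lets an attention kernel gather K tiles block by block rather than assuming one contiguous buffer per sequence, which is presumably why the context-attention kernel and the cache-copy kernel are revised together whenever the cache layout changes.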