ColossalAI/colossalai/kernel/triton
Latest commit: 1f8a75d470 by Jianghai, 2024-01-29 10:22:33 +08:00
[Inference] Update rms norm kernel, benchmark with vLLM (#5315)
Commit body: add; xi; del; del; fix
File                        | Latest commit                                                                 | Date
__init__.py                 | [inference]Optimize the usage of the mid tensors space in flash attn (#5304)  | 2024-01-26 14:00:10 +08:00
context_attn_unpad.py       | [inference]Optimize the usage of the mid tensors space in flash attn (#5304)  | 2024-01-26 14:00:10 +08:00
custom_autotune.py          | add autotune (#4822)                                                          | 2023-09-28 13:47:35 +08:00
flash_decoding.py           | [inference]Optimize the usage of the mid tensors space in flash attn (#5304)  | 2024-01-26 14:00:10 +08:00
fused_rotary_embedding.py   | fix (#5311)                                                                   | 2024-01-26 15:02:12 +08:00
gptq_triton.py              | [inference] add reference and fix some bugs (#4937)                           | 2023-10-20 13:39:34 +08:00
kvcache_copy.py             | [inference] Adapted to Rotary Embedding and RMS Norm (#5283)                  | 2024-01-22 10:55:34 +08:00
llama_act_combine_kernel.py | [moe] merge moe into main (#4978)                                             | 2023-11-02 02:21:24 +00:00
no_pad_rotary_embedding.py  | fix (#5311)                                                                   | 2024-01-26 15:02:12 +08:00
qkv_matmul_kernel.py        | [misc] update pre-commit and run all files (#4752)                            | 2023-09-19 14:20:26 +08:00
rms_layernorm.py            | [Inference] Update rms norm kernel, benchmark with vLLM (#5315)               | 2024-01-29 10:22:33 +08:00
rotary_cache_copy.py        | fix (#5311)                                                                   | 2024-01-26 15:02:12 +08:00
softmax.py                  | [misc] update pre-commit and run all files (#4752)                            | 2023-09-19 14:20:26 +08:00