ColossalAI/colossalai/kernel/triton
Latest commit: Cuiqing Li (李崔卿), bce919708f, "[Kernels]added flash-decoidng of triton (#5063)", 1 year ago
File                               Last commit                                                              Age
__init__.py                        [Inference] Dynamic Batching Inference, online and offline (#4953)       1 year ago
context_attention.py               [Kernels]added flash-decoidng of triton (#5063)                          1 year ago
copy_kv_cache_dest.py              [Inference] Dynamic Batching Inference, online and offline (#4953)       1 year ago
custom_autotune.py                 add autotune (#4822)                                                     1 year ago
flash_decoding.py                  [Kernels]added flash-decoidng of triton (#5063)                          1 year ago
fused_layernorm.py                 [misc] update pre-commit and run all files (#4752)                       1 year ago
gptq_triton.py                     [inference] add reference and fix some bugs (#4937)                      1 year ago
int8_rotary_embedding_kernel.py    [inference] Add smmoothquant for llama (#4904)                           1 year ago
llama_act_combine_kernel.py        [moe] merge moe into main (#4978)                                        1 year ago
qkv_matmul_kernel.py               [misc] update pre-commit and run all files (#4752)                       1 year ago
self_attention_nofusion.py         [Refactor] Integrated some lightllm kernels into token-attention (#4946) 1 year ago
smooth_attention.py                [inference] add reference and fix some bugs (#4937)                      1 year ago
softmax.py                         [misc] update pre-commit and run all files (#4752)                       1 year ago
token_attention_kernel.py          [Kernels]Update triton kernels into 2.1.0 (#5046)                        1 year ago