ColossalAI/extensions/csrc/kernel/cuda
Latest commit: [Inference/Feat] Feat quant kvcache step2 (#5674) by 傅剑寒 (808ee6e4ad), 2024-04-30 11:26:36 +08:00
attention/                                     [Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643)   2024-04-25 14:24:02 +08:00
utils/                                         [Inference/Feat] Feat quant kvcache step2 (#5674)                                2024-04-30 11:26:36 +08:00
activation_kernel.cu
context_kv_cache_memcpy_kernel.cu              [Inference/Feat] Feat quant kvcache step2 (#5674)                                2024-04-30 11:26:36 +08:00
decode_kv_cache_memcpy_kernel.cu
flash_decoding_attention_kernel.cu             [Inference/Feat] Feat quant kvcache step2 (#5674)                                2024-04-30 11:26:36 +08:00
fused_rotary_emb_and_cache_kernel.cu
get_cos_and_sin_kernel.cu
layer_norm_kernel.cu
moe_kernel.cu
multi_tensor_adam_kernel.cu
multi_tensor_apply.cuh
multi_tensor_l2norm_kernel.cu
multi_tensor_lamb_kernel.cu
multi_tensor_scale_kernel.cu
multi_tensor_sgd_kernel.cu
rms_layernorm_kernel.cu                        [Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643)   2024-04-25 14:24:02 +08:00
scaled_masked_softmax_kernel.cu
scaled_upper_triang_masked_softmax_kernel.cu