ColossalAI/extensions/csrc/cuda
Latest commit: 388e043930 by xs_courtesy — add implementation for GetGPULaunchConfig1D (2024-03-14 11:13:40 +08:00)

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| include/ | refactor code | 2024-03-08 15:41:14 +08:00 |
| pybind/ | [Inference/kernel] Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) | 2024-03-13 17:20:03 +08:00 |
| utils/ | add implementation for GetGPULaunchConfig1D | 2024-03-14 11:13:40 +08:00 |
| activation_kernel.cu | add implementation for GetGPULaunchConfig1D | 2024-03-14 11:13:40 +08:00 |
| decode_kv_cache_memcpy_kernel.cu | [Inference/kernel] Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) | 2024-03-13 17:20:03 +08:00 |
| fused_rotary_emb_and_cache_kernel.cu | [Inference/kernel] Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) | 2024-03-13 17:20:03 +08:00 |
| layer_norm_kernel.cu | refactor code | 2024-03-11 17:06:57 +08:00 |
| moe_kernel.cu | refactor code | 2024-03-11 17:06:57 +08:00 |
| multi_tensor_adam_kernel.cu | refactor code | 2024-03-11 17:06:57 +08:00 |
| multi_tensor_apply.cuh | refactor code | 2024-03-08 15:41:14 +08:00 |
| multi_tensor_l2norm_kernel.cu | refactor code | 2024-03-08 15:41:14 +08:00 |
| multi_tensor_lamb_kernel.cu | refactor code | 2024-03-11 17:06:57 +08:00 |
| multi_tensor_scale_kernel.cu | refactor code | 2024-03-08 15:41:14 +08:00 |
| multi_tensor_sgd_kernel.cu | refactor code | 2024-03-08 15:41:14 +08:00 |
| rms_layernorm_kernel.cu | fix rmsnorm template function invocation problem (template function partial specialization is not allowed in C++) and luckily pass e2e precision test (#5454) | 2024-03-13 16:00:55 +08:00 |
| scaled_masked_softmax.h | [feat] refactored extension module (#5298) | 2024-01-25 17:01:48 +08:00 |
| scaled_masked_softmax_kernel.cu | refactor code | 2024-03-11 17:06:57 +08:00 |
| scaled_upper_triang_masked_softmax.h | [feat] refactored extension module (#5298) | 2024-01-25 17:01:48 +08:00 |
| scaled_upper_triang_masked_softmax_kernel.cu | refactor code | 2024-03-11 17:06:57 +08:00 |