ColossalAI/extensions/csrc/cuda

Latest commit: d78817539e by pre-commit-ci[bot] — "[pre-commit.ci] auto fixes from pre-commit.com hooks" (for more information, see https://pre-commit.ci) — 2024-04-08 08:41:09 +00:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| funcs/ | add cast and op_functor for cuda build-in types (#5546) | 2024-04-08 11:38:05 +08:00 |
| include/ | add cast and op_functor for cuda build-in types (#5546) | 2024-04-08 11:38:05 +08:00 |
| pybind/ | [pre-commit.ci] auto fixes from pre-commit.com hooks | 2024-04-08 08:41:09 +00:00 |
| utils/ | add cast and op_functor for cuda build-in types (#5546) | 2024-04-08 11:38:05 +08:00 |
| activation_kernel.cu | add vec_type_trait implementation (#5473) | 2024-03-19 18:36:40 +08:00 |
| context_kv_cache_memcpy_kernel.cu | The writing style of tail processing and the logic related to macro definitions have been optimized. (#5519) | 2024-03-28 10:42:51 +08:00 |
| decode_kv_cache_memcpy_kernel.cu | The writing style of tail processing and the logic related to macro definitions have been optimized. (#5519) | 2024-03-28 10:42:51 +08:00 |
| fused_rotary_emb_and_cache_kernel.cu | [Inference]Support FP16/BF16 Flash Attention 2 And Add high_precision Flag To Rotary Embedding (#5461) | 2024-03-25 13:40:34 +08:00 |
| get_cos_and_sin_kernel.cu | [Inference/Kernel]Add get_cos_and_sin Kernel (#5528) | 2024-04-01 13:47:14 +08:00 |
| layer_norm_kernel.cu | [Inference] Add Reduce Utils (#5537) | 2024-04-01 15:34:25 +08:00 |
| moe_kernel.cu | [Inference] Add Reduce Utils (#5537) | 2024-04-01 15:34:25 +08:00 |
| multi_tensor_adam_kernel.cu | refactor code | 2024-03-11 17:06:57 +08:00 |
| multi_tensor_apply.cuh | [Inference] Add Reduce Utils (#5537) | 2024-04-01 15:34:25 +08:00 |
| multi_tensor_l2norm_kernel.cu | [Inference] Add Reduce Utils (#5537) | 2024-04-01 15:34:25 +08:00 |
| multi_tensor_lamb_kernel.cu | [Inference] Add Reduce Utils (#5537) | 2024-04-01 15:34:25 +08:00 |
| multi_tensor_scale_kernel.cu | refactor code | 2024-03-08 15:41:14 +08:00 |
| multi_tensor_sgd_kernel.cu | refactor code | 2024-03-08 15:41:14 +08:00 |
| rms_layernorm_kernel.cu | add cast and op_functor for cuda build-in types (#5546) | 2024-04-08 11:38:05 +08:00 |
| scaled_masked_softmax.h | refactor vector utils | 2024-03-19 11:32:01 +08:00 |
| scaled_masked_softmax_kernel.cu | refactor code | 2024-03-11 17:06:57 +08:00 |
| scaled_upper_triang_masked_softmax.h | [Inference]Support FP16/BF16 Flash Attention 2 And Add high_precision Flag To Rotary Embedding (#5461) | 2024-03-25 13:40:34 +08:00 |
| scaled_upper_triang_masked_softmax_kernel.cu | refactor code | 2024-03-11 17:06:57 +08:00 |