ColossalAI/colossalai/kernel/cuda_native/csrc
Latest commit: f8b9aaef47 by Sze-qq — [NFC] polish colossalai/kernel/cuda_native/csrc/type_shim.h code style (#1260), 2022-07-13 12:08:21 +08:00
kernels [kernel] fixed the include bug in dropout kernel (#999) 2022-05-18 21:43:18 +08:00
colossal_C_frontend.cpp [NFC] polish colossalai/kernel/cuda_native/csrc/colossal_C_frontend.cpp code style 2022-05-20 23:57:38 +08:00
compat.h refactor kernel (#142) 2022-01-13 16:47:17 +08:00
cpu_adam.cpp [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#936) 2022-05-17 10:25:06 +08:00
cpu_adam.h [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.h code style (#945) 2022-05-17 10:25:06 +08:00
layer_norm_cuda.cpp [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp code style (#973) 2022-05-17 10:25:06 +08:00
layer_norm_cuda_kernel.cu [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda_kernel.cu code style (#661) 2022-04-06 11:40:59 +08:00
moe_cuda.cpp [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#942) 2022-05-17 10:25:06 +08:00
moe_cuda_kernel.cu [NFC] polish moe_cuda_kernel.cu code style (#940) 2022-05-17 10:25:06 +08:00
multi_tensor_adam.cu [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_adam.cu code style (#667) 2022-04-06 11:40:59 +08:00
multi_tensor_apply.cuh refactor kernel (#142) 2022-01-13 16:47:17 +08:00
multi_tensor_l2norm_kernel.cu [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#958) 2022-05-17 10:25:06 +08:00
multi_tensor_lamb.cu [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937) 2022-05-17 10:25:06 +08:00
multi_tensor_scale_kernel.cu [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu code style (#977) 2022-05-17 10:25:06 +08:00
multi_tensor_sgd_kernel.cu [optim] refactor fused sgd (#1134) 2022-06-20 11:19:38 +08:00
multihead_attention_1d.cpp [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.cpp code style (#952) 2022-05-17 10:25:06 +08:00
multihead_attention_1d.h [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h code style (#962) 2022-05-17 10:25:06 +08:00
scaled_masked_softmax.cpp add colossalai kernel module (#55) 2021-12-21 12:19:52 +08:00
scaled_masked_softmax.h add colossalai kernel module (#55) 2021-12-21 12:19:52 +08:00
scaled_masked_softmax_cuda.cu [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style (#949) 2022-05-17 10:25:06 +08:00
scaled_upper_triang_masked_softmax.cpp [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style (#959) 2022-05-17 10:25:06 +08:00
scaled_upper_triang_masked_softmax.h add colossalai kernel module (#55) 2021-12-21 12:19:52 +08:00
scaled_upper_triang_masked_softmax_cuda.cu [NFC] polish pre-commit run --files colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax_cuda.cu code style (#943) 2022-05-17 10:25:06 +08:00
type_shim.h [NFC] polish colossalai/kernel/cuda_native/csrc/type_shim.h code style (#1260) 2022-07-13 12:08:21 +08:00
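The listing above is a commit log, but several of the files (cpu_adam.cpp, multi_tensor_adam.cu) implement fused variants of the standard Adam optimizer update. As a rough illustration only — not the actual kernel code — a minimal plain-Python sketch of the per-parameter math these kernels compute might look like this (the function name and scalar signature are illustrative assumptions):

```python
import math

def adam_step(p, grad, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter.

    Plain-Python illustration of the arithmetic that fused kernels such
    as multi_tensor_adam.cu perform; the real kernels batch chunks of
    many tensors into a single launch for efficiency.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA
    m_hat = m / (1 - beta1 ** step)             # bias correction
    v_hat = v / (1 - beta2 ** step)
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v
```

In the CUDA versions the same arithmetic runs over chunks of many parameter tensors per kernel launch (the batching machinery lives in multi_tensor_apply.cuh), which amortizes kernel-launch overhead across the whole parameter set.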