ColossalAI/colossalai/kernel/cuda_native/csrc
Latest commit: Revert "[zero] add ZeroTensorShardStrategy (#793)" (#806) by Jiarui Fang (e761ad2cd7), 2022-04-19 14:40:02 +08:00
File | Last commit | Date
kernels/ | [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style (#641) | 2022-04-06 11:40:59 +08:00
colossal_C_frontend.cpp | fix format (#568) | 2022-04-06 11:40:59 +08:00
compat.h | refactor kernel (#142) | 2022-01-13 16:47:17 +08:00
cpu_adam.cpp | [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#636) | 2022-04-06 11:40:59 +08:00
cpu_adam.h | fix format (#608) | 2022-04-06 11:40:59 +08:00
layer_norm_cuda.cpp | [format]colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp (#566) | 2022-04-06 11:40:59 +08:00
layer_norm_cuda_kernel.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda_kernel.cu code style (#661) | 2022-04-06 11:40:59 +08:00
moe_cuda.cpp | [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#642) | 2022-04-06 11:40:59 +08:00
moe_cuda_kernel.cu | fix format (#583) | 2022-04-06 11:40:59 +08:00
multi_tensor_adam.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_adam.cu code style (#667) | 2022-04-06 11:40:59 +08:00
multi_tensor_apply.cuh | refactor kernel (#142) | 2022-01-13 16:47:17 +08:00
multi_tensor_l2norm_kernel.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#635) | 2022-04-06 11:40:59 +08:00
multi_tensor_lamb.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code stype (#628) | 2022-04-06 11:40:59 +08:00
multi_tensor_scale_kernel.cu | fix format (#563) | 2022-04-06 11:40:59 +08:00
multi_tensor_sgd_kernel.cu | refactor kernel (#142) | 2022-01-13 16:47:17 +08:00
multihead_attention_1d.cpp | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
multihead_attention_1d.h | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
scaled_masked_softmax.cpp | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
scaled_masked_softmax.h | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
scaled_masked_softmax_cuda.cu | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
scaled_upper_triang_masked_softmax.cpp | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
scaled_upper_triang_masked_softmax.h | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
scaled_upper_triang_masked_softmax_cuda.cu | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
type_shim.h | [cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497) | 2022-03-25 14:15:53 +08:00