File | Last commit message | Last commit date
--- | --- | ---
kernels/ | [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/normalize_kernels.cu code style (#974) | 2022-05-17 10:25:06 +08:00
colossal_C_frontend.cpp | fix format (#568) | 2022-04-06 11:40:59 +08:00
compat.h | refactor kernel (#142) | 2022-01-13 16:47:17 +08:00
cpu_adam.cpp | [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#936) | 2022-05-17 10:25:06 +08:00
cpu_adam.h | [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.h code style (#945) | 2022-05-17 10:25:06 +08:00
layer_norm_cuda.cpp | [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp code style (#973) | 2022-05-17 10:25:06 +08:00
layer_norm_cuda_kernel.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda_kernel.cu code style (#661) | 2022-04-06 11:40:59 +08:00
moe_cuda.cpp | [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#942) | 2022-05-17 10:25:06 +08:00
moe_cuda_kernel.cu | [NFC] polish moe_cuda_kernel.cu code style (#940) | 2022-05-17 10:25:06 +08:00
multi_tensor_adam.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_adam.cu code style (#667) | 2022-04-06 11:40:59 +08:00
multi_tensor_apply.cuh | refactor kernel (#142) | 2022-01-13 16:47:17 +08:00
multi_tensor_l2norm_kernel.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#958) | 2022-05-17 10:25:06 +08:00
multi_tensor_lamb.cu | [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937) | 2022-05-17 10:25:06 +08:00
multi_tensor_scale_kernel.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu code style (#977) | 2022-05-17 10:25:06 +08:00
multi_tensor_sgd_kernel.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_sgd_kernel.cu code style (#978) | 2022-05-17 10:25:06 +08:00
multihead_attention_1d.cpp | [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.cpp code style (#952) | 2022-05-17 10:25:06 +08:00
multihead_attention_1d.h | [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h code style (#962) | 2022-05-17 10:25:06 +08:00
scaled_masked_softmax.cpp | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
scaled_masked_softmax.h | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
scaled_masked_softmax_cuda.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style (#949) | 2022-05-17 10:25:06 +08:00
scaled_upper_triang_masked_softmax.cpp | [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style (#959) | 2022-05-17 10:25:06 +08:00
scaled_upper_triang_masked_softmax.h | add colossalai kernel module (#55) | 2021-12-21 12:19:52 +08:00
scaled_upper_triang_masked_softmax_cuda.cu | [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax_cuda.cu code style (#943) | 2022-05-17 10:25:06 +08:00
type_shim.h | [cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497) | 2022-03-25 14:15:53 +08:00