ColossalAI/colossalai/kernel/cuda_native/csrc
Latest commit dce05da535 by Jun Gao: fix thrust-transform-reduce error (#5078), 2023-11-21 15:09:35 +08:00
gptq [NFC] polish code style (#4799) 2023-10-07 13:36:52 +08:00
kernels fix thrust-transform-reduce error (#5078) 2023-11-21 15:09:35 +08:00
smoothquant [inference] Add smoothquant for llama (#4904) 2023-10-16 11:28:44 +08:00
colossal_C_frontend.cpp [optimizer] add div_scale for optimizers (#2117) 2022-12-12 17:58:57 +08:00
compat.h [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
cpu_adam.cpp [kernel] support pure fp16 for cpu adam and update gemini optim tests (#4921) 2023-10-16 21:56:53 +08:00
cpu_adam.h [npu] add npu support for gemini and zero (#5067) 2023-11-20 16:12:41 +08:00
cpu_adam_arm.cpp [npu] add npu support for gemini and zero (#5067) 2023-11-20 16:12:41 +08:00
cpu_adam_arm.h [npu] add npu support for gemini and zero (#5067) 2023-11-20 16:12:41 +08:00
layer_norm_cuda.cpp [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
layer_norm_cuda_kernel.cu [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
moe_cuda.cpp [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
moe_cuda_kernel.cu [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
multi_tensor_adam.cu [doc] add deepspeed citation and copyright (#2996) 2023-03-04 20:08:11 +08:00
multi_tensor_apply.cuh [doc] add deepspeed citation and copyright (#2996) 2023-03-04 20:08:11 +08:00
multi_tensor_l2norm_kernel.cu [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
multi_tensor_lamb.cu [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
multi_tensor_scale_kernel.cu [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
multi_tensor_sgd_kernel.cu [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
multihead_attention_1d.cpp [hotfix] fix error for torch 2.0 (#2243) 2022-12-30 23:11:55 +08:00
multihead_attention_1d.h [hotfix] fix error for torch 2.0 (#2243) 2022-12-30 23:11:55 +08:00
scaled_masked_softmax.cpp [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
scaled_masked_softmax.h [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
scaled_masked_softmax_cuda.cu [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style (#949) 2022-05-17 10:25:06 +08:00
scaled_upper_triang_masked_softmax.cpp [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style (#959) 2022-05-17 10:25:06 +08:00
scaled_upper_triang_masked_softmax.h [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
scaled_upper_triang_masked_softmax_cuda.cu
type_shim.h [bf16] add bf16 support (#3882) 2023-06-05 15:58:31 +08:00
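Taken together, these sources follow the usual PyTorch C++/CUDA extension layout: the multi_tensor_*.cu files implement fused kernel launchers, and a frontend such as colossal_C_frontend.cpp exposes them to Python through pybind11. The sketch below is illustrative only; the launcher signature is an assumption modeled on the apex-style multi-tensor interface (the div_scale argument mirrors the "[optimizer] add div_scale" commit listed above), not a verbatim copy of the repository's code.

```cpp
// Minimal sketch of a frontend binding, assuming an apex-style
// multi-tensor launcher defined in multi_tensor_adam.cu.
#include <torch/extension.h>
#include <vector>

// Hypothetical launcher signature; the real declaration may differ.
void multi_tensor_adam_cuda(int chunk_size, at::Tensor noop_flag,
                            std::vector<std::vector<at::Tensor>> tensor_lists,
                            const float lr, const float beta1,
                            const float beta2, const float epsilon,
                            const int step, const int mode,
                            const int bias_correction,
                            const float weight_decay, const float div_scale);

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  // Fused Adam step applied chunk-wise over lists of parameter tensors.
  m.def("multi_tensor_adam", &multi_tensor_adam_cuda,
        "Fused multi-tensor Adam update (CUDA)");
}
```

At build time the frontend .cpp and the kernel .cu files are compiled and linked together (in ColossalAI this is handled by the project's extension builder), so the Python side imports a single module exposing the fused ops.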