mirror of https://github.com/hpcaitech/ColossalAI
Latest commit:

- Support FP16/BF16 Flash Attention 2
- Fix bugs in test_kv_cache_memcpy.py
- Add context_kv_cache_memcpy_kernel.cu
- Remove typename MT
- Add tail process
- Add high_precision
- Add high_precision to config.py
- Remove unused code
- Change the comment for the high_precision parameter
- Update test_rotary_embdding_unpad.py
- Fix vector_copy_utils.h
- Add comment for self.high_precision when using float32
- include/
- pybind/
- utils/
- activation_kernel.cu
- context_kv_cache_memcpy_kernel.cu
- decode_kv_cache_memcpy_kernel.cu
- fused_rotary_emb_and_cache_kernel.cu
- layer_norm_kernel.cu
- moe_kernel.cu
- multi_tensor_adam_kernel.cu
- multi_tensor_apply.cuh
- multi_tensor_l2norm_kernel.cu
- multi_tensor_lamb_kernel.cu
- multi_tensor_scale_kernel.cu
- multi_tensor_sgd_kernel.cu
- rms_layernorm_kernel.cu
- scaled_masked_softmax.h
- scaled_masked_softmax_kernel.cu
- scaled_upper_triang_masked_softmax.h
- scaled_upper_triang_masked_softmax_kernel.cu