mirror of https://github.com/hpcaitech/ColossalAI
Latest commit: update flash-context-attention; add kernels and build script; add llama2 example; add colossal-llama2 test; cleanup. Co-authored-by: cuiqing.li <lixx336@gmail.com>
__init__.py
context_attention.py
copy_kv_cache_dest.py
custom_autotune.py
fused_layernorm.py
gptq_triton.py
int8_rotary_embedding_kernel.py
llama_act_combine_kernel.py
qkv_matmul_kernel.py
self_attention_nofusion.py
smooth_attention.py
softmax.py
token_attention_kernel.py
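
These files appear to hold the custom Triton kernels behind ColossalAI's inference path (context/token attention, KV-cache copying, fused layernorm, rotary embeddings, quantized matmuls). As a rough illustration of what such a file contains, below is a minimal sketch in the spirit of copy_kv_cache_dest.py: a Triton kernel that scatters newly computed key/value vectors into a preallocated cache. The kernel name, signature, and memory layout here are illustrative assumptions, not the repository's actual API; it only assumes PyTorch and the Triton library on a CUDA device.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def _copy_kv_cache_kernel(
    src_ptr,   # [num_tokens, head_dim]: new key/value vectors (contiguous)
    dst_ptr,   # [num_slots, head_dim]: preallocated KV cache (contiguous)
    slot_ptr,  # [num_tokens]: destination cache slot for each token
    head_dim,
    BLOCK: tl.constexpr,
):
    # One program instance copies one token's vector into its cache slot.
    token = tl.program_id(0)
    slot = tl.load(slot_ptr + token)
    offs = tl.arange(0, BLOCK)
    mask = offs < head_dim  # mask out the padding beyond head_dim
    vec = tl.load(src_ptr + token * head_dim + offs, mask=mask)
    tl.store(dst_ptr + slot * head_dim + offs, vec, mask=mask)


def copy_kv_cache(src: torch.Tensor, dst: torch.Tensor, slots: torch.Tensor):
    num_tokens, head_dim = src.shape
    # tl.arange needs a power-of-two extent, so pad the block size.
    _copy_kv_cache_kernel[(num_tokens,)](
        src, dst, slots, head_dim, BLOCK=triton.next_power_of_2(head_dim)
    )


if __name__ == "__main__":
    src = torch.randn(4, 64, device="cuda")
    dst = torch.zeros(16, 64, device="cuda")
    slots = torch.tensor([3, 7, 8, 15], device="cuda")
    copy_kv_cache(src, dst, slots)
    assert torch.equal(dst[slots], src)
```

The launch grid is one-dimensional over tokens, so each program instance handles a single vector; the mask lets head_dim be any size up to the padded power-of-two block, which is the usual pattern in decode-time KV-cache kernels like these.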