ColossalAI/colossalai/kernel/triton
Cuiqing Li 3a41e8304e
[Refactor] Integrated some lightllm kernels into token-attention (#4946)
* add some requirements for inference

* clean up code

* add code

* add some lightllm deps

* clean up code

* hello

* delete rms files

* add some comments

* add comments

* add doc

* add lightllm deps

* add lightllm chatglm2 kernels

* add lightllm chatglm2 kernels

* replace rotary embedding with lightllm kernel

* add some comments

* add some comments

* add some comments

* add

* replace fwd kernel att1

* fix an arg

* add

* add

* fix token attention

* add some comments

* clean up code

* modify comments

* fix readme

* fix bug

* fix bug

---------

Co-authored-by: cuiqing.li <lixx336@gmail.com>
Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
2023-10-19 22:22:47 +08:00
__init__.py [Refactor] Integrated some lightllm kernels into token-attention (#4946) 2023-10-19 22:22:47 +08:00
context_attention.py [Refactor] Integrated some lightllm kernels into token-attention (#4946) 2023-10-19 22:22:47 +08:00
copy_kv_cache_dest.py [Refactor] Integrated some lightllm kernels into token-attention (#4946) 2023-10-19 22:22:47 +08:00
custom_autotune.py add autotune (#4822) 2023-09-28 13:47:35 +08:00
fused_layernorm.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
gptq_triton.py add autotune (#4822) 2023-09-28 13:47:35 +08:00
int8_rotary_embedding_kernel.py [inference] Add smoothquant for llama (#4904) 2023-10-16 11:28:44 +08:00
qkv_matmul_kernel.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
self_attention_nofusion.py [Refactor] Integrated some lightllm kernels into token-attention (#4946) 2023-10-19 22:22:47 +08:00
smooth_attention.py [inference] Add smoothquant for llama (#4904) 2023-10-16 11:28:44 +08:00
softmax.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
token_attention_kernel.py [Refactor] Integrated some lightllm kernels into token-attention (#4946) 2023-10-19 22:22:47 +08:00
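For context on the refactor described above, below is a minimal sketch of the optional-import pattern commonly used when integrating external lightllm Triton kernels alongside a native fallback. The lightllm module path, the `token_att_fwd` name, and the `token_attention_fwd` wrapper are illustrative assumptions, not the confirmed ColossalAI or lightllm API.

```python
import torch

# Try to import the external lightllm Triton kernel; fall back to the
# native implementation if lightllm is not installed. The module path and
# kernel name below are assumptions for illustration only.
try:
    from lightllm.models.llama.triton_kernel.token_attention_nopad_att1 import (
        token_att_fwd as lightllm_token_att_fwd,
    )

    HAS_LIGHTLLM_KERNEL = True
except ImportError:
    HAS_LIGHTLLM_KERNEL = False


def token_attention_fwd(q: torch.Tensor, k: torch.Tensor, attn_out: torch.Tensor) -> None:
    """Hypothetical dispatcher: prefer the lightllm kernel when present."""
    if HAS_LIGHTLLM_KERNEL:
        # Delegate to the (assumed) lightllm kernel entry point.
        lightllm_token_att_fwd(q, k, attn_out)
    else:
        # A native fallback (e.g. the in-repo Triton kernel) would go here.
        raise NotImplementedError("install lightllm or provide a native kernel")
```

The try/except import guard keeps the external dependency optional, so the package still imports cleanly on machines without lightllm and can dispatch to the faster kernel when it is available.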