ColossalAI/colossalai/kernel/triton
Xu Kai 611a5a80ca
[inference] Add smoothquant for llama (#4904)
* [inference] add int8 rotary embedding kernel for smoothquant (#4843)

* [inference] add smoothquant llama attention (#4850)

* add smoothquant llama attention

* remove useless code

* remove useless code

* fix import error

* rename file name

* [inference] add silu linear fusion for smoothquant llama mlp  (#4853)

* add silu linear

* update skip condition

* catch smoothquant cuda lib exception

* process exceptions for tests

* [inference] add llama mlp for smoothquant (#4854)

* add llama mlp for smoothquant

* fix down out scale

* remove duplicate lines

* add llama mlp check

* delete useless code

* [inference] add smoothquant llama (#4861)

* add smoothquant llama

* fix attention accuracy

* fix accuracy

* add kv cache and save pretrained

* refactor example

* delete smooth

* refactor code

* [inference] add smooth function and delete useless code for smoothquant (#4895)

* add smooth function and delete useless code

* update datasets

* remove duplicate import

* delete useless file

* refactor codes (#4902)

* refactor code

* add license

* add torch-int and smoothquant license
2023-10-16 11:28:44 +08:00
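
The commit series above adds SmoothQuant-style int8 inference for LLaMA: per-channel smoothing of activations into weights, an int8 rotary embedding kernel, a fused SiLU-linear for the MLP, and int8 attention with a KV cache. As a rough illustration of the "smooth function" mentioned in the log, here is a minimal sketch of SmoothQuant-style smoothing (plain PyTorch, with a hypothetical helper name; not the kernel code in this directory):

```python
import torch

def smooth_fc_weights(act_absmax: torch.Tensor, weight: torch.Tensor, alpha: float = 0.5):
    """Illustrative SmoothQuant smoothing: migrate activation outliers into the
    weight via per-channel scales s_j = max|X_j|^alpha / max|W_j|^(1 - alpha).

    act_absmax: per-channel activation abs-max from calibration data, shape (in_features,)
    weight:     linear weight of shape (out_features, in_features)
    Returns the per-channel scales and the smoothed weight.
    """
    w_absmax = weight.abs().amax(dim=0).clamp(min=1e-5)  # abs-max per input channel
    scales = (act_absmax.clamp(min=1e-5) ** alpha) / (w_absmax ** (1.0 - alpha))
    # Activations are divided by `scales` (folded into the preceding RMSNorm/LayerNorm),
    # while the weight is multiplied by them, keeping X @ W.T numerically unchanged.
    smoothed_weight = weight * scales.unsqueeze(0)
    return scales, smoothed_weight
```

After smoothing, activations and weights can both be quantized to int8, which is what the int8/smoothquant kernels listed below consume.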
__init__.py [inference] Add smoothquant for llama (#4904) 2023-10-16 11:28:44 +08:00
context_attention.py [inference] chatglm2 infer demo (#4724) 2023-09-22 11:12:50 +08:00
copy_kv_cache_dest.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
custom_autotune.py add autotune (#4822) 2023-09-28 13:47:35 +08:00
fused_layernorm.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
gptq_triton.py add autotune (#4822) 2023-09-28 13:47:35 +08:00
int8_rotary_embedding_kernel.py [inference] Add smoothquant for llama (#4904) 2023-10-16 11:28:44 +08:00
qkv_matmul_kernel.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
rms_norm.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
rotary_embedding_kernel.py [inference] chatglm2 infer demo (#4724) 2023-09-22 11:12:50 +08:00
self_attention_nofusion.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
smooth_attention.py [inference] Add smoothquant for llama (#4904) 2023-10-16 11:28:44 +08:00
softmax.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
token_attention_kernel.py [inference] chatglm2 infer demo (#4724) 2023-09-22 11:12:50 +08:00
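
For orientation, the kernels in this directory follow the standard Triton pattern: one program instance per row (or token), block-wise loads with a mask, and reductions in registers. A minimal row-wise softmax in that style is sketched below; it is illustrative only and not the implementation in softmax.py:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def softmax_kernel(out_ptr, in_ptr, n_cols, in_row_stride, out_row_stride, BLOCK_SIZE: tl.constexpr):
    # One program instance handles one row of the input matrix.
    row = tl.program_id(0)
    col_offsets = tl.arange(0, BLOCK_SIZE)
    mask = col_offsets < n_cols
    x = tl.load(in_ptr + row * in_row_stride + col_offsets, mask=mask, other=-float("inf"))
    x = x - tl.max(x, axis=0)          # subtract the row max for numerical stability
    num = tl.exp(x)
    denom = tl.sum(num, axis=0)
    tl.store(out_ptr + row * out_row_stride + col_offsets, num / denom, mask=mask)

def softmax(x: torch.Tensor) -> torch.Tensor:
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)
    BLOCK_SIZE = triton.next_power_of_2(n_cols)  # one block covers the whole row
    softmax_kernel[(n_rows,)](out, x, n_cols, x.stride(0), out.stride(0), BLOCK_SIZE=BLOCK_SIZE)
    return out
```

For a 2D CUDA tensor, `softmax(x)` should match `torch.softmax(x, dim=-1)` up to floating-point error, since each program covers a full row in a single block.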