ColossalAI/colossalai/kernel/cuda_native/csrc
Xu Kai 946ab56c48
[feature] add gptq for inference (#4754)
* [gptq] add gptq kernel (#4416)

* add gptq

* refactor code

* fix tests

* replace auto-gptq

* rename inference/quant

* refactor test

* add auto-gptq as an option

* reset requirements

* change assert and check auto-gptq

* add import warnings

* change test flash attn version

* remove example

* change requirements of flash_attn

* modify tests

* [skip ci] change requirements-test

* [gptq] faster gptq cuda kernel (#4494)

* [skip ci] add cuda kernels

* add license

* [skip ci] fix max_input_len

* format files & change test size

* [skip ci]

* [gptq] add gptq tensor parallel (#4538)

* add gptq tensor parallel

* add gptq tp

* delete print

* add test gptq check

* add test auto gptq check

* [gptq] combine gptq and kv cache manager (#4706)

* combine gptq and kv cache manager

* add init bits

* delete useless code

* add model path

* delete useless print and update test

* delete useless import

* move option gptq to shard config

* change replace linear to shardformer

* update bloom policy

* delete useless code

* fix import bug and delete useless code

* change colossalai/gptq to colossalai/quant/gptq

* update import linear for tests

* delete useless code and mv gptq_kernel to kernel directory

* fix triton kernel

* add triton import
2023-09-22 11:02:50 +08:00
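The commits above add a 4-bit GPTQ weight-quantized inference path (CUDA and Triton kernels live under the `gptq` directory). As a rough illustration of the general idea behind such kernels, and not ColossalAI's actual packing layout, the sketch below packs eight 4-bit quantized weights into one 32-bit word, unpacks them, and dequantizes with a per-group scale and zero point; `pack_int4`, `unpack_int4`, and `dequantize` are hypothetical helper names invented for this example.

```python
# Hypothetical sketch of 4-bit weight packing as used by GPTQ-style
# kernels. Not the repository's actual format: layouts differ between
# implementations (ordering, zero-point storage, group size).

def pack_int4(values):
    """Pack eight ints in [0, 15] into one 32-bit word, lowest nibble first."""
    assert len(values) == 8 and all(0 <= v < 16 for v in values)
    word = 0
    for i, v in enumerate(values):
        word |= v << (4 * i)  # each value occupies one 4-bit nibble
    return word

def unpack_int4(word):
    """Unpack a 32-bit word back into eight ints in [0, 15]."""
    return [(word >> (4 * i)) & 0xF for i in range(8)]

def dequantize(qvals, scale, zero):
    """Recover approximate fp weights: w = scale * (q - zero)."""
    return [scale * (q - zero) for q in qvals]

# Round-trip demo: pack, unpack, dequantize one group of eight weights.
weights = [3, 7, 0, 15, 8, 1, 12, 5]
assert unpack_int4(pack_int4(weights)) == weights
```

A real kernel fuses the unpack and dequantize steps into the matrix-multiply inner loop so the weights stay 4-bit in memory, which is what makes the quantized path faster and smaller than fp16.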
gptq/                                       [feature] add gptq for inference (#4754)            2023-09-22 11:02:50 +08:00
kernels/                                    [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
colossal_C_frontend.cpp
compat.h                                    [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
cpu_adam.cpp
cpu_adam.h
layer_norm_cuda.cpp                         [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
layer_norm_cuda_kernel.cu                   [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
moe_cuda.cpp                                [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
moe_cuda_kernel.cu                          [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
multi_tensor_adam.cu
multi_tensor_apply.cuh
multi_tensor_l2norm_kernel.cu               [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
multi_tensor_lamb.cu                        [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
multi_tensor_scale_kernel.cu                [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
multi_tensor_sgd_kernel.cu                  [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
multihead_attention_1d.cpp
multihead_attention_1d.h
scaled_masked_softmax.cpp                   [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
scaled_masked_softmax.h                     [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
scaled_masked_softmax_cuda.cu
scaled_upper_triang_masked_softmax.cpp
scaled_upper_triang_masked_softmax.h        [misc] update pre-commit and run all files (#4752)  2023-09-19 14:20:26 +08:00
scaled_upper_triang_masked_softmax_cuda.cu
type_shim.h                                 [bf16] add bf16 support (#3882)                     2023-06-05 15:58:31 +08:00