mirror of https://github.com/hpcaitech/ColossalAI
Commit: 946ab56c48
* [gptq] add gptq kernel (#4416)
* add gptq
* refactor code
* fix tests
* replace auto-gptq
* rename inference/quant
* refactor test
* add auto-gptq as an option
* reset requirements
* change assert and check auto-gptq
* add import warnings
* change test flash attn version
* remove example
* change requirements of flash_attn
* modify tests
* [skip ci] change requirements-test
* [gptq] faster gptq cuda kernel (#4494)
* [skip ci] add cuda kernels
* add license
* [skip ci] fix max_input_len
* format files & change test size
* [skip ci]
* [gptq] add gptq tensor parallel (#4538)
* add gptq tensor parallel
* add gptq tp
* delete print
* add test gptq check
* add test auto gptq check
* [gptq] combine gptq and kv cache manager (#4706)
* combine gptq and kv cache manager
* add init bits
* delete useless code
* add model path
* delete useless print and update test
* delete useless import
* move option gptq to shard config
* change replace linear to shardformer
* update bloom policy
* delete useless code
* fix import bug and delete useless code
* change colossalai/gptq to colossalai/quant/gptq
* update import linear for tests
* delete useless code and mv gptq_kernel to kernel directory
* fix triton kernel
* add triton import
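The squash message above covers GPTQ weight quantization (packed low-bit weights with custom CUDA/Triton dequantization kernels), a tensor-parallel variant, and integration with the KV cache manager and Shardformer. As a rough illustration of the weight layout such kernels operate on, here is a minimal PyTorch sketch of group-wise 4-bit packing and dequantization in the GPTQ style. Every name, shape, and layout choice below is an assumption made for illustration only; it does not reflect ColossalAI's actual colossalai/quant/gptq API or kernels.

```python
# Illustrative sketch only: generic group-wise 4-bit (GPTQ-style) weight packing
# and dequantization. Layouts and names are assumptions, not ColossalAI's API.
import torch


def pack_int4(qweight_int: torch.Tensor) -> torch.Tensor:
    """Pack eight 4-bit values (0..15) per int32 along the input dimension.

    qweight_int: (in_features, out_features) integer tensor of quantized weights.
    Returns: (in_features // 8, out_features) int32 tensor.
    """
    assert qweight_int.shape[0] % 8 == 0
    q = qweight_int.to(torch.int32).reshape(-1, 8, qweight_int.shape[1])
    packed = torch.zeros(q.shape[0], q.shape[2], dtype=torch.int32)
    for i in range(8):
        packed |= q[:, i, :] << (4 * i)  # nibble i goes to bits [4i, 4i+4)
    return packed


def dequant_int4(packed, scales, zeros, group_size):
    """Unpack and dequantize: w = (q - zero) * scale, with one (scale, zero)
    pair per group of `group_size` input rows.

    scales, zeros: (in_features // group_size, out_features) float tensors.
    Returns the dense (in_features, out_features) weight.
    """
    in_features = packed.shape[0] * 8
    shifts = torch.arange(8, dtype=torch.int32) * 4
    q = (packed.unsqueeze(1) >> shifts.view(1, 8, 1)) & 0xF   # (in//8, 8, out)
    q = q.reshape(in_features, -1).to(scales.dtype)            # (in, out)
    g = torch.arange(in_features) // group_size                # group index per row
    return (q - zeros[g]) * scales[g]


# Usage sketch: y = x @ dequant_int4(packed, scales, zeros, group_size=128)
```

In a real kernel the dequantization is typically fused with the matrix multiply on the GPU rather than materializing the full-precision weight as this sketch does, which is what the custom CUDA and Triton kernels referenced in the commit message are for.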
gptq
kernels
colossal_C_frontend.cpp
compat.h
cpu_adam.cpp
cpu_adam.h
layer_norm_cuda.cpp
layer_norm_cuda_kernel.cu
moe_cuda.cpp
moe_cuda_kernel.cu
multi_tensor_adam.cu
multi_tensor_apply.cuh
multi_tensor_l2norm_kernel.cu
multi_tensor_lamb.cu
multi_tensor_scale_kernel.cu
multi_tensor_sgd_kernel.cu
multihead_attention_1d.cpp
multihead_attention_1d.h
scaled_masked_softmax.cpp
scaled_masked_softmax.h
scaled_masked_softmax_cuda.cu
scaled_upper_triang_masked_softmax.cpp
scaled_upper_triang_masked_softmax.h
scaled_upper_triang_masked_softmax_cuda.cu
type_shim.h