ColossalAI/colossalai
Xu Kai 611a5a80ca
[inference] Add smoothquant for llama (#4904)
* [inference] add int8 rotary embedding kernel for smoothquant (#4843) (sketched after this listing)

* [inference] add smoothquant llama attention (#4850)

* add smoothquant llama attention

* remove useless code

* remove useless code

* fix import error

* rename file name

* [inference] add silu linear fusion for smoothquant llama mlp (#4853) (gated-MLP math sketched after this listing)

* add silu linear

* update skip condition

* catch smoothquant cuda lib exception

* process exception for tests

* [inference] add llama mlp for smoothquant (#4854)

* add llama mlp for smoothquant

* fix down out scale

* remove duplicate lines

* add llama mlp check

* delete useless code

* [inference] add smoothquant llama (#4861)

* add smoothquant llama

* fix attention accuracy

* fix accuracy

* add kv cache and save pretrained

* refactor example

* delete smooth

* refactor code

* [inference] add smooth function and delete useless code for smoothquant (#4895)

* add smooth function and delete useless code (the smoothing step is sketched after this listing)

* update datasets

* remove duplicate import

* delete useless file

* refactor codes (#4902)

* refactor code

* add license

* add torch-int and smoothquant license
2023-10-16 11:28:44 +08:00
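
The int8 rotary embedding kernel (#4843) applies rotary position embedding directly to quantized query/key tensors. Below is a minimal PyTorch sketch of the idea: dequantize with a per-tensor scale, rotate, requantize. The real kernel fuses these steps in CUDA, and the names used here (`int8_rotary_embedding`, `q_scale`, `k_scale`) are illustrative assumptions, not the kernel's actual interface.

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Swap and negate the two halves of the last dimension (standard rotary helper).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def int8_rotary_embedding(q_int8, k_int8, cos, sin, q_scale, k_scale):
    """Hypothetical reference for rotary embedding on int8 q/k tensors."""
    def _apply(x_int8: torch.Tensor, scale: float) -> torch.Tensor:
        x = x_int8.float() * scale                   # dequantize
        x = x * cos + rotate_half(x) * sin           # rotary position embedding
        q = torch.round(x / scale).clamp(-128, 127)  # requantize with the same scale
        return q.to(torch.int8)
    return _apply(q_int8, q_scale), _apply(k_int8, k_scale)
```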
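
The SiLU linear fusion (#4853) and the smoothquant LLaMA MLP (#4854) target the gated feed-forward block, where the gate projection's SiLU activation is combined with the elementwise product of the up projection before the down projection. The sketch below shows the fp32 reference math that such a fused int8 path has to reproduce; `w_gate`, `w_up`, `w_down`, and `quant_int8` are illustrative names, not the module's API.

```python
import torch
import torch.nn.functional as F

def llama_mlp_reference(x, w_gate, w_up, w_down):
    # LLaMA gated MLP: down_proj( silu(gate_proj(x)) * up_proj(x) ).
    gate = F.silu(x @ w_gate.t())    # gate branch, SiLU activation
    up = x @ w_up.t()                # up branch, plain linear
    return (gate * up) @ w_down.t()  # project back to the hidden size

def quant_int8(t: torch.Tensor, scale: float) -> torch.Tensor:
    # Symmetric per-tensor int8 quantization used between fused stages.
    return torch.round(t / scale).clamp(-128, 127).to(torch.int8)
```

In a quantized path each matmul runs on int8 operands and every intermediate is requantized with its own scale; the "fix down out scale" item above presumably concerns the scale applied to the down projection's output.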
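
The smoothing step (#4895) is the core SmoothQuant transformation: per-channel activation outliers are migrated into the weights of the following linear layer so that activations quantize cleanly to int8. Below is a minimal sketch assuming PyTorch, a preceding RMSNorm weight, and pre-collected per-channel activation maxima; all names are illustrative rather than the actual ColossalAI API.

```python
import torch

@torch.no_grad()
def smooth_linear(norm_weight: torch.Tensor,
                  fc_weight: torch.Tensor,
                  act_scales: torch.Tensor,
                  alpha: float = 0.5) -> None:
    """Fold activation outliers into the following linear layer (in place).

    norm_weight: (in_features,)              weight of the preceding RMSNorm
    fc_weight:   (out_features, in_features) weight of the following nn.Linear
    act_scales:  (in_features,)              max |activation| per input channel
    """
    # Per-input-channel weight maxima.
    weight_scales = fc_weight.abs().max(dim=0).values.clamp(min=1e-5)

    # SmoothQuant migration scale: s_j = max|X_j|^alpha / max|W_j|^(1 - alpha).
    scales = (act_scales.pow(alpha) / weight_scales.pow(1 - alpha)).clamp(min=1e-5)

    # X' = X / s is folded into the norm weight; W' = W * s keeps X @ W^T unchanged.
    norm_weight.div_(scales)
    fc_weight.mul_(scales.unsqueeze(0))
```

Because the norm weight is divided by `s` while the linear weight's input channels are multiplied by `s`, the layer's output is mathematically unchanged; only the per-channel activation range is flattened, which is what makes int8 activation quantization viable.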
_C [setup] support pre-build and jit-build of cuda kernels (#2374) 2023-01-06 20:50:26 +08:00
_analyzer [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
amp [feature] Add clip_grad_norm for hybrid_parallel_plugin (#4837) 2023-10-12 11:32:37 +08:00
auto_parallel [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
autochunk [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
booster [feature] support no master weights option for low level zero plugin (#4816) 2023-10-13 07:57:45 +00:00
checkpoint_io [checkpointio] hotfix torch 2.0 compatibility (#4824) 2023-10-07 10:45:52 +08:00
cli [bug] Fix the version check bug in colossalai run when generating the cmd. (#4713) 2023-09-22 10:50:47 +08:00
cluster [doc] polish shardformer doc (#4779) 2023-09-26 10:57:47 +08:00
context [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
device [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
fx [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
inference [inference] Add smoothquant for llama (#4904) 2023-10-16 11:28:44 +08:00
interface [lazy] support from_pretrained (#4801) 2023-09-26 11:04:11 +08:00
kernel [inference] Add smoothquant for llama (#4904) 2023-10-16 11:28:44 +08:00
lazy [doc] add lazy init docs (#4808) 2023-09-27 10:24:04 +08:00
legacy [bug] fix get_default_parser in examples (#4764) 2023-09-21 10:42:25 +08:00
logging [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
nn [hotfix] fix lr scheduler bug in torch 2.0 (#4864) 2023-10-12 14:04:24 +08:00
pipeline [Pipeline Inference] Sync pipeline inference branch to main (#4820) 2023-10-11 11:40:06 +08:00
shardformer [infer] fix test bug (#4838) 2023-10-04 10:01:03 +08:00
tensor [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
testing [gemini] support amp o3 for gemini (#4872) 2023-10-12 10:39:08 +08:00
utils [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
zero [feature] support no master weights option for low level zero plugin (#4816) 2023-10-13 07:57:45 +00:00
__init__.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
initialize.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00