ColossalAI/examples/inference

Latest commit: 537a3cbc4d by Yuanheng Zhao
[kernel] Support New KCache Layout - Triton Kernel (#5677)
* kvmemcpy triton for new kcache layout

* revise tests for new kcache layout

* naive triton flash decoding - new kcache layout

* rotary triton kernel - new kcache layout

* remove redundancy - triton decoding

* remove redundancy - triton kvcache copy

* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Committed: 2024-05-03 17:20:45 +08:00
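The commit bullets above cover Triton kernels that write into and read from a new blocked K-cache layout. As a rough NumPy sketch of what a blocked, x-split key-cache copy can look like (the layout shape `[num_blocks, num_kv_heads, head_dim // x, block_size, x]`, the function name, and all parameters here are illustrative assumptions, not the ColossalAI implementation):

```python
import numpy as np

# Hypothetical blocked K-cache layout: the head dimension is split into
# chunks of x elements so the innermost x values stay contiguous in memory.
# Assumed shape: [num_blocks, num_kv_heads, head_dim // x, block_size, x].

def copy_k_to_cache(k, k_cache, block_id, slot_in_block, x=8):
    """Write one token's keys (shape [num_kv_heads, head_dim]) into the cache."""
    num_kv_heads, head_dim = k.shape
    # Split head_dim into (head_dim // x, x) chunks to match the cache layout.
    k_split = k.reshape(num_kv_heads, head_dim // x, x)
    k_cache[block_id, :, :, slot_in_block, :] = k_split

num_blocks, num_kv_heads, head_dim, block_size, x = 4, 2, 16, 8, 8
k_cache = np.zeros((num_blocks, num_kv_heads, head_dim // x, block_size, x),
                   dtype=np.float32)
k = np.arange(num_kv_heads * head_dim, dtype=np.float32).reshape(num_kv_heads,
                                                                 head_dim)
copy_k_to_cache(k, k_cache, block_id=1, slot_in_block=3)

# Round-trip check: gather the token back from the cache and compare.
restored = k_cache[1, :, :, 3, :].reshape(num_kv_heads, head_dim)
assert np.array_equal(restored, k)
```

The actual Triton kernels in the commit parallelize this copy (and the matching gather in flash decoding and the rotary kernel) across tokens and heads on the GPU; the sketch only shows the indexing idea behind the layout.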
File                 Last commit                                                  Last updated
benchmark_ops        [kernel] Support New KCache Layout - Triton Kernel (#5677)   2024-05-03 17:20:45 +08:00
benchmark_llama.py   [Fix/Inference]Fix vllm benchmark (#5630)                    2024-04-24 14:51:36 +08:00
benchmark_llama3.py  [Fix/Inference]Fix vllm benchmark (#5630)                    2024-04-24 14:51:36 +08:00
llama_generation.py  [example] Update Llama Inference example (#5629)             2024-04-23 22:23:07 +08:00
run_benchmark.sh     [Fix/Inference]Fix vllm benchmark (#5630)                    2024-04-24 14:51:36 +08:00