ColossalAI/tests/test_infer

Steve Luo, commit a8fd3b0342
[Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643)
* Optimize flash-decoding attention: refactor the kernel to use a different key cache layout, from [num_blocks, num_kv_heads, block_size, head_size] to [num_blocks, num_kv_heads, head_size/x, block_size, x]

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-04-25 14:24:02 +08:00
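The layout change described in the commit message splits the head dimension into chunks of width x and moves that chunk width to the innermost axis, so that the x key elements a thread group reads per access are contiguous in memory. A minimal plain-Python sketch of the index mapping, with toy dimensions chosen here purely for illustration (not taken from the kernel):

```python
# Hypothetical toy dimensions for illustration only.
num_blocks, num_kv_heads, block_size, head_size = 2, 2, 4, 8
x = 4  # elements read per access; assumes head_size % x == 0

# Old layout: key[b][h][s][d], shape [num_blocks, num_kv_heads, block_size, head_size].
old = [[[[(b, h, s, d) for d in range(head_size)]
         for s in range(block_size)]
        for h in range(num_kv_heads)]
       for b in range(num_blocks)]

# New layout: key[b][h][d // x][s][d % x],
# shape [num_blocks, num_kv_heads, head_size // x, block_size, x].
new = [[[[[old[b][h][s][g * x + i] for i in range(x)]
          for s in range(block_size)]
         for g in range(head_size // x)]
        for h in range(num_kv_heads)]
       for b in range(num_blocks)]

# Element (b, h, s, d) in the old layout lives at (b, h, d // x, s, d % x)
# in the new one; the last axis (length x) is the contiguous chunk.
b, h, s, d = 1, 0, 3, 6
assert new[b][h][d // x][s][d % x] == old[b][h][s][d]
```

In the real CUDA kernel the same mapping would be realized by how indices into the flat cache buffer are computed, not by copying nested lists; the sketch only shows which element moves where.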
test_models [inference/model] Adapted to the baichuan2-7B model (#5591) 2024-04-15 16:53:02 +08:00
test_ops [Inference/Kernel] Optimize paged attention: Refactor key cache layout (#5643) 2024-04-25 14:24:02 +08:00
_utils.py [Inference] Add the logic of the inference engine (#5173) 2024-01-11 13:39:56 +00:00
test_batch_bucket.py [Fix/Inference] Fix format of input prompts and input model in inference engine (#5395) 2024-02-23 10:51:35 +08:00
test_config_and_struct.py [Inference] Optimize and Refactor Inference Batching/Scheduling (#5367) 2024-02-19 17:18:20 +08:00
test_cuda_graph.py [Feat] Tensor Model Parallel Support For Inference (#5563) 2024-04-18 16:56:46 +08:00
test_drafter.py [Inference/SpecDec] Support GLIDE Drafter Model (#5455) 2024-04-10 11:07:52 +08:00
test_inference_engine.py [Fix/Inference] Fix GQA Triton and Support Llama3 (#5624) 2024-04-23 13:09:55 +08:00
test_kvcache_manager.py [Inference] Optimize and Refactor Inference Batching/Scheduling (#5367) 2024-02-19 17:18:20 +08:00
test_request_handler.py [Inference] Optimize and Refactor Inference Batching/Scheduling (#5367) 2024-02-19 17:18:20 +08:00