ColossalAI/tests/test_infer
Cuiqing Li 459a88c806
[Kernels]Updated Triton kernels into 2.1.0 and adding flash-decoding for llama token attention (#4965)
* adding flash-decoding

* clean

* adding kernel

* adding flash-decoding

* add integration

* add

* adding kernel

* adding kernel

* adding triton 2.1.0 features for inference

* update bloom triton kernel

* remove useless vllm kernels

* clean codes

* fix

* adding files

* fix readme

* update llama flash-decoding

---------

Co-authored-by: cuiqing.li <lixx336@gmail.com>
2023-10-30 14:04:37 +08:00
test_dynamic_batching   | [Inference] Dynamic Batching Inference, online and offline (#4953)                                                | 2023-10-30 10:52:19 +08:00
_utils.py               | [misc] update pre-commit and run all files (#4752)                                                                | 2023-09-19 14:20:26 +08:00
test_bloom_infer.py     | [Kernels]Updated Triton kernels into 2.1.0 and adding flash-decoding for llama token attention (#4965)            | 2023-10-30 14:04:37 +08:00
test_chatglm2_infer.py  | [Kernels]Updated Triton kernels into 2.1.0 and adding flash-decoding for llama token attention (#4965)            | 2023-10-30 14:04:37 +08:00
test_infer_engine.py    | [misc] update pre-commit and run all files (#4752)                                                                | 2023-09-19 14:20:26 +08:00
test_kvcache_manager.py | [misc] update pre-commit and run all files (#4752)                                                                | 2023-09-19 14:20:26 +08:00
test_llama2_infer.py    | [Kernels]Updated Triton kernels into 2.1.0 and adding flash-decoding for llama token attention (#4965)            | 2023-10-30 14:04:37 +08:00
test_llama_infer.py     | [Kernels]Updated Triton kernels into 2.1.0 and adding flash-decoding for llama token attention (#4965)            | 2023-10-30 14:04:37 +08:00
test_pipeline_infer.py  | [Pipeline inference] Combine kvcache with pipeline inference (#4938)                                              | 2023-10-27 16:19:54 +08:00