ColossalAI/tests/test_infer

Last commit: 81b8f5e76a by Bin Jia, 2023-11-09 14:46:19 +08:00
[Inference Refactor] Merge chatglm2 with pp and tp (#5023): merge chatglm with pp and tp
Name                     | Last commit message                                                                                      | Last commit date
-------------------------|----------------------------------------------------------------------------------------------------------|--------------------------
test_dynamic_batching/   | [Inference] Dynamic Batching Inference, online and offline (#4953)                                         | 2023-10-30 10:52:19 +08:00
_utils.py                | [misc] update pre-commit and run all files (#4752)                                                         | 2023-09-19 14:20:26 +08:00
test_bloom_infer.py      | [Kernels] Updated Triton kernels into 2.1.0 and adding flash-decoding for llama token attention (#4965)    | 2023-10-30 14:04:37 +08:00
test_chatglm2_infer.py   | [Inference] Fix bug in ChatGLM2 Tensor Parallelism (#5014)                                                 | 2023-11-07 15:01:50 +08:00
test_hybrid_bloom.py     | [refactor] refactor gptq and smoothquant llama (#5012)                                                     | 2023-11-09 10:12:11 +08:00
test_hybrid_chatglm2.py  | [Inference Refactor] Merge chatglm2 with pp and tp (#5023)                                                 | 2023-11-09 14:46:19 +08:00
test_hybrid_llama.py     | [refactor] refactor gptq and smoothquant llama (#5012)                                                     | 2023-11-09 10:12:11 +08:00
test_infer_engine.py     | [misc] update pre-commit and run all files (#4752)                                                         | 2023-09-19 14:20:26 +08:00
test_kvcache_manager.py  | [misc] update pre-commit and run all files (#4752)                                                         | 2023-09-19 14:20:26 +08:00
test_llama2_infer.py     | [Kernels] Updated Triton kernels into 2.1.0 and adding flash-decoding for llama token attention (#4965)    | 2023-10-30 14:04:37 +08:00
test_llama_infer.py      | [Kernels] Updated Triton kernels into 2.1.0 and adding flash-decoding for llama token attention (#4965)    | 2023-10-30 14:04:37 +08:00