ColossalAI/examples/inference/benchmark_ops
Latest commit: 5cd75ce4c7 by Steve Luo, 2024-04-30 15:52:23 +08:00
[Inference/Kernel] refactor kvcache manager and rotary_embedding and kvcache_memcpy operator (#5663)

* refactor kvcache manager and rotary_embedding and kvcache_memcpy operator
* refactor decode_kv_cache_memcpy
* enable alibi in pagedattention
benchmark_context_attn_unpad.py [kernel] Support new KCache Layout - Context Attention Triton Kernel (#5658) 2024-04-26 17:51:49 +08:00
benchmark_decoding_attn.py [Inference/Kernel] Add Paged Decoding kernel, sequence split within the same thread block (#5531) 2024-04-18 16:45:07 +08:00
benchmark_flash_decoding_attention.py [Inference/Kernel] refactor kvcache manager and rotary_embedding and kvcache_memcpy operator (#5663) 2024-04-30 15:52:23 +08:00
benchmark_fused_rotary_embdding_unpad.py [Inference/Kernel] refactor kvcache manager and rotary_embedding and kvcache_memcpy operator (#5663) 2024-04-30 15:52:23 +08:00
benchmark_kv_cache_memcopy.py [Inference/Kernel] refactor kvcache manager and rotary_embedding and kvcache_memcpy operator (#5663) 2024-04-30 15:52:23 +08:00
benchmark_rmsnorm.py feat baichuan2 rmsnorm whose hidden size equals to 5120 (#5611) 2024-04-19 15:34:53 +08:00
benchmark_rotary_embedding.py [Inference/kernel]Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) 2024-03-13 17:20:03 +08:00
benchmark_xine_copy.py [Inference/kernel]Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) 2024-03-13 17:20:03 +08:00
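
The scripts in this directory each build representative inputs and time an inference kernel against a baseline. The sketch below shows that general timing pattern only; it uses plain CUDA-event timing and a naive PyTorch RMSNorm as a hypothetical stand-in, so the function, shapes, and iteration counts are illustrative assumptions rather than the actual ColossalAI benchmark code.

```python
# Minimal sketch of a kernel micro-benchmark: warm up, then time with CUDA events.
# rmsnorm_naive is a hypothetical reference implementation, not a ColossalAI kernel.
import torch


def rmsnorm_naive(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Reference RMSNorm: scale by the reciprocal root-mean-square over the last dim.
    variance = x.pow(2).mean(dim=-1, keepdim=True)
    return x * torch.rsqrt(variance + eps) * weight


def benchmark(fn, *args, warmup: int = 10, iters: int = 100) -> float:
    # Warm up so one-time costs (caching, autotuning) do not skew the measurement.
    for _ in range(warmup):
        fn(*args)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn(*args)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # average latency in milliseconds


if __name__ == "__main__":
    batch, hidden = 32, 5120  # 5120 matches the Baichuan2 RMSNorm case noted above
    x = torch.randn(batch, hidden, dtype=torch.float16, device="cuda")
    w = torch.ones(hidden, dtype=torch.float16, device="cuda")
    print(f"naive rmsnorm: {benchmark(rmsnorm_naive, x, w):.4f} ms")
```

The real scripts follow the same idea but sweep several shapes and compare multiple implementations (e.g. a Triton or CUDA kernel against a PyTorch reference) rather than timing a single naive function.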