mirror of https://github.com/hpcaitech/ColossalAI
Latest commit: optimize FlashDecodingAttention: refactor code with a different key cache layout (from `[num_blocks, num_kv_heads, block_size, head_size]` to `[num_blocks, num_kv_heads, head_size/x, block_size, x]`).
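The layout change described in the commit splits the `head_size` dimension into `head_size/x` chunks of width `x`, so that consecutive elements along the innermost axis stay contiguous per token. A minimal NumPy sketch of the transformation (the dimension values and `x` here are hypothetical, not taken from the repository):

```python
import numpy as np

# Hypothetical sizes for illustration; x is the inner vector width.
num_blocks, num_kv_heads, block_size, head_size, x = 4, 2, 16, 64, 8

# Original layout: [num_blocks, num_kv_heads, block_size, head_size]
key_cache = np.arange(
    num_blocks * num_kv_heads * block_size * head_size, dtype=np.float32
).reshape(num_blocks, num_kv_heads, block_size, head_size)

# New layout: [num_blocks, num_kv_heads, head_size // x, block_size, x].
# Split head_size into (head_size // x, x), then swap block_size inward.
new_cache = key_cache.reshape(
    num_blocks, num_kv_heads, block_size, head_size // x, x
).transpose(0, 1, 3, 2, 4)

assert new_cache.shape == (num_blocks, num_kv_heads, head_size // x, block_size, x)

# Element (block b, head h, token t, dim d) moves to [b, h, d // x, t, d % x].
b, h, t, d = 1, 0, 5, 37
assert key_cache[b, h, t, d] == new_cache[b, h, d // x, t, d % x]
```

This mirrors the split-K style cache layout used by paged-attention kernels, where the trailing `x` axis allows vectorized loads of the key cache.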
- benchmark_context_attn_unpad.py
- benchmark_decoding_attn.py
- benchmark_flash_decoding_attention.py
- benchmark_fused_rotary_embdding_unpad.py
- benchmark_kv_cache_memcopy.py
- benchmark_rmsnorm.py
- benchmark_rotary_embedding.py
- benchmark_xine_copy.py