Fixed a typo

pull/5258/head
yuehuayingxueluo 2024-01-04 15:09:06 +08:00 committed by FrankLeeeee
parent bbfebfb9fc
commit b2eb9cd186
1 changed files with 1 additions and 1 deletions

@@ -159,7 +159,7 @@ def llama_attn_forward(
     _, _, _, block_size = k_cache.shape
-    # NOTE: context_attention_unpadded is unsed for testing accuracy and we can only use aligned inputs.
+    # NOTE: context_attention_unpadded is used for testing accuracy and we can only use aligned inputs.
     # The code below will be uncommented after the development of attention-related kernel is completed.
     if is_prompts:
         attn_output = context_attention_unpadded(