ColossalAI/colossalai/kernel/triton

Latest commit: 3c91e3f176 by yuehuayingxueluo
[Inference] Adapt to Baichuan2-13B (#5614)
* Adapt inference kernels to Baichuan2-13B
* Change BAICHUAN_MODEL_NAME_OR_PATH
* Fix test_decoding_attn.py
* Apply modifications based on review comments
* Move attention mask processing into the flash-decoding test
* Move get_alibi_slopes into the Baichuan modeling code (see the ALiBi sketch below)
* Fix bugs in test_baichuan.py

2024-04-25 23:11:30 +08:00
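For context on the get_alibi_slopes move: Baichuan2-13B uses ALiBi instead of rotary embeddings, biasing each head's attention scores by a fixed per-head slope. Below is a minimal sketch of the standard slope recipe from the ALiBi paper; the function name and signature are assumptions for illustration, not ColossalAI's exact API.

```python
import math

import torch


def get_alibi_slopes(num_heads: int) -> torch.Tensor:
    """Per-head ALiBi slopes (a sketch of the standard recipe, not ColossalAI's exact code).

    For n heads, slopes form the geometric sequence (2^(-8/m))^(i+1), where m is the
    closest power of two <= n. Leftover heads take every other slope from the 2m-head
    sequence, per the ALiBi paper's interleaving rule.
    """
    closest_pow2 = 2 ** math.floor(math.log2(num_heads))
    base = 2.0 ** (-8.0 / closest_pow2)
    slopes = [base ** (i + 1) for i in range(closest_pow2)]
    if closest_pow2 != num_heads:
        # First slope of the 2m-head sequence; odd powers give every other element.
        extra_base = 2.0 ** (-4.0 / closest_pow2)
        num_extra = num_heads - closest_pow2
        slopes += [extra_base ** (2 * i + 1) for i in range(num_extra)]
    return torch.tensor(slopes, dtype=torch.float32)
```

For Baichuan2-13B's 40 heads (not a power of two), this yields 32 slopes from the power-of-two branch plus 8 interleaved slopes from the 64-head sequence.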
File | Last commit | Date
__init__.py | [Infer] Revise and Adapt Triton Kernels for Spec-Dec (#5401) | 2024-04-10 11:07:51 +08:00
context_attn_unpad.py | [Inference] Adapt to Baichuan2-13B (#5614) | 2024-04-25 23:11:30 +08:00
flash_decoding.py | [Inference] Adapt to Baichuan2-13B (#5614) | 2024-04-25 23:11:30 +08:00
fused_rotary_embedding.py | [Inference] Fused the gate and up proj in MLP, and optimized the autograd process (#5365) | 2024-02-06 19:38:25 +08:00
kvcache_copy.py | [Infer] Revise and Adapt Triton Kernels for Spec-Dec (#5401) | 2024-04-10 11:07:51 +08:00
llama_act_combine_kernel.py | [devops] remove post commit ci (#5566) | 2024-04-08 15:09:40 +08:00
no_pad_rotary_embedding.py | [Fix/Inference] Fix GQA Triton and Support Llama3 (#5624) | 2024-04-23 13:09:55 +08:00
qkv_matmul_kernel.py | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00
rms_layernorm.py | [fix] multi graphs capture error (see RMSNorm sketch below) | 2024-03-11 10:49:31 +08:00
rotary_cache_copy.py | [Inference] Fused the gate and up proj in MLP, and optimized the autograd process (#5365) | 2024-02-06 19:38:25 +08:00
softmax.py | [misc] update pre-commit and run all files (#4752) | 2023-09-19 14:20:26 +08:00
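As an illustration of the kind of kernel housed in this directory, below is a minimal Triton RMSNorm sketch in the spirit of rms_layernorm.py: y = x / sqrt(mean(x^2) + eps) * w, the normalization used by Llama and Baichuan2. The kernel name, wrapper, and launch parameters are assumptions for illustration, not the file's actual contents.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def _rmsnorm_kernel(X, W, Y, stride, N, eps, BLOCK_SIZE: tl.constexpr):
    # One program instance normalizes one row of the input.
    row = tl.program_id(0)
    cols = tl.arange(0, BLOCK_SIZE)
    mask = cols < N
    x = tl.load(X + row * stride + cols, mask=mask, other=0.0).to(tl.float32)
    # Mean of squares over the hidden dimension, then the reciprocal root.
    var = tl.sum(x * x, axis=0) / N
    rstd = 1.0 / tl.sqrt(var + eps)
    w = tl.load(W + cols, mask=mask, other=0.0).to(tl.float32)
    tl.store(Y + row * stride + cols, x * rstd * w, mask=mask)


def rmsnorm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # x: (num_rows, hidden_size); BLOCK_SIZE must be a power of two >= hidden_size.
    y = torch.empty_like(x)
    M, N = x.shape
    _rmsnorm_kernel[(M,)](x, weight, y, x.stride(0), N, eps,
                          BLOCK_SIZE=triton.next_power_of_2(N))
    return y
```

Fusing the square, reduction, and scale into a single kernel avoids the extra global-memory round trips a naive PyTorch implementation would incur, which is why a dedicated RMSNorm kernel sits alongside the attention and rotary-embedding kernels here.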