ColossalAI/colossalai/kernel/cuda_native/mha
Latest commit 0b00def881 by Hongxin Liu: [example] add llama2 example (#4527)
* [example] transfer llama-1 example
* [example] fit llama-2
* [example] refactor scripts folder
* [example] fit new gemini plugin
* [cli] fix multinode runner
* [example] fit gemini optim checkpoint
* [example] refactor scripts
* [example] update requirements
* [example] update requirements
* [example] rename llama to llama2
* [example] update readme and pretrain script
* [example] refactor scripts

Committed 2023-08-28 17:59:11 +08:00
__init__.py      [coloattention] fix import error (#4380)               2023-08-04 16:28:41 +08:00
flash_attn_2.py  [fix] coloattention support flash attention 2 (#4347)  2023-08-04 13:46:22 +08:00
mem_eff_attn.py  [example] add llama2 example (#4527)                   2023-08-28 17:59:11 +08:00
mha.py           [fix] coloattention support flash attention 2 (#4347)  2023-08-04 13:46:22 +08:00
utils.py         [fix] coloattention support flash attention 2 (#4347)  2023-08-04 13:46:22 +08:00
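Per the filenames and commit messages above, this directory wraps multiple attention backends (FlashAttention-2 in flash_attn_2.py, memory-efficient attention in mem_eff_attn.py) behind one interface in mha.py. The sketch below is not ColossalAI's actual API; `pick_backend` and `naive_attention` are hypothetical names illustrating the general pattern: prefer the fastest backend that is installed and fall back to a reference scaled-dot-product implementation. NumPy stands in for the CUDA kernels.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def naive_attention(q, k, v):
    # Reference scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores) @ v

def pick_backend(flash_available: bool, mem_eff_available: bool) -> str:
    # Hypothetical dispatch order: FlashAttention-2 first, then
    # memory-efficient attention, then the naive fallback.
    if flash_available:
        return "flash_attn_2"
    if mem_eff_available:
        return "mem_eff_attn"
    return "naive"

# Self-attention over a (batch, seq_len, head_dim) tensor.
q = np.random.default_rng(0).normal(size=(2, 4, 8))
out = naive_attention(q, q, q)
print(pick_backend(False, True))  # mem_eff_attn
print(out.shape)                  # (2, 4, 8)
```

All three backends compute the same function; they differ only in memory traffic and kernel fusion, which is why a wrapper can swap them transparently.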