ColossalAI/colossalai/kernel/cuda_native

Latest commit 0b00def881 by Hongxin Liu: [example] add llama2 example (#4527)
* [example] transfer llama-1 example
* [example] fit llama-2
* [example] refactor scripts folder
* [example] fit new gemini plugin
* [cli] fix multinode runner
* [example] fit gemini optim checkpoint
* [example] refactor scripts
* [example] update requirements
* [example] rename llama to llama2
* [example] update readme and pretrain script

Committed 2023-08-28 17:59:11 +08:00
Name                     Last commit                                                          Last updated
csrc                     [bf16] add bf16 support (#3882)                                      2023-06-05 15:58:31 +08:00
mha                      [example] add llama2 example (#4527)                                 2023-08-28 17:59:11 +08:00
__init__.py              [shardformer] update shardformer to use flash attention 2 (#4392)   2023-08-15 23:25:14 +08:00
layer_norm.py            [kernel] fixed repeated loading of kernels (#2549)                   2023-02-03 09:47:13 +08:00
multihead_attention.py   [nfc] fix typo colossalai/cli fx kernel (#3847)                      2023-06-02 15:02:45 +08:00
scaled_softmax.py        [fix] coloattention support flash attention 2 (#4347)                2023-08-04 13:46:22 +08:00
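These files wrap ColossalAI's hand-written CUDA kernels: csrc holds the C++/CUDA sources, and the Python modules (layer_norm.py, multihead_attention.py, scaled_softmax.py, plus the mha package) expose them to PyTorch. Below is a minimal usage sketch, assuming the names that __init__.py re-exports around this revision (LayerNorm and ColoAttention) and the call signatures current at that time; exact names and signatures may differ between releases.

```python
# A minimal sketch, not an authoritative API reference.
# Assumptions (not confirmed by the listing itself): __init__.py re-exports
# LayerNorm (the fused kernel wrapper from layer_norm.py) and ColoAttention
# (the flash-attention-2 dispatcher under mha/); signatures may vary by release.
import torch
from colossalai.kernel.cuda_native import ColoAttention, LayerNorm

batch, seq_len, heads, head_dim = 2, 128, 8, 64
hidden = heads * head_dim

# Fused LayerNorm: intended as a drop-in for torch.nn.LayerNorm,
# backed by the compiled sources in csrc/.
norm = LayerNorm(hidden).cuda().half()
x = torch.randn(batch, seq_len, hidden, device="cuda", dtype=torch.float16)
y = norm(x)

# ColoAttention is assumed here to take (batch, seq, heads, head_dim) inputs
# and to dispatch to flash attention 2 when it is installed, falling back to
# a memory-efficient kernel otherwise.
attn = ColoAttention(embed_dim=hidden, num_heads=heads)
qkv = x.view(batch, seq_len, heads, head_dim)
out = attn(qkv, qkv, qkv)
```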