ColossalAI/colossalai/kernel/cuda_native
Latest commit: 1cb532ffec by Jiarui Fang, 2022-12-27 16:06:09 +08:00
[builder] multihead attn runtime building (#2203)
* [hotfix] correct cpu_optim runtime compilation
* [builder] multihead attn
* fix bug
* fix a bug
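The commit above switches the multihead attention CUDA extension from ahead-of-time compilation to runtime (just-in-time) building. A minimal sketch of that pattern using PyTorch's C++ extension loader follows; the source file names and the extension name are illustrative placeholders, and ColossalAI's actual op builder may organize sources and flags differently.

```python
# Sketch of runtime (JIT) kernel building with torch.utils.cpp_extension.load.
# File names below are hypothetical placeholders, not ColossalAI's real layout.
import os
from torch.utils.cpp_extension import load

def build_multihead_attention_ext():
    csrc_dir = os.path.join(os.path.dirname(__file__), "csrc")
    return load(
        name="multihead_attention_cuda",  # placeholder module name
        sources=[
            os.path.join(csrc_dir, "multihead_attention.cpp"),      # hypothetical
            os.path.join(csrc_dir, "multihead_attention_cuda.cu"),  # hypothetical
        ],
        extra_cuda_cflags=["-O3", "--use_fast_math"],
        verbose=True,
    )

# The extension is compiled on first use instead of at pip-install time,
# so importing the Python package does not require a CUDA toolchain up front.
```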
Name                      Last commit message                                                Last commit date
csrc                      [optimizer] add div_scale for optimizers (#2117)                   2022-12-12 17:58:57 +08:00
__init__.py               updated flash attention api                                        2022-11-15 15:25:39 +08:00
flash_attention.py        updated attention kernel (#2133)                                   2022-12-16 10:54:03 +08:00
layer_norm.py             [kernel] move all symlinks of kernel to `colossalai._C` (#1971)    2022-11-17 13:42:33 +08:00
multihead_attention.py    [builder] multihead attn runtime building (#2203)                  2022-12-27 16:06:09 +08:00
scaled_softmax.py         [kernel] move all symlinks of kernel to `colossalai._C` (#1971)    2022-11-17 13:42:33 +08:00
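For orientation, the modules in this directory wrap fused CUDA kernels for layer norm, scaled softmax, flash attention, and multihead attention. The sketch below is a plain-PyTorch reference of the scaled dot-product attention these kernels accelerate; it shows the underlying math only and is not ColossalAI's API.

```python
# Plain-PyTorch reference of scaled dot-product attention, i.e. the computation
# that the fused kernels here (scaled_softmax, multihead_attention,
# flash_attention) accelerate. Reference math only, not ColossalAI's API.
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    probs = torch.softmax(scores, dim=-1)  # the "scaled softmax" step
    return torch.matmul(probs, v)

# Example: batch=2, heads=8, seq=128, head_dim=64
q = k = v = torch.randn(2, 8, 128, 64)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 8, 128, 64])
```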