ColossalAI/colossalai/kernel/cuda_native
Latest commit: 918bc94b6b by Frank Lee, 2023-02-21 11:25:57 +08:00
[triton] added copyright information for flash attention (#2835); polish code
| File | Last commit | Date |
|------|-------------|------|
| csrc | [hotfix] fix error for torch 2.0 (#2243) | 2022-12-30 23:11:55 +08:00 |
| __init__.py | [kernel] fixed repeated loading of kernels (#2549) | 2023-02-03 09:47:13 +08:00 |
| flash_attention.py | [triton] added copyright information for flash attention (#2835) | 2023-02-21 11:25:57 +08:00 |
| layer_norm.py | [kernel] fixed repeated loading of kernels (#2549) | 2023-02-03 09:47:13 +08:00 |
| multihead_attention.py | [setup] support pre-build and jit-build of cuda kernels (#2374) | 2023-01-06 20:50:26 +08:00 |
| scaled_softmax.py | [kernel] fixed repeated loading of kernels (#2549) | 2023-02-03 09:47:13 +08:00 |