ColossalAI/colossalai/kernel/cuda_native

Latest commit: 8b7495dd54 by Frank Lee, "[example] integrate seq-parallel tutorial with CI (#2463)", 2023-01-13 14:40:05 +08:00
Name                     Last commit message                                              Date
csrc                     [hotfix] fix error for torch 2.0 (#2243)                         2022-12-30 23:11:55 +08:00
__init__.py              updated flash attention api                                      2022-11-15 15:25:39 +08:00
flash_attention.py       updated attention kernel (#2133)                                 2022-12-16 10:54:03 +08:00
layer_norm.py            [hotfix] issue #2388                                             2023-01-07 18:23:02 +08:00
multihead_attention.py   [setup] support pre-build and jit-build of cuda kernels (#2374)  2023-01-06 20:50:26 +08:00
scaled_softmax.py        [example] integrate seq-parallel tutorial with CI (#2463)        2023-01-13 14:40:05 +08:00