85178a397a | xcnick | 2022-12-30 23:11:55 +08:00
  [hotfix] fix error for torch 2.0 (#2243)

db4cbdc7fb | Jiarui Fang | 2022-12-30 09:58:00 +08:00
  [builder] builder for scaled_upper_triang_masked_softmax (#2234)
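The op this builder compiles is the standard fused causal softmax from Megatron-style attention. As orientation only, here is a minimal pure-PyTorch sketch of the math the CUDA kernel fuses; the scale-then-mask order and the (batches, seq, seq) input shape are assumptions, not the builder's documented contract:

```python
import torch

def scaled_upper_triang_masked_softmax_ref(x: torch.Tensor, scale: float) -> torch.Tensor:
    # x: (batches, seq, seq) attention scores.
    x = x * scale
    # Mask the strict upper triangle so no position attends to its future.
    future = torch.triu(torch.ones_like(x, dtype=torch.bool), diagonal=1)
    return torch.softmax(x.masked_fill(future, float("-inf")), dim=-1)
```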
1cb532ffec | Jiarui Fang | 2022-12-27 16:06:09 +08:00
  [builder] multihead attn runtime building (#2203)
    * [hotfix] correct cpu_optim runtime compilation
    * [builder] multihead attn
    * fix bug
    * fix a bug
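"Runtime building" here means compiling the extension on first use rather than at install time. A hedged sketch of the underlying PyTorch mechanism (torch.utils.cpp_extension.load is standard PyTorch API; the source file names below are placeholders, not ColossalAI's actual build recipe):

```python
from torch.utils.cpp_extension import load

# JIT-compile a CUDA extension on first use; PyTorch caches the built
# module on disk, so subsequent loads skip recompilation.
multihead_attn = load(
    name="multihead_attention",
    sources=["multihead_attention_1d.cpp", "kernels/softmax_kernels.cu"],  # placeholder paths
    extra_cuda_cflags=["-O3"],
    verbose=True,
)
```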
077a66dd81 | アマデウス | 2022-12-16 10:54:03 +08:00
  updated attention kernel (#2133)

e7d3afc9cc | HELSON | 2022-12-12 17:58:57 +08:00
  [optimizer] add div_scale for optimizers (#2117)
    * [optimizer] add div_scale for optimizers
    * [zero] use div_scale in zero optimizer
    * fix testing error
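In mixed-precision training, gradients arrive pre-multiplied by the loss scale; passing a div_scale into the fused optimizer lets the kernel unscale them during the update instead of in a separate pass over every gradient. An illustrative sketch of the idea in plain Python (not the fused kernel itself):

```python
import torch

def sgd_step_with_div_scale(param: torch.Tensor, lr: float, div_scale: float = -1.0) -> None:
    grad = param.grad
    if div_scale > 0:
        grad = grad / div_scale  # undo loss scaling inside the update
    param.data.add_(grad, alpha=-lr)
```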
f8a7148dec | ver217 | 2022-11-17 13:42:33 +08:00
  [kernel] move all symlinks of kernel to `colossalai._C` (#1971)
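Collecting the compiled kernels under one namespace makes prebuilt binaries easy to probe for. A hypothetical sketch of the resulting import pattern (the kernel name is illustrative, not a confirmed `colossalai._C` symbol):

```python
try:
    from colossalai._C import scaled_masked_softmax  # prebuilt binary, if shipped
except ImportError:
    scaled_masked_softmax = None  # caller falls back to a JIT build or pure PyTorch
```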
6877121377 | zbian | 2022-11-15 15:25:39 +08:00
  updated flash attention api

e0da01ea71 | xcnick | 2022-11-08 09:40:24 +08:00
  [hotfix] fix build error when torch version >= 1.13 (#1803)

9639ea88fc | oahzxl | 2022-11-07 17:02:09 +08:00
  [kernel] more flexible flashatt interface (#1804)

501a9e9cd2 | oahzxl | 2022-11-07 14:30:22 +08:00
  [hotfix] polish flash attention (#1802)

c248800359 | Jiarui Fang | 2022-11-07 13:41:13 +08:00
  [kernel] skip tests of flash_attn and triton when they are not available (#1798)
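Skipping rather than failing when optional kernels are absent is a stock pytest pattern; a sketch of the idea, not the repo's exact test code:

```python
import pytest

try:
    import flash_attn  # noqa: F401
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False

@pytest.mark.skipif(not HAS_FLASH_ATTN, reason="flash_attn is not installed")
def test_flash_attention():
    ...
```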
25952b67d7 | oahzxl | 2022-10-26 16:15:52 +08:00
  [feat] add flash attention (#1762)
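Flash attention computes the same result as ordinary scaled dot-product attention, just tiled so the full score matrix is never materialized. For reference, a naive sketch of the computation it matches, assuming (batch, heads, seq, head_dim) inputs:

```python
import math
import torch

def attention_ref(q, k, v, causal: bool = False) -> torch.Tensor:
    # Naive O(seq^2)-memory reference; a flash kernel returns the same values.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if causal:
        sq, sk = scores.shape[-2:]
        future = torch.triu(torch.ones(sq, sk, dtype=torch.bool, device=q.device), diagonal=1)
        scores = scores.masked_fill(future, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```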
12b4887097 | ver217 | 2022-08-05 19:45:45 +08:00
  [hotfix] fix CPUAdam kernel nullptr (#1410)

7696cead8d | binmakeswell | 2022-07-13 12:08:21 +08:00
  Recover kernel files
87f679aeae | Maruyama_Aya | 2022-07-13 12:08:21 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/kernels.h code style (#1291)

d6f5ef8860 | doubleHU | 2022-07-13 12:08:21 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu code style (#1286)

5f6ab35d25 | yuxuan-lou | 2022-07-13 12:08:21 +08:00
  Hotfix/format (#1274)
    * [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style
    * [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.cpp code style
  Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>

c95e18cdb9 | binmakeswell | 2022-07-13 12:08:21 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.h code style (#1270)

db13f96333 | DouJS | 2022-07-13 12:08:21 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_apply.cuh code style (#1264)

5d7366b144 | shenggan | 2022-07-13 12:08:21 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.h code style (#1263)

f1cafcc73a | ziyu huang | 2022-07-13 12:08:21 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style (#1261)
  Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
f8b9aaef47 | Sze-qq | 2022-07-13 12:08:21 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/type_shim.h code style (#1260)

e4f555f29a | ver217 | 2022-06-20 11:19:38 +08:00
  [optim] refactor fused sgd (#1134)

ae7c338105 | zhengzangw | 2022-05-20 23:57:38 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/colossal_C_frontend.cpp code style

533d0c46d8 | Frank Lee | 2022-05-18 21:43:18 +08:00
  [kernel] fixed the include bug in dropout kernel (#999)

bda70b4b66 | puck_WCR | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/layer_norm.py code style (#980)

c50c08dcbb | Kai Wang (Victor Kai) | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style (#979)

f28c021376 | binmakeswell | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_sgd_kernel.cu code style (#978)

b67eebd20f | Jie Zhu | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu code style (#977)

52705ec5c5 | DouJS | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/normalize_kernels.cu code style (#974)

136946422b | Ofey Chan | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp code style (#973)

632e94abde | Xu Kai | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/dropout.h code style (#970)
22d1df224d | ExtremeViscent | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/feed_forward.h code style (#968)
7106a399fc | Yuer867 | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/softmax.h code style (#964)

5bd80b7dd1 | ziyu huang | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/general_kernels.cu code style (#963)
  Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
48c4a180c7 | superhao1995 | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style (#959)

442a2975ab | MaxT | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h code style (#962)

89e2767a92 | runluo | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#958)

1dc1b6fa00 | doubleHU | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cross_entropy_layer.h code style (#957)

0e922da874 | RichardoLuo | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/context.h code style (#956)
  Co-authored-by: RichardoLuo <14049555596@qq.com>

8ca2a85682 | Wangbo Zhao(黑色枷锁) | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/scaled_softmax.py code style (#955)

f6970ef8b1 | Luxios22 | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/softmax_kernels.cu code style (#954)

0b86a6345e | Cautiousss | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cross_entropy.cu code style (#953)
  Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>

d8d07b0e2b | Sze-qq | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.cpp code style (#952)

c3e423c8be | JT.Han | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style (#949)
  Co-authored-by: Jiatong <jiatong.han@u.nus.edu>

eb9a81d72a | bajiaoyu517 | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.h code style (#945)

8ffdc38376 | wky | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#942)
c0f373db5d | HaoyuQin | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax_cuda.cu code style (#943)
5bbefeb06a | XYE | 2022-05-17 10:25:06 +08:00
  [NFC] polish moe_cuda_kernel.cu code style (#940)
  Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>

7aa35eae6a | Maruyama_Aya | 2022-05-17 10:25:06 +08:00
  [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/block_reduce.h code style (#938)