| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| zbian | 7bc0afc901 | updated flash attention usage | 2023-03-20 17:57:04 +08:00 |
| Frank Lee | 95a36eae63 | [kernel] added kernel loader to softmax autograd function (#3093); [release] v0.2.6 | 2023-03-10 14:27:09 +08:00 |
| ver217 | 823f3b9cf4 | [doc] add deepspeed citation and copyright (#2996) | 2023-03-04 20:08:11 +08:00 |
| ver217 | 090f14fd6b | [misc] add reference (#2930); [misc] add license | 2023-02-28 18:07:24 +08:00 |
| Frank Lee | 918bc94b6b | [triton] added copyright information for flash attention (#2835); polish code | 2023-02-21 11:25:57 +08:00 |
| Frank Lee | dd14783f75 | [kernel] fixed repeated loading of kernels (#2549); polish code | 2023-02-03 09:47:13 +08:00 |
| Frank Lee | 8b7495dd54 | [example] integrate seq-parallel tutorial with CI (#2463) | 2023-01-13 14:40:05 +08:00 |
| jiaruifang | 69d9180c4b | [hotfix] issue #2388 | 2023-01-07 18:23:02 +08:00 |
| Frank Lee | 40d376c566 | [setup] support pre-build and jit-build of cuda kernels (#2374); polish code | 2023-01-06 20:50:26 +08:00 |
| xcnick | 85178a397a | [hotfix] fix error for torch 2.0 (#2243) | 2022-12-30 23:11:55 +08:00 |
| Jiarui Fang | db4cbdc7fb | [builder] builder for scaled_upper_triang_masked_softmax (#2234) | 2022-12-30 09:58:00 +08:00 |
| Jiarui Fang | 1cb532ffec | [builder] multihead attn runtime building (#2203); [hotfix] correct cpu_optim runtime compilation; bug fixes | 2022-12-27 16:06:09 +08:00 |
| アマデウス | 077a66dd81 | updated attention kernel (#2133) | 2022-12-16 10:54:03 +08:00 |
| HELSON | e7d3afc9cc | [optimizer] add div_scale for optimizers (#2117); [zero] use div_scale in zero optimizer; fix testing error | 2022-12-12 17:58:57 +08:00 |
| ver217 | f8a7148dec | [kernel] move all symlinks of kernel to `colossalai._C` (#1971) | 2022-11-17 13:42:33 +08:00 |
| zbian | 6877121377 | updated flash attention api | 2022-11-15 15:25:39 +08:00 |
| xcnick | e0da01ea71 | [hotfix] fix build error when torch version >= 1.13 (#1803) | 2022-11-08 09:40:24 +08:00 |
| oahzxl | 9639ea88fc | [kernel] more flexible flashatt interface (#1804) | 2022-11-07 17:02:09 +08:00 |
| oahzxl | 501a9e9cd2 | [hotfix] polish flash attention (#1802) | 2022-11-07 14:30:22 +08:00 |
| Jiarui Fang | c248800359 | [kernel] skip tests of flash_attn and triton when they are not available (#1798) | 2022-11-07 13:41:13 +08:00 |
| oahzxl | 25952b67d7 | [feat] add flash attention (#1762) | 2022-10-26 16:15:52 +08:00 |
| ver217 | 12b4887097 | [hotfix] fix CPUAdam kernel nullptr (#1410) | 2022-08-05 19:45:45 +08:00 |
| binmakeswell | 7696cead8d | Recover kernel files | 2022-07-13 12:08:21 +08:00 |
| Maruyama_Aya | 87f679aeae | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/include/kernels.h` code style (#1291) | 2022-07-13 12:08:21 +08:00 |
| doubleHU | d6f5ef8860 | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu` code style (#1286) | 2022-07-13 12:08:21 +08:00 |
| yuxuan-lou | 5f6ab35d25 | Hotfix/format (#1274): [NFC] polish `colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu` (#937), `colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h`, and `colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.cpp` code style (Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>) | 2022-07-13 12:08:21 +08:00 |
| binmakeswell | c95e18cdb9 | [NFC] polish `colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.h` code style (#1270) | 2022-07-13 12:08:21 +08:00 |
| DouJS | db13f96333 | [NFC] polish `colossalai/kernel/cuda_native/csrc/multi_tensor_apply.cuh` code style (#1264) | 2022-07-13 12:08:21 +08:00 |
| shenggan | 5d7366b144 | [NFC] polish `colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.h` code style (#1263) | 2022-07-13 12:08:21 +08:00 |
| ziyu huang | f1cafcc73a | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu` code style (#1261) (Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>) | 2022-07-13 12:08:21 +08:00 |
| Sze-qq | f8b9aaef47 | [NFC] polish `colossalai/kernel/cuda_native/csrc/type_shim.h` code style (#1260) | 2022-07-13 12:08:21 +08:00 |
| ver217 | e4f555f29a | [optim] refactor fused sgd (#1134) | 2022-06-20 11:19:38 +08:00 |
| zhengzangw | ae7c338105 | [NFC] polish `colossalai/kernel/cuda_native/csrc/colossal_C_frontend.cpp` code style | 2022-05-20 23:57:38 +08:00 |
| Frank Lee | 533d0c46d8 | [kernel] fixed the include bug in dropout kernel (#999) | 2022-05-18 21:43:18 +08:00 |
| puck_WCR | bda70b4b66 | [NFC] polish `colossalai/kernel/cuda_native/layer_norm.py` code style (#980) | 2022-05-17 10:25:06 +08:00 |
| Kai Wang (Victor Kai) | c50c08dcbb | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu` code style (#979) | 2022-05-17 10:25:06 +08:00 |
| binmakeswell | f28c021376 | [NFC] polish `colossalai/kernel/cuda_native/csrc/multi_tensor_sgd_kernel.cu` code style (#978) | 2022-05-17 10:25:06 +08:00 |
| Jie Zhu | b67eebd20f | [NFC] polish `colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu` code style (#977) | 2022-05-17 10:25:06 +08:00 |
| DouJS | 52705ec5c5 | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/normalize_kernels.cu` code style (#974) | 2022-05-17 10:25:06 +08:00 |
| Ofey Chan | 136946422b | [NFC] polish `colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp` code style (#973) | 2022-05-17 10:25:06 +08:00 |
| Xu Kai | 632e94abde | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/include/dropout.h` code style (#970) | 2022-05-17 10:25:06 +08:00 |
| ExtremeViscent | 22d1df224d | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/include/feed_forward.h` code style (#968) | 2022-05-17 10:25:06 +08:00 |
| Yuer867 | 7106a399fc | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/include/softmax.h` code style (#964) | 2022-05-17 10:25:06 +08:00 |
| ziyu huang | 5bd80b7dd1 | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/general_kernels.cu` code style (#963) (Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>) | 2022-05-17 10:25:06 +08:00 |
| superhao1995 | 48c4a180c7 | [NFC] polish `colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp` code style (#959) | 2022-05-17 10:25:06 +08:00 |
| MaxT | 442a2975ab | [NFC] polish `colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h` code style (#962) | 2022-05-17 10:25:06 +08:00 |
| runluo | 89e2767a92 | [NFC] polish `colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu` code style (#958) | 2022-05-17 10:25:06 +08:00 |
| doubleHU | 1dc1b6fa00 | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/include/cross_entropy_layer.h` code style (#957) | 2022-05-17 10:25:06 +08:00 |
| RichardoLuo | 0e922da874 | [NFC] polish `colossalai/kernel/cuda_native/csrc/kernels/include/context.h` code style (#956) (Co-authored-by: RichardoLuo <14049555596@qq.com>) | 2022-05-17 10:25:06 +08:00 |
| Wangbo Zhao(黑色枷锁) | 8ca2a85682 | [NFC] polish `colossalai/kernel/cuda_native/scaled_softmax.py` code style (#955) | 2022-05-17 10:25:06 +08:00 |