Hongxin Liu
ae02d4e4f7
[bf16] add bf16 support ( #3882 )
* [bf16] add bf16 support for fused adam (#3844 )
* [bf16] fused adam kernel support bf16
* [test] update fused adam kernel test
* [test] update fused adam test
* [bf16] cpu adam and hybrid adam optimizers support bf16 (#3860 )
* [bf16] implement mixed precision mixin and add bf16 support for low level zero (#3869 )
* [bf16] add mixed precision mixin
* [bf16] low level zero optim support bf16
* [test] update low level zero test
* [test] fix low level zero grad acc test
* [bf16] add bf16 support for gemini (#3872 )
* [bf16] gemini support bf16
* [test] update gemini bf16 test
* [doc] update gemini docstring
* [bf16] add bf16 support for plugins (#3877 )
* [bf16] add bf16 support for legacy zero (#3879 )
* [zero] init context support bf16
* [zero] legacy zero support bf16
* [test] add zero bf16 test
* [doc] add bf16 related docstring for legacy zero
2023-06-05 15:58:31 +08:00
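The bf16 commits above hinge on one idea: run the model in bf16 but keep fp32 master weights in the optimizer, because bf16's 8-bit mantissa silently drops small updates. A self-contained illustration in plain Python (not the ColossalAI API; bf16 rounding is emulated via the float32 bit pattern):

```python
import struct

def to_bf16(x: float) -> float:
    """Round a float to bfloat16 precision: round-to-nearest-even
    on the top 16 bits of the float32 bit pattern."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits += 0x7FFF + ((bits >> 16) & 1)   # round to nearest even
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

# Naive bf16 update: a 1e-4 step is below bf16 resolution near 1.0,
# so every update rounds back to 1.0 and the weight never moves.
w_bf16 = to_bf16(1.0)
for _ in range(100):
    w_bf16 = to_bf16(w_bf16 - 1e-4)

# Master-weight update: accumulate in fp32, cast to bf16 only when needed.
w_fp32 = 1.0
for _ in range(100):
    w_fp32 -= 1e-4
w_cast = to_bf16(w_fp32)

print(w_bf16)   # 1.0 -- all 100 updates were lost
print(w_cast)   # 0.98828125 -- close to the true 0.99
```

This is why the bf16 work above spans the optimizers (fused/CPU/hybrid Adam) and the ZeRO wrappers: the fp32 master copy lives in the optimizer state.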
ver217
823f3b9cf4
[doc] add deepspeed citation and copyright ( #2996 )
* [doc] add deepspeed citation and copyright
* [doc] add deepspeed citation and copyright
* [doc] add deepspeed citation and copyright
2023-03-04 20:08:11 +08:00
ver217
090f14fd6b
[misc] add reference ( #2930 )
* [misc] add reference
* [misc] add license
2023-02-28 18:07:24 +08:00
xcnick
85178a397a
[hotfix] fix error for torch 2.0 ( #2243 )
2022-12-30 23:11:55 +08:00
HELSON
e7d3afc9cc
[optimizer] add div_scale for optimizers ( #2117 )
* [optimizer] add div_scale for optimizers
* [zero] use div_scale in zero optimizer
* fix testing error
2022-12-12 17:58:57 +08:00
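The div_scale parameter in the entry above lets the fused optimizer divide gradients by the loss scale inside its own update, saving a separate unscaling pass over all gradients. A minimal scalar sketch of the idea (simplified Adam without bias correction; names are illustrative, not the ColossalAI kernel API):

```python
def adam_step(params, grads, m, v, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, div_scale=1.0):
    """One simplified Adam step. div_scale folds gradient unscaling
    (grad / loss_scale) into the step itself; bias correction omitted."""
    for i, g in enumerate(grads):
        g = g / div_scale          # unscale the gradient in-place
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g * g
        params[i] -= lr * m[i] / (v[i] ** 0.5 + eps)
    return params

# Unscaling inside the step matches pre-divided gradients exactly.
a = adam_step([1.0], [2.0], [0.0], [0.0], div_scale=2.0)
b = adam_step([1.0], [1.0], [0.0], [0.0], div_scale=1.0)
print(a[0] == b[0])  # True
```

Fusing the division into the optimizer kernel avoids one extra read-modify-write over every gradient tensor, which is why the follow-up commit wires div_scale into the zero optimizer as well.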
xcnick
e0da01ea71
[hotfix] fix build error when torch version >= 1.13 ( #1803 )
2022-11-08 09:40:24 +08:00
ver217
12b4887097
[hotfix] fix CPUAdam kernel nullptr ( #1410 )
2022-08-05 19:45:45 +08:00
binmakeswell
7696cead8d
Recover kernel files
2022-07-13 12:08:21 +08:00
Maruyama_Aya
87f679aeae
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/kernels.h code style ( #1291 )
2022-07-13 12:08:21 +08:00
doubleHU
d6f5ef8860
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu code style ( #1286 )
2022-07-13 12:08:21 +08:00
yuxuan-lou
5f6ab35d25
Hotfix/format ( #1274 )
* [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style
* [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.cpp code style
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
2022-07-13 12:08:21 +08:00
binmakeswell
c95e18cdb9
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.h code style ( #1270 )
2022-07-13 12:08:21 +08:00
DouJS
db13f96333
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_apply.cuh code style ( #1264 )
2022-07-13 12:08:21 +08:00
shenggan
5d7366b144
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax.h code style ( #1263 )
2022-07-13 12:08:21 +08:00
ziyu huang
f1cafcc73a
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style ( #1261 )
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
2022-07-13 12:08:21 +08:00
Sze-qq
f8b9aaef47
[NFC] polish colossalai/kernel/cuda_native/csrc/type_shim.h code style ( #1260 )
2022-07-13 12:08:21 +08:00
ver217
e4f555f29a
[optim] refactor fused sgd ( #1134 )
2022-06-20 11:19:38 +08:00
zhengzangw
ae7c338105
[NFC] polish colossalai/kernel/cuda_native/csrc/colossal_C_frontend.cpp code style
2022-05-20 23:57:38 +08:00
Frank Lee
533d0c46d8
[kernel] fixed the include bug in dropout kernel ( #999 )
2022-05-18 21:43:18 +08:00
Kai Wang (Victor Kai)
c50c08dcbb
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style ( #979 )
2022-05-17 10:25:06 +08:00
binmakeswell
f28c021376
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_sgd_kernel.cu code style ( #978 )
2022-05-17 10:25:06 +08:00
Jie Zhu
b67eebd20f
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu code style ( #977 )
2022-05-17 10:25:06 +08:00
DouJS
52705ec5c5
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/normalize_kernels.cu code style ( #974 )
2022-05-17 10:25:06 +08:00
Ofey Chan
136946422b
[NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp code style ( #973 )
2022-05-17 10:25:06 +08:00
Xu Kai
632e94abde
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/dropout.h code style ( #970 )
2022-05-17 10:25:06 +08:00
ExtremeViscent
22d1df224d
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/feed_forward.h code style ( #968 )
2022-05-17 10:25:06 +08:00
Yuer867
7106a399fc
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/softmax.h code style ( #964 )
2022-05-17 10:25:06 +08:00
ziyu huang
5bd80b7dd1
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/general_kernels.cu code style ( #963 )
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
2022-05-17 10:25:06 +08:00
superhao1995
48c4a180c7
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style ( #959 )
2022-05-17 10:25:06 +08:00
MaxT
442a2975ab
[NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h code style ( #962 )
2022-05-17 10:25:06 +08:00
runluo
89e2767a92
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style ( #958 )
2022-05-17 10:25:06 +08:00
doubleHU
1dc1b6fa00
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cross_entropy_layer.h code style ( #957 )
2022-05-17 10:25:06 +08:00
RichardoLuo
0e922da874
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/context.h code style ( #956 )
Co-authored-by: RichardoLuo <14049555596@qq.com>
2022-05-17 10:25:06 +08:00
Luxios22
f6970ef8b1
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/softmax_kernels.cu code style ( #954 )
2022-05-17 10:25:06 +08:00
Cautiousss
0b86a6345e
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cross_entropy.cu code style ( #953 )
Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
2022-05-17 10:25:06 +08:00
Sze-qq
d8d07b0e2b
[NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.cpp code style ( #952 )
2022-05-17 10:25:06 +08:00
JT.Han
c3e423c8be
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style ( #949 )
Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
2022-05-17 10:25:06 +08:00
bajiaoyu517
eb9a81d72a
[NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.h code style ( #945 )
2022-05-17 10:25:06 +08:00
wky
8ffdc38376
[NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style ( #942 )
2022-05-17 10:25:06 +08:00
HaoyuQin
c0f373db5d
[NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax_cuda.cu code style ( #943 )
2022-05-17 10:25:06 +08:00
XYE
5bbefeb06a
[NFC] polish moe_cuda_kernel.cu code style ( #940 )
Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
2022-05-17 10:25:06 +08:00
Maruyama_Aya
7aa35eae6a
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/block_reduce.h code style ( #938 )
2022-05-17 10:25:06 +08:00
Geng Zhang
b6cc9313ef
[NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style ( #936 )
2022-05-17 10:25:06 +08:00
yuxuan-lou
44b6f8947b
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style ( #939 )
2022-05-17 10:25:06 +08:00
BoxiangW
872aa413c2
[NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. ( #937 )
2022-05-17 10:25:06 +08:00
ver217
58580b50fe
Revert "[NFC] Hotfix/format ( #984 )" ( #986 )
This reverts commit 0772828fba.
2022-05-17 10:23:38 +08:00
binmakeswell
0772828fba
[NFC] Hotfix/format ( #984 )
* [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style (#939 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#936 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/block_reduce.h code style (#938 )
* [NFC] polish moe_cuda_kernel.cu code style (#940 )
Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
* [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax_cuda.cu code style (#943 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#942 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.h code style (#945 )
* [NFC] polish colossalai/kernel/jit/bias_gelu.py code style (#946 )
Co-authored-by: jnbai <897086360@qq.com>
* [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style (#949 )
Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
* [NFC] polish colossalai/builder/pipeline.py code style (#951 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.cpp code style (#952 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cross_entropy.cu code style (#953 )
Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/softmax_kernels.cu code style (#954 )
* [NFC] polish colossalai/kernel/cuda_native/scaled_softmax.py code style (#955 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/context.h code style (#956 )
Co-authored-by: RichardoLuo <14049555596@qq.com>
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cross_entropy_layer.h code style (#957 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#958 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h code style (#962 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style (#959 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/general_kernels.cu code style (#963 )
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/softmax.h code style (#964 )
* [NFC] polish __init__.py code style (#965 )
* [NFC] polish colossalai/nn/layer/parallel_3d/layers.py code style (#966 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/feed_forward.h code style (#968 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/dropout.h code style (#970 )
* [NFC] polish colossalai/nn/layer/parallel_2p5d/layers.py code style (#972 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp code style (#973 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/normalize_kernels.cu code style (#974 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu code style (#977 )
* [NFC] polish colossalai/nn/layer/parallel_2d/layers.py code style (#976 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_sgd_kernel.cu code style (#978 )
* [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style (#979 )
* [NFC] polish colossalai/kernel/cuda_native/layer_norm.py code style (#980 )
* [NFC] polish colossalai/nn/layer/utils/common.py code style (#983 )
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
Co-authored-by: yuxuan-lou <83441848+yuxuan-lou@users.noreply.github.com>
Co-authored-by: Geng Zhang <34452939+zxgx@users.noreply.github.com>
Co-authored-by: Maruyama_Aya <38985202+MaruyamaAya@users.noreply.github.com>
Co-authored-by: XYE <92607131+Itok2000u@users.noreply.github.com>
Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
Co-authored-by: HaoyuQin <79465534+coder-chin@users.noreply.github.com>
Co-authored-by: wky <64853922+wangkuangyi@users.noreply.github.com>
Co-authored-by: bajiaoyu517 <59548007+bajiaoyu517@users.noreply.github.com>
Co-authored-by: luoling-LC <105470086+luoling-LC@users.noreply.github.com>
Co-authored-by: jnbai <897086360@qq.com>
Co-authored-by: JT.Han <59948448+JThh@users.noreply.github.com>
Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
Co-authored-by: xyupeng <99191637+xyupeng@users.noreply.github.com>
Co-authored-by: Sze-qq <68757353+Sze-qq@users.noreply.github.com>
Co-authored-by: Cautiousss <48676630+Cautiousss@users.noreply.github.com>
Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
Co-authored-by: Luxios22 <67457897+Luxios22@users.noreply.github.com>
Co-authored-by: Wangbo Zhao(黑色枷锁) <56866854+wangbo-zhao@users.noreply.github.com>
Co-authored-by: RichardoLuo <50363844+RichardoLuo@users.noreply.github.com>
Co-authored-by: RichardoLuo <14049555596@qq.com>
Co-authored-by: doubleHU <98150031+huxin711@users.noreply.github.com>
Co-authored-by: runluo <68489000+run-qiao@users.noreply.github.com>
Co-authored-by: MaxT <854721132@qq.com>
Co-authored-by: superhao1995 <804673818@qq.com>
Co-authored-by: ziyu huang <huang0ziyu@gmail.com>
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
Co-authored-by: Yuer867 <62204893+Yuer867@users.noreply.github.com>
Co-authored-by: lucasliunju <lucasliunju@gmail.com>
Co-authored-by: LuGY <74758262+Gy-Lu@users.noreply.github.com>
Co-authored-by: ExtremeViscent <zhangyiqi55732@sina.com>
Co-authored-by: Xu Kai <xukai16@foxmail.com>
Co-authored-by: Zirui Zhu <zhuzr21@gmail.com>
Co-authored-by: Ofey Chan <ofey206@gmail.com>
Co-authored-by: DouJS <dujiangsu@163.com>
Co-authored-by: Jie Zhu <chore.08-protist@icloud.com>
Co-authored-by: shenggan <csg19971016@gmail.com>
Co-authored-by: Kai Wang (Victor Kai) <37533040+kaiwang960112@users.noreply.github.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: Ziheng Qin <37519855+henryqin1997@users.noreply.github.com>
2022-05-17 09:54:49 +08:00
Jiarui Fang
e761ad2cd7
Revert "[zero] add ZeroTensorShardStrategy ( #793 )" ( #806 )
2022-04-19 14:40:02 +08:00
HELSON
88759e289e
[zero] add ZeroTensorShardStrategy ( #793 )
2022-04-19 14:32:45 +08:00
encmps
79ccfa4310
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_adam.cu code style ( #667 )
2022-04-06 11:40:59 +08:00