Geng Zhang | b6cc9313ef | [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#936) | 2022-05-17 10:25:06 +08:00

yuxuan-lou | 44b6f8947b | [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style (#939) | 2022-05-17 10:25:06 +08:00

BoxiangW | 872aa413c2 | [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937) | 2022-05-17 10:25:06 +08:00

ver217 | 58580b50fe | Revert "[NFC] Hotfix/format (#984)" (#986) | 2022-05-17 10:23:38 +08:00
    This reverts commit 0772828fba.
binmakeswell | 0772828fba | [NFC] Hotfix/format (#984) | 2022-05-17 09:54:49 +08:00
    * [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style (#939)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#936)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/block_reduce.h code style (#938)
    * [NFC] polish moe_cuda_kernel.cu code style (#940)
      Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
    * [NFC] polish pre-commit run --files colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax_cuda.cu code style (#943)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#942)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.h code style (#945)
    * [NFC] polish colossalai/kernel/jit/bias_gelu.py code style (#946)
      Co-authored-by: jnbai <897086360@qq.com>
    * [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style (#949)
      Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
    * [NFC] polish colossalai/builder/pipeline.py code style (#951)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.cpp code style (#952)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cross_entropy.cu code style (#953)
      Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/softmax_kernels.cu code style (#954)
    * [NFC] polish colossalai/kernel/cuda_native/scaled_softmax.py code style (#955)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/context.h code style (#956)
      Co-authored-by: RichardoLuo <14049555596@qq.com>
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cross_entropy_layer.h code style (#957)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#958)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h code style (#962)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style (#959)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/general_kernels.cu code style (#963)
      Co-authored-by: “Arsmart123 <202476410arsmart@gmail.com>
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/softmax.h code style (#964)
    * [NFC] polish __init__.py code style (#965)
    * [NFC] polish colossalai/nn/layer/parallel_3d/layers.py code style (#966)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/feed_forward.h code style (#968)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/dropout.h code style (#970)
    * [NFC] polish colossalai/nn/layer/parallel_2p5d/layers.py code style (#972)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp code style (#973)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/normalize_kernels.cu code style (#974)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu code style (#977)
    * [NFC] polish colossalai/nn/layer/parallel_2d/layers.py code style (#976)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_sgd_kernel.cu code style (#978)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style (#979)
    * [NFC] polish colossalai/kernel/cuda_native/layer_norm.py code style (#980)
    * [NFC] polish colossalai/nn/layer/utils/common.py code style (#983)
    Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
    Co-authored-by: yuxuan-lou <83441848+yuxuan-lou@users.noreply.github.com>
    Co-authored-by: Geng Zhang <34452939+zxgx@users.noreply.github.com>
    Co-authored-by: Maruyama_Aya <38985202+MaruyamaAya@users.noreply.github.com>
    Co-authored-by: XYE <92607131+Itok2000u@users.noreply.github.com>
    Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
    Co-authored-by: HaoyuQin <79465534+coder-chin@users.noreply.github.com>
    Co-authored-by: wky <64853922+wangkuangyi@users.noreply.github.com>
    Co-authored-by: bajiaoyu517 <59548007+bajiaoyu517@users.noreply.github.com>
    Co-authored-by: luoling-LC <105470086+luoling-LC@users.noreply.github.com>
    Co-authored-by: jnbai <897086360@qq.com>
    Co-authored-by: JT.Han <59948448+JThh@users.noreply.github.com>
    Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
    Co-authored-by: xyupeng <99191637+xyupeng@users.noreply.github.com>
    Co-authored-by: Sze-qq <68757353+Sze-qq@users.noreply.github.com>
    Co-authored-by: Cautiousss <48676630+Cautiousss@users.noreply.github.com>
    Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
    Co-authored-by: Luxios22 <67457897+Luxios22@users.noreply.github.com>
    Co-authored-by: Wangbo Zhao(黑色枷锁) <56866854+wangbo-zhao@users.noreply.github.com>
    Co-authored-by: RichardoLuo <50363844+RichardoLuo@users.noreply.github.com>
    Co-authored-by: RichardoLuo <14049555596@qq.com>
    Co-authored-by: doubleHU <98150031+huxin711@users.noreply.github.com>
    Co-authored-by: runluo <68489000+run-qiao@users.noreply.github.com>
    Co-authored-by: MaxT <854721132@qq.com>
    Co-authored-by: superhao1995 <804673818@qq.com>
    Co-authored-by: ziyu huang <huang0ziyu@gmail.com>
    Co-authored-by: “Arsmart123 <202476410arsmart@gmail.com>
    Co-authored-by: Yuer867 <62204893+Yuer867@users.noreply.github.com>
    Co-authored-by: lucasliunju <lucasliunju@gmail.com>
    Co-authored-by: LuGY <74758262+Gy-Lu@users.noreply.github.com>
    Co-authored-by: ExtremeViscent <zhangyiqi55732@sina.com>
    Co-authored-by: Xu Kai <xukai16@foxmail.com>
    Co-authored-by: Zirui Zhu <zhuzr21@gmail.com>
    Co-authored-by: Ofey Chan <ofey206@gmail.com>
    Co-authored-by: DouJS <dujiangsu@163.com>
    Co-authored-by: Jie Zhu <chore.08-protist@icloud.com>
    Co-authored-by: shenggan <csg19971016@gmail.com>
    Co-authored-by: Kai Wang (Victor Kai) <37533040+kaiwang960112@users.noreply.github.com>
    Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
    Co-authored-by: Ziheng Qin <37519855+henryqin1997@users.noreply.github.com>
ver217 | 5898ccf38b | update version (#982) | 2022-05-17 09:48:14 +08:00

binmakeswell | 7471f97fc3 | update results on a single GPU, highlight quick view (#981) | 2022-05-16 21:14:35 +08:00

ver217 | c2fdc6a011 | [tensor] derive compute pattern from dist spec (#971) | 2022-05-16 14:58:08 +08:00
    * derive compute pattern from dist spec
    * polish code

github-actions[bot] | 46bc95708f | Automated submodule synchronization (#960) | 2022-05-14 21:55:34 +08:00
    Co-authored-by: github-actions <github-actions@github.com>

Ziyue Jiang | 797a9dc5a9 | add DistSpec for loss and test_model (#947) | 2022-05-13 20:29:50 +08:00
ver217 | 67c33f57eb | [tensor] design DistSpec and DistSpecManager for ColoTensor (#934) | 2022-05-13 15:13:52 +08:00
    * add dist spec
    * update linear op
    * polish code
    * polish code
    * update embedding op
    * polish unit tests
    * polish unit tests
    * polish comments
    * polish code
    * add test_dist_spec_mgr
    * polish code
    * refactor folder structure
    * polish unit tests
    * add get_process_group() for TensorSpec
    * polish code

Ziyue Jiang | 830d3bca26 | [Tensor] add optimizer to bert test (#933) | 2022-05-13 11:37:23 +08:00
    * add optimizer to bert test
    * polish

github-actions[bot] | 7edb38193a | Automated submodule synchronization (#932) | 2022-05-13 10:22:51 +08:00
    Co-authored-by: github-actions <github-actions@github.com>

Ziyue Jiang | d73c2b1d79 | [Tensor] fix init context (#931) | 2022-05-11 15:48:12 +08:00
    * change torch.Parameter to ColoParameter
    * fix post assignment for init context
    * polish
    * polish

Ziyue Jiang | dfc88b85ea | [Tensor] simplify named param (#928) | 2022-05-11 10:54:19 +08:00
    * simplify ColoModulize
    * simplify ColoModulize
    * polish
    * polish

YuliangLiu0306 | 32a45cd7ef | [pipelinable] use pipelinable to support GPT model. (#903) | 2022-05-11 09:23:58 +08:00
    * [CLI] add CLI launcher
    * Revert "[CLI] add CLI launcher"
      This reverts commit df7e6506d4.
    * [pipelinable] use pipelinable to support GPT model.
    * fix a bug caused by ShardedModel
    * polish
    * fix front func list
github-actions[bot] | b61d64685f | Automated submodule synchronization (#929) | 2022-05-11 09:13:06 +08:00
    Co-authored-by: github-actions <github-actions@github.com>

ver217 | 4ca732349e | [tensor] colo tensor overrides mul (#927) | 2022-05-10 16:04:08 +08:00
    * colo tensor overrides mul
    * polish code

ver217 | 45b9124df4 | [tensor] hijack addmm for colo tensor (#923) | 2022-05-09 18:55:49 +08:00
    * hijack addmm for colo tensor
    * fix bugs
    * polish unit test
    * polish comments

Jiarui Fang | 534afb018a | test pretrain loading on multi-process (#922) | 2022-05-09 17:07:35 +08:00

Ziyue Jiang | c195d2814c | [Tensor] add from_pretrained support and bert pretrained test (#921) | 2022-05-09 16:11:47 +08:00
    * add from_pretrained support and test
    * polish
    * polish
    * polish
    * polish

ver217 | 1d625fcd36 | [setup] support more cuda architectures (#920) | 2022-05-09 10:56:45 +08:00
    * support more cuda archs
    * polish code

ver217 | 5d8f1262fb | update cuda ext cc flags (#919) | 2022-05-07 18:01:04 +08:00

Jiarui Fang | 845856ea29 | [Graph] building computing graph with ColoTensor, Linear only (#917) | 2022-05-07 17:10:37 +08:00
Ziyue Jiang | 75d221918a | [Tensor] add 1d vocab loss (#918) | 2022-05-07 15:49:14 +08:00
    * add 1d vocab loss
    * polish

Ziyue Jiang | dfaff4e243 | [Tensor] fix test_model (#916) | 2022-05-06 18:06:22 +08:00
    * polish test_model
    * polish

Jiarui Fang | ed6426c300 | [Tensor] polish model test (#915) | 2022-05-06 17:07:56 +08:00

Ziyue Jiang | 0fab86b12a | [Tensor] add a basic bert. (#911) | 2022-05-06 15:03:43 +08:00
    * add base bert test
    * Add bert test
    * polish
    * remove test_bert
    * polish

Jiarui Fang | ab95ec9aea | [Tensor] init ColoParameter (#914) | 2022-05-06 12:57:14 +08:00

Ziyue Jiang | 193d629311 | update pytest.mark.parametrize in tensor tests (#913) | 2022-05-06 11:16:40 +08:00

github-actions[bot] | 1cf7fb3cd9 | Automated submodule synchronization (#912) | 2022-05-06 10:10:56 +08:00
    Co-authored-by: github-actions <github-actions@github.com>
Frank Lee | f0f35216f1 | [ci] added wheel build scripts (#910) | 2022-05-05 16:06:39 +08:00
    * [ci] added wheel build scripts
    * polish code and workflow
    * polish code and workflow
    * polish code and workflow
    * polish code and workflow
    * polish code and workflow
    * polish code and workflow
    * polish code and workflow
    * polish code and workflow
    * polish code and workflow
    * polish code and workflow
    * polish code and workflow
    * [ci] polish wheel build scripts

ver217 | 150b1a7453 | update local version format (#909) | 2022-05-05 14:59:12 +08:00

github-actions[bot] | 3b1f5f07ce | Automated submodule synchronization (#907) | 2022-05-03 13:14:48 +08:00
    Co-authored-by: github-actions <github-actions@github.com>

Ziyue Jiang | f593a5637e | [Tensor] add embedding tp1d row (#904) | 2022-04-29 14:10:05 +08:00

ver217 | 16122d5fac | update release bdist CI (#902) | 2022-04-28 17:52:57 +08:00

Ziyue Jiang | 2c0d19d755 | [Tensor] add ColoTensor TP1Dcol Embedding (#899) | 2022-04-28 17:45:06 +08:00

ver217 | e46e423c00 | add CI for releasing bdist wheel (#901) | 2022-04-28 17:40:53 +08:00

Jiarui Fang | e1108caf7d | change version to 0.1.4 (#900) | 2022-04-28 15:51:25 +08:00
Jiarui Fang | d16671da75 | [Tensor] initialize the ColoOptimizer (#898) | 2022-04-28 15:23:40 +08:00
    * [Tensor] activation is an attr of ColoTensor
    * [Tensor] add optimizer
    * only detach parameters in context
    * polish code

Jiarui Fang | 676f191532 | [Tensor] activation is an attr of ColoTensor (#897) | 2022-04-28 14:43:22 +08:00

Jiarui Fang | e76f76c08b | [Tensor] test parameters() as member function (#896) | 2022-04-28 10:57:14 +08:00

Ziyue Jiang | cb182da7c5 | [tensor] refine linear and add gather for layernorm (#893) | 2022-04-28 10:55:40 +08:00
    * refine linear and add function to ColoTensor
    * add gather for layernorm
    * polish
    * polish

Jiarui Fang | 26c49639d8 | [Tensor] overriding parameters() for Module using ColoTensor (#889) | 2022-04-27 15:28:59 +08:00

ver217 | daf59ff72e | [setup] add local version label (#890) | 2022-04-27 15:26:12 +08:00

Ziyue Jiang | 1d0aba4153 | [tensor] add ColoTensor 1Dcol (#888) | 2022-04-27 14:13:55 +08:00

Jiarui Fang | a0e5971692 | [Tensor] test model check results for a simple net (#887) | 2022-04-27 12:00:18 +08:00

Jiarui Fang | 72cdc06875 | [Tensor] make ColoTensor more robust for getattr (#886) | 2022-04-27 10:57:49 +08:00
    * [Tensor] make ColoTensor more robust for getattr
    * polish
    * polish

Ziyue Jiang | 9bc5a77c31 | [tensor] wrap function in the torch_tensor to ColoTensor (#881) | 2022-04-26 20:13:56 +08:00

ver217 | 4df6471f5d | fix import error (#880) | 2022-04-26 19:28:40 +08:00