ver217 | 9492a561c3 | [tensor] ColoTensor supports ZeRO (#1015) | 3 years ago
    * impl chunk manager
    * impl param op hook
    * add reduce_chunk
    * add zero hook v2
    * add zero dp
    * fix TensorInfo
    * impl load balancing when using zero without chunk
    * fix zero hook
    * polish chunk
    * fix bugs
    * ddp ok
    * zero ok
    * polish code
    * fix bugs about load balancing
    * polish code
    * polish code
    * add end-to-end test
    * polish code
    * polish code
    * polish code
    * fix typo
    * add test_chunk
    * fix bugs
    * fix bugs
    * polish code
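The chunk manager introduced in this commit packs parameters into fixed-size chunks so that gradients can be reduced a whole chunk at a time instead of per tensor. The following is a minimal, hypothetical sketch of that packing and readiness-tracking idea in plain Python; the class and method names are illustrative and are not Colossal-AI's actual API.

```python
# Hypothetical sketch: parameters are greedily packed into fixed-capacity
# chunks, and a chunk becomes reducible only once every parameter inside it
# has produced its gradient (the reduce_chunk trigger).

class Chunk:
    def __init__(self, capacity):
        self.capacity = capacity      # max number of elements per chunk
        self.used = 0                 # elements already packed
        self.params = []              # parameter names packed into this chunk
        self.ready = set()            # params whose gradient has arrived

    def can_fit(self, numel):
        return self.used + numel <= self.capacity

    def append(self, name, numel):
        self.params.append(name)
        self.used += numel

    def mark_ready(self, name):
        # True when every parameter in the chunk has its gradient
        self.ready.add(name)
        return len(self.ready) == len(self.params)

class ChunkManager:
    def __init__(self, chunk_capacity):
        self.chunk_capacity = chunk_capacity
        self.chunks = []
        self.param_to_chunk = {}

    def register(self, name, numel):
        # greedy packing: reuse the last chunk only if the parameter fits
        if not self.chunks or not self.chunks[-1].can_fit(numel):
            self.chunks.append(Chunk(self.chunk_capacity))
        self.chunks[-1].append(name, numel)
        self.param_to_chunk[name] = self.chunks[-1]

    def on_grad_ready(self, name):
        # returns the chunk to reduce once all its grads are in, else None
        chunk = self.param_to_chunk[name]
        return chunk if chunk.mark_ready(name) else None
```

With a capacity of 8 elements, registering parameters of 4, 4, and 6 elements produces two chunks, and the first chunk only becomes reducible after both of its parameters report gradients.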
ver217 | cefc29ff06 | [tensor] impl ColoDDP for ColoTensor (#1009) | 3 years ago
    * impl ColoDDP for ColoTensor
    * polish code
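Conceptually, a DDP wrapper like the ColoDDP in this commit averages each parameter's gradient across data-parallel ranks after backward. The toy sketch below simulates that all-reduce over plain Python lists; the function names are illustrative, not the real ColoDDP interface.

```python
# Stand-in for an all-reduce: average one parameter's gradient across ranks
# and hand every rank the same averaged result.

def allreduce_mean(grads_per_rank):
    """grads_per_rank: one gradient (flat list) per data-parallel rank."""
    world_size = len(grads_per_rank)
    n = len(grads_per_rank[0])
    mean = [sum(g[i] for g in grads_per_rank) / world_size for i in range(n)]
    return [mean[:] for _ in range(world_size)]  # identical copy per rank

def ddp_backward(rank_grads):
    """rank_grads: {param_name: [grad_on_rank0, grad_on_rank1, ...]}."""
    return {name: allreduce_mean(grads) for name, grads in rank_grads.items()}
```

After this step every rank holds identical gradients, so each rank's optimizer step keeps the replicas in sync.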
Ziheng Qin | 571f12eff3 | [NFC] polish colossalai/nn/layer/utils/common.py code style (#983) | 3 years ago
shenggan | 18542b47fc | [NFC] polish colossalai/nn/layer/parallel_2d/layers.py code style (#976) | 3 years ago
Zirui Zhu | 598cde4a0f | [NFC] polish colossalai/nn/layer/parallel_2p5d/layers.py code style (#972) | 3 years ago
LuGY | fb5bc6cb28 | [NFC] polish colossalai/nn/layer/parallel_3d/layers.py code style (#966) | 3 years ago
ver217 | 58580b50fe | Revert "[NFC] Hotfix/format (#984)" (#986) | 3 years ago
    This reverts commit 0772828fba.
binmakeswell | 0772828fba | [NFC] Hotfix/format (#984) | 3 years ago
    * [NFC] Polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style. (#937)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style (#939)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#936)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/block_reduce.h code style (#938)
    * [NFC] polish moe_cuda_kernel.cu code style (#940)
      Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
    * [NFC] polish pre-commit run --files colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax_cuda.cu code style (#943)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#942)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.h code style (#945)
    * [NFC] polish colossalai/kernel/jit/bias_gelu.py code style (#946)
      Co-authored-by: jnbai <897086360@qq.com>
    * [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_masked_softmax_cuda.cu code style (#949)
      Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
    * [NFC] polish colossalai/builder/pipeline.py code style (#951)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.cpp code style (#952)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cross_entropy.cu code style (#953)
      Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/softmax_kernels.cu code style (#954)
    * [NFC] polish colossalai/kernel/cuda_native/scaled_softmax.py code style (#955)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/context.h code style (#956)
      Co-authored-by: RichardoLuo <14049555596@qq.com>
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cross_entropy_layer.h code style (#957)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#958)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multihead_attention_1d.h code style (#962)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/scaled_upper_triang_masked_softmax.cpp code style (#959)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/general_kernels.cu code style (#963)
      Co-authored-by: “Arsmart123 <202476410arsmart@gmail.com>
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/softmax.h code style (#964)
    * [NFC] polish __init__.py code style (#965)
    * [NFC] polish colossalai/nn/layer/parallel_3d/layers.py code style (#966)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/feed_forward.h code style (#968)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/dropout.h code style (#970)
    * [NFC] polish colossalai/nn/layer/parallel_2p5d/layers.py code style (#972)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp code style (#973)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/normalize_kernels.cu code style (#974)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_scale_kernel.cu code style (#977)
    * [NFC] polish colossalai/nn/layer/parallel_2d/layers.py code style (#976)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_sgd_kernel.cu code style (#978)
    * [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu code style (#979)
    * [NFC] polish colossalai/kernel/cuda_native/layer_norm.py code style (#980)
    * [NFC] polish colossalai/nn/layer/utils/common.py code style (#983)
    Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
    Co-authored-by: yuxuan-lou <83441848+yuxuan-lou@users.noreply.github.com>
    Co-authored-by: Geng Zhang <34452939+zxgx@users.noreply.github.com>
    Co-authored-by: Maruyama_Aya <38985202+MaruyamaAya@users.noreply.github.com>
    Co-authored-by: XYE <92607131+Itok2000u@users.noreply.github.com>
    Co-authored-by: Xiao Ye <xiaoye2@illinois.edu>
    Co-authored-by: HaoyuQin <79465534+coder-chin@users.noreply.github.com>
    Co-authored-by: wky <64853922+wangkuangyi@users.noreply.github.com>
    Co-authored-by: bajiaoyu517 <59548007+bajiaoyu517@users.noreply.github.com>
    Co-authored-by: luoling-LC <105470086+luoling-LC@users.noreply.github.com>
    Co-authored-by: jnbai <897086360@qq.com>
    Co-authored-by: JT.Han <59948448+JThh@users.noreply.github.com>
    Co-authored-by: Jiatong <jiatong.han@u.nus.edu>
    Co-authored-by: xyupeng <99191637+xyupeng@users.noreply.github.com>
    Co-authored-by: Sze-qq <68757353+Sze-qq@users.noreply.github.com>
    Co-authored-by: Cautiousss <48676630+Cautiousss@users.noreply.github.com>
    Co-authored-by: 何晓昕 <cautious@hexiaoxins-MacBook-Pro.local>
    Co-authored-by: Luxios22 <67457897+Luxios22@users.noreply.github.com>
    Co-authored-by: Wangbo Zhao(黑色枷锁) <56866854+wangbo-zhao@users.noreply.github.com>
    Co-authored-by: RichardoLuo <50363844+RichardoLuo@users.noreply.github.com>
    Co-authored-by: RichardoLuo <14049555596@qq.com>
    Co-authored-by: doubleHU <98150031+huxin711@users.noreply.github.com>
    Co-authored-by: runluo <68489000+run-qiao@users.noreply.github.com>
    Co-authored-by: MaxT <854721132@qq.com>
    Co-authored-by: superhao1995 <804673818@qq.com>
    Co-authored-by: ziyu huang <huang0ziyu@gmail.com>
    Co-authored-by: “Arsmart123 <202476410arsmart@gmail.com>
    Co-authored-by: Yuer867 <62204893+Yuer867@users.noreply.github.com>
    Co-authored-by: lucasliunju <lucasliunju@gmail.com>
    Co-authored-by: LuGY <74758262+Gy-Lu@users.noreply.github.com>
    Co-authored-by: ExtremeViscent <zhangyiqi55732@sina.com>
    Co-authored-by: Xu Kai <xukai16@foxmail.com>
    Co-authored-by: Zirui Zhu <zhuzr21@gmail.com>
    Co-authored-by: Ofey Chan <ofey206@gmail.com>
    Co-authored-by: DouJS <dujiangsu@163.com>
    Co-authored-by: Jie Zhu <chore.08-protist@icloud.com>
    Co-authored-by: shenggan <csg19971016@gmail.com>
    Co-authored-by: Kai Wang (Victor Kai) <37533040+kaiwang960112@users.noreply.github.com>
    Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
    Co-authored-by: Ziheng Qin <37519855+henryqin1997@users.noreply.github.com>
HELSON | e5ea3fdeef | [gemini] add GeminiMemoryManager (#832) | 3 years ago
    * refactor StatefulTensor, tensor utilities
    * add unit test for GeminiMemoryManager
Ziyue Jiang | 4b01da24cd | [TP] change the check assert in split batch 2d (#772) | 3 years ago
アマデウス | b8899e0905 | [TP] allow layernorm without bias (#750) | 3 years ago
Frank Lee | eda30a058e | [compatibility] fixed tensor parallel compatibility with torch 1.9 (#700) | 3 years ago
HELSON | a9b8300d54 | [zero] improve adaptability for not-shard parameters (#708) | 3 years ago
    * adapt post grad hooks for not-shard parameters
    * adapt optimizer for not-shard parameters
    * offload gradients for not-replicated parameters
アマデウス | 3fc8a204dc | Corrected 3d vocab parallel embedding (#707) | 3 years ago
HELSON | b31daed4cf | fix bugs in CPU adam (#633) | 3 years ago
    * add cpu adam counter for all cpu adam
    * fixed updating error in adam kernel
Liang Bowen | 828e465622 | [hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622) | 3 years ago
アマデウス | 77ad24bf94 | [model checkpoint] updated saving/loading for 3d layers (#597) | 3 years ago
アマデウス | 93089ed708 | [model checkpoint] updated saving/loading for 2.5d layers (#596) | 3 years ago
アマデウス | c50bfb807b | [model checkpoint] updated saving/loading for 1d layers (#594) | 3 years ago
アマデウス | 7636d518e1 | [model checkpoint] updated saving/loading for 2d layers (#595) | 3 years ago
アマデウス | cd13b63832 | [model checkpoint] reworked unified layers for ease of save/load states (#593) | 3 years ago
Ziyue Jiang | 1c40ee8749 | [TP] add assert for tp1d (#621) | 3 years ago
ver217 | e619a651fb | polish optimizer docstring (#619) | 3 years ago
ver217 | 8432dc7080 | polish moe docstring (#618) | 3 years ago
ver217 | 104cbbb313 | [hotfix] add hybrid adam to __init__ (#584) | 3 years ago
HELSON | e6d50ec107 | [zero] adapt zero for unsharded parameters (#561) | 3 years ago
    * support existing sharded and unsharded parameters in zero
    * add unit test for moe-zero model init
    * polish moe gradient handler
Wesley | 46c9ba33da | update code format | 3 years ago
Wesley | 666cfd094a | fix parallel_input flag for Linear1D_Col gather_output | 3 years ago
Liang Bowen | 2c45efc398 | html refactor (#555) | 3 years ago
LuGY | c44d797072 | [docs] updated docs of hybrid adam and cpu adam (#552) | 3 years ago
Ziyue Jiang | 763dc325f1 | [TP] Add gather_out arg to Linear (#541) | 3 years ago
HELSON | 8c90d4df54 | [zero] add zero context manager to change config during initialization (#546) | 3 years ago
Liang Bowen | ec5086c49c | Refactored docstring to google style | 3 years ago
LuGY | 105c5301c3 | [zero] added hybrid adam, removed loss scale in adam (#527) | 3 years ago
    * [zero] added hybrid adam, removed loss scale of adam
    * remove useless code
LuGY | 6a3f9fda83 | [cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497) | 3 years ago
Jiarui Fang | a445e118cf | [polish] polish singleton and global context (#500) | 3 years ago
ver217 | 9ec1ce6ab1 | [zero] sharded model support the reuse of fp16 shard (#495) | 3 years ago
    * sharded model supports reuse fp16 shard
    * rename variable
    * polish code
    * polish code
    * polish code
HELSON | c9023d4078 | [MOE] support PR-MOE (#488) | 3 years ago
ver217 | 62b0a8d644 | [zero] sharded optim support hybrid cpu adam (#486) | 3 years ago
    * sharded optim support hybrid cpu adam
    * update unit test
    * polish docstring
HELSON | d7ea63992b | [MOE] add FP32LinearGate for MOE in NaiveAMP context (#480) | 3 years ago
Jiarui Fang | 65c0f380c2 | [format] polish name format for MOE (#481) | 3 years ago
HELSON | 7544347145 | [MOE] add unit test for MOE experts layout, gradient handler and kernel (#469) | 3 years ago
HELSON | aff9d354f7 | [MOE] polish moe_env (#467) | 3 years ago
HELSON | bccbc15861 | [MOE] changed parallelmode to dist process group (#460) | 3 years ago
Jiarui Fang | 0fcfb1e00d | [test] make zero engine test really work (#447) | 3 years ago
Jiarui Fang | 237d08e7ee | [zero] hybrid cpu adam (#445) | 3 years ago
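The CPU and hybrid Adam commits above (#445, #486, #527, #633) all implement the same underlying update in fused kernels, keeping fp32 master state even when parameters are fp16. As a reference point, here is the textbook Adam step in plain Python; it is only the math, not the Colossal-AI kernel, and the function signature is illustrative.

```python
import math

def adam_step(p, g, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update over flat lists of parameters; returns new p, m, v."""
    new_p, new_m, new_v = [], [], []
    bc1 = 1 - beta1 ** step            # bias correction for first moment
    bc2 = 1 - beta2 ** step            # bias correction for second moment
    for pi, gi, mi, vi in zip(p, g, m, v):
        mi = beta1 * mi + (1 - beta1) * gi        # first-moment EMA
        vi = beta2 * vi + (1 - beta2) * gi * gi   # second-moment EMA
        m_hat = mi / bc1
        v_hat = vi / bc2
        new_p.append(pi - lr * m_hat / (math.sqrt(v_hat) + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_p, new_m, new_v
```

The "hybrid" part of these commits is about where this loop runs: optimizer state for offloaded shards is updated by a vectorized CPU kernel while resident shards use the CUDA kernel, with the same arithmetic on both sides.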
HELSON | dbdc9a7783 | added Multiply Jitter and capacity factor eval for MOE (#434) | 3 years ago
HELSON | 3f70a2b12f | removed noisy function during evaluation of MoE router (#419) | 3 years ago
Jiang Zhuo | 5a4a3b77d9 | fix format (#376) | 3 years ago
LuGY | de46450461 | Added activation offload (#331) | 3 years ago
    * Added activation offload
    * Fixed the import bug, used the pytest