Frank Lee
2412429d54
[util] fixed activation checkpointing on torch 1.9 ( #719 )
3 years ago
Frank Lee
04ff5ea546
[utils] support detection of number of processes on current node ( #723 )
3 years ago
Jiarui Fang
4d90a7b513
[refactor] zero directory ( #724 )
3 years ago
Jiarui Fang
193dc8dacb
[refactor] refactor the memory utils ( #715 )
3 years ago
HELSON
dbd96fe90a
[zero] check whether gradients have inf and nan in gpu ( #712 )
3 years ago
ver217
715b86eadd
[hotfix] fix stm cuda model data size ( #710 )
3 years ago
LuGY
140263a394
[hotfix]fixed bugs of assigning grad states to non leaf nodes ( #711 )
...
* fixed bugs of assigning grad states to non leaf nodes
* use detach()
3 years ago
Frank Lee
eda30a058e
[compatibility] fixed tensor parallel compatibility with torch 1.9 ( #700 )
3 years ago
HELSON
a9b8300d54
[zero] improve adaptability for non-sharded parameters ( #708 )
...
* adapt post grad hooks for non-sharded parameters
* adapt optimizer for non-sharded parameters
* offload gradients for non-replicated parameters
3 years ago
ver217
ab8c6b4a0e
[zero] refactor memstats collector ( #706 )
...
* refactor memstats collector
* fix disposable
* polish code
3 years ago
アマデウス
3fc8a204dc
Corrected 3d vocab parallel embedding ( #707 )
3 years ago
HELSON
ee112fe1da
[zero] adapt zero hooks for unsharded module ( #699 )
3 years ago
ver217
3c9cd5bb5e
[zero] stateful tensor manager ( #687 )
...
* [WIP] stateful tensor manager
* add eviction strategy
* polish code
* polish code
* polish comment
* add unit test
* fix sampler bug
* polish code
* fix max sampling cnt resetting bug
* fix sampler bug
* polish code
* fix bug
* fix unit test
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
3 years ago
HELSON
d7ecaf362b
[zero] fix init bugs in zero context ( #686 )
...
* adapt model weight initialization for methods in Pytorch nn.init
3 years ago
YuliangLiu0306
0ed7042f42
[pipeline] refactor pipeline ( #679 )
...
* refactor pipeline---put runtime schedule into engine.
* add type hint for schedule Optional[BaseSchedule]
* preprocess schedule during engine initializing
* infer pipeline schedule params from config
3 years ago
Jiarui Fang
59bf2dc590
[zero] initialize a stateful tensor manager ( #614 )
3 years ago
encmps
79ccfa4310
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_adam.cu code style ( #667 )
3 years ago
lucasliunju
e4bcff9b0f
[NFC] polish colossalai/builder/builder.py code style ( #662 )
3 years ago
shenggan
331683bf82
[NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda_kernel.cu code style ( #661 )
3 years ago
FredHuang99
c336cd3066
[NFC] polish colossalai/communication/utils.py code style ( #656 )
3 years ago
MaxT
5ab9a71299
[NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style ( #642 )
3 years ago
Xue Fuzhao
10afec728f
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style ( #641 )
3 years ago
Cautiousss
055d0270c8
[NFC] polish colossalai/context/process_group_initializer/initializer_sequence.py colossalai/context/process_group_initializer initializer_tensor.py code style ( #639 )
...
Co-authored-by: 何晓昕 <cautious@r-236-100-25-172.comp.nus.edu.sg>
3 years ago
Ziheng Qin
c7c224ee17
[NFC] polish colossalai/builder/pipeline.py code style ( #638 )
3 years ago
Sze-qq
10591ecdf9
[NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style ( #636 )
3 years ago
Wangbo Zhao
6fcb381801
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style ( #635 )
3 years ago
ExtremeViscent
8a5d526e95
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu and cross_entropy.cu code style ( #634 )
3 years ago
RichardoLuo
ad1e7ab2b2
[NFC] polish <colossalai/engine/_base_engine.py> code style ( #631 )
...
Co-authored-by: RichardoLuo <14049555596@qq.com>
3 years ago
Zangwei
2e11853d04
[NFC] polish colossalai/communication/ring.py code style ( #630 )
3 years ago
puck_WCR
01cc941e1d
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu code style ( #629 )
3 years ago
superhao1995
c1bed0d998
[NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style ( #628 )
3 years ago
Jiang Zhuo
0a96338b13
[NFC] polish <colossalai/context/process_group_initializer/initializer_data.py> code style ( #626 )
...
Co-authored-by: 姜卓 <jiangzhuo@jiangzhuodeMacBook-Pro.local>
3 years ago
ziyu huang
701bad439b
[NFC] polish colossalai/context/process_group_initializer/process_group_initializer.py code style ( #617 )
...
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
3 years ago
Shawn-Kong
db54419409
fix format ( #613 )
...
Co-authored-by: evin K <evink@evins-MacBook-Air.local>
3 years ago
Yuer867
5ecef13c16
fix format ( #611 )
3 years ago
xyupeng
d3d5bedc65
fix format ( #607 )
3 years ago
xuqifan897
f2d2a1597a
fix format ( #608 )
3 years ago
doubleHU
f2da21a827
fix format ( #586 )
3 years ago
fanjinfucool
ffad81e1d1
fix format ( #585 )
...
Co-authored-by: fanjifu <FAN>
3 years ago
binmakeswell
6582aedc94
fix format ( #583 )
3 years ago
DouJS
f08fc17f2b
block_reduce.h fix format ( #581 )
3 years ago
Maruyama_Aya
d2dc6049b5
fix format ( #580 )
3 years ago
wky
174b9c1d85
fix format ( #574 )
3 years ago
BoxiangW
dfe423ae42
fix format ( #572 )
3 years ago
yuxuan-lou
cfb41297ff
fix format ( #573 )
3 years ago
Kai Wang (Victor Kai)
b0f708dfc1
fix format ( #570 )
3 years ago
Xu Kai
2a915a8b62
fix format ( #568 )
3 years ago
YuliangLiu0306
9420d3ae31
fix format ( #567 )
3 years ago
Jie Zhu
0f1da44e5e
[format]colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp ( #566 )
3 years ago
coder-chin
5835631218
fix format ( #564 )
3 years ago
Luxios22
e014144c44
fix format ( #565 )
3 years ago
Ziyue Jiang
1762ba14ab
fix format ( #563 )
3 years ago
HELSON
17e73e62cc
[hotfix] fix bugs for unsharded parameters when restoring data ( #664 )
3 years ago
Jiarui Fang
0aab52301e
[hotfix] fix a bug in model data stats tracing ( #655 )
3 years ago
YuliangLiu0306
ade05a5d83
[refactor] pipeline, put runtime schedule into engine. ( #627 )
3 years ago
HELSON
e5d615aeee
[hotfix] fix bugs in testing ( #659 )
...
* remove hybrid adam in test_moe_zero_optim
* fix activation checkpointing and its unit test
3 years ago
Jiarui Fang
036404ca8a
Revert "[zero] polish init context ( #645 )" ( #657 )
3 years ago
HELSON
b31daed4cf
fix bugs in CPU adam ( #633 )
...
* add cpu adam counter for all cpu adam
* fixed updating error in adam kernel
3 years ago
LuGY
1e2557e801
[zero] fixed the activation offload ( #647 )
...
* fixed the activation offload
* polish
3 years ago
Liang Bowen
828e465622
[hotfix] Raise messages for indivisible batch sizes with tensor parallelism ( #622 )
3 years ago
Jiarui Fang
67b4928244
[zero] polish init context ( #645 )
3 years ago
ver217
f5d3a9c2b0
polish checkpoint docstring ( #637 )
3 years ago
HELSON
055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) ( #601 )
3 years ago
KAIYUAN GAN
229382c844
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cuda_util.cu code style ( #625 )
3 years ago
アマデウス
28b515d610
[model checkpoint] updated checkpoint hook ( #598 )
3 years ago
アマデウス
77ad24bf94
[model checkpoint] updated saving/loading for 3d layers ( #597 )
3 years ago
アマデウス
93089ed708
[model checkpoint] updated saving/loading for 2.5d layers ( #596 )
3 years ago
アマデウス
6302069c0e
[model checkpoint] updated communication ops for cpu tensors ( #590 )
3 years ago
アマデウス
c50bfb807b
[model checkpoint] updated saving/loading for 1d layers ( #594 )
3 years ago
アマデウス
7636d518e1
[model checkpoint] updated saving/loading for 2d layers ( #595 )
3 years ago
アマデウス
cd13b63832
[model checkpoint] reworked unified layers for ease of save/load states ( #593 )
3 years ago
アマデウス
acae68eb04
[model checkpoint] updated checkpoint save/load utils ( #592 )
3 years ago
Ziyue Jiang
1c40ee8749
[TP] add assert for tp1d ( #621 )
3 years ago
ver217
369a288bf3
polish utils docstring ( #620 )
3 years ago
ver217
e619a651fb
polish optimizer docstring ( #619 )
3 years ago
ver217
8432dc7080
polish moe docstring ( #618 )
3 years ago
ver217
c5b488edf8
polish amp docstring ( #616 )
3 years ago
ver217
0ef8819c67
polish docstring of zero ( #612 )
3 years ago
LuGY
02b187c14f
[zero] add sampling time for memstats collector ( #610 )
3 years ago
ver217
9bee119104
[hotfix] fix sharded optim zero grad ( #604 )
...
* fix sharded optim zero grad
* polish comments
3 years ago
アマデウス
297b8baae2
[model checkpoint] add gloo groups for cpu tensor communication ( #589 )
3 years ago
アマデウス
54e688b623
moved ensure_path_exists to utils.common ( #591 )
3 years ago
Jiarui Fang
e956d93ac2
[refactor] memory utils ( #577 )
3 years ago
ver217
104cbbb313
[hotfix] add hybrid adam to __init__ ( #584 )
3 years ago
HELSON
e6d50ec107
[zero] adapt zero for unsharded parameters ( #561 )
...
* support existing sharded and unsharded parameters in zero
* add unit test for moe-zero model init
* polish moe gradient handler
3 years ago
Wesley
46c9ba33da
update code format
3 years ago
Wesley
666cfd094a
fix parallel_input flag for Linear1D_Col gather_output
3 years ago
ver217
7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param ( #571 )
3 years ago
Jiarui Fang
7675366fce
[polish] rename col_attr -> colo_attr ( #558 )
3 years ago
Liang Bowen
2c45efc398
html refactor ( #555 )
3 years ago
Jiarui Fang
d1211148a7
[utils] update colo tensor moving APIs ( #553 )
3 years ago
LuGY
c44d797072
[docs] updated docs of hybrid adam and cpu adam ( #552 )
3 years ago
ver217
014bac0c49
[zero] hijack p.grad in sharded model ( #554 )
...
* hijack p.grad in sharded model
* polish comments
* polish comments
3 years ago
Jiarui Fang
f552b11294
[zero] label state for param fp16 and grad ( #551 )
3 years ago
Jiarui Fang
214da761d4
[zero] add stateful tensor ( #549 )
3 years ago
Jiarui Fang
107b99ddb1
[zero] dump memory stats for sharded model ( #548 )
3 years ago
Ziyue Jiang
763dc325f1
[TP] Add gather_out arg to Linear ( #541 )
3 years ago
HELSON
8c90d4df54
[zero] add zero context manager to change config during initialization ( #546 )
3 years ago
Liang Bowen
ec5086c49c
Refactored docstrings to Google style
3 years ago
Jiarui Fang
53b1b6e340
[zero] non model data tracing ( #545 )
3 years ago