Commit Graph

339 Commits (cf6d1c9284dbc397a4847ccea7f78b8a0890ed60)

Author SHA1 Message Date
Jiarui Fang 3d7dc46d33
[zero] use factory pattern for tensor_placement_policy (#752) 2022-04-14 11:07:29 +08:00
ver217 4b048a8728
fix prepare grads in sharded optim (#749) 2022-04-13 22:36:11 +08:00
ver217 097772546e fix initialization for zero 2022-04-13 19:10:21 +08:00
ver217 e396bb71f2
[zero] add tensor placement policies (#743)
* add tensor placement policies

* polish comments

* polish comments

* update moe unit tests
2022-04-13 15:00:48 +08:00
HELSON 22c4b88d56
[zero] refactor ShardedParamV2 for convenience (#742) 2022-04-13 14:54:26 +08:00
HELSON 340e59f968
[utils] add synchronized cuda memory monitor (#740) 2022-04-13 10:50:54 +08:00
ver217 e6212f56cd
[hotfix] fix memory leak in backward of sharded model (#741) 2022-04-13 09:59:05 +08:00
Frank Lee a4e91bc87f
[bug] fixed grad scaler compatibility with torch 1.8 (#735) 2022-04-12 16:04:21 +08:00
Jiarui Fang 53cb584808
[utils] correct used cpu memory and capacity in a multi-process context (#726) 2022-04-12 14:57:54 +08:00
Jiarui Fang 7db3ccc79b
[hotfix] remove duplicated param register to stateful tensor manager (#728) 2022-04-12 13:55:25 +08:00
Frank Lee 1cb7bdad3b
[util] fixed communication API depth with PyTorch 1.9 (#721) 2022-04-12 09:44:40 +08:00
Frank Lee 2412429d54
[util] fixed activation checkpointing on torch 1.9 (#719) 2022-04-12 09:35:45 +08:00
Frank Lee 04ff5ea546
[utils] support detection of number of processes on current node (#723) 2022-04-12 09:28:19 +08:00
Jiarui Fang 4d90a7b513
[refactor] zero directory (#724) 2022-04-11 23:13:02 +08:00
Jiarui Fang 193dc8dacb
[refactor] refactor the memory utils (#715) 2022-04-11 16:47:57 +08:00
HELSON dbd96fe90a
[zero] check whether gradients have inf and nan in gpu (#712) 2022-04-11 15:40:13 +08:00
ver217 715b86eadd
[hotfix] fix stm cuda model data size (#710) 2022-04-11 15:10:39 +08:00
LuGY 140263a394
[hotfix] fixed bugs of assigning grad states to non-leaf nodes (#711)
* fixed bugs of assigning grad states to non-leaf nodes

* use detach()
2022-04-11 14:04:58 +08:00
Frank Lee eda30a058e
[compatibility] fixed tensor parallel compatibility with torch 1.9 (#700) 2022-04-11 13:44:50 +08:00
HELSON a9b8300d54
[zero] improve adaptability for non-sharded parameters (#708)
* adapt post-grad hooks for non-sharded parameters

* adapt optimizer for non-sharded parameters

* offload gradients for non-replicated parameters
2022-04-11 13:38:51 +08:00
ver217 ab8c6b4a0e
[zero] refactor memstats collector (#706)
* refactor memstats collector

* fix disposable

* polish code
2022-04-11 10:46:08 +08:00
アマデウス 3fc8a204dc
Corrected 3d vocab parallel embedding (#707) 2022-04-11 10:17:55 +08:00
HELSON ee112fe1da
[zero] adapt zero hooks for unsharded module (#699) 2022-04-08 20:23:26 +08:00
ver217 3c9cd5bb5e
[zero] stateful tensor manager (#687)
* [WIP] stateful tensor manager

* add eviction strategy

* polish code

* polish code

* polish comment

* add unit test

* fix sampler bug

* polish code

* fix max sampling cnt resetting bug

* fix sampler bug

* polish code

* fix bug

* fix unit test

Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2022-04-08 17:51:34 +08:00
HELSON d7ecaf362b
[zero] fix init bugs in zero context (#686)
* adapt model weight initialization for methods in PyTorch nn.init
2022-04-07 17:38:45 +08:00
YuliangLiu0306 0ed7042f42
[pipeline] refactor pipeline (#679)
* refactor pipeline: put runtime schedule into engine.

* add type hint for schedule Optional[BaseSchedule]

* preprocess schedule during engine initializing

* infer pipeline schedule params from config
2022-04-07 15:54:14 +08:00
Jiarui Fang 59bf2dc590
[zero] initialize a stateful tensor manager (#614) 2022-04-06 16:18:49 +08:00
encmps 79ccfa4310 [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_adam.cu code style (#667) 2022-04-06 11:40:59 +08:00
lucasliunju e4bcff9b0f [NFC] polish colossalai/builder/builder.py code style (#662) 2022-04-06 11:40:59 +08:00
shenggan 331683bf82 [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda_kernel.cu code style (#661) 2022-04-06 11:40:59 +08:00
FredHuang99 c336cd3066 [NFC] polish colossalai/communication/utils.py code style (#656) 2022-04-06 11:40:59 +08:00
MaxT 5ab9a71299 [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#642) 2022-04-06 11:40:59 +08:00
Xue Fuzhao 10afec728f [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style (#641) 2022-04-06 11:40:59 +08:00
Cautiousss 055d0270c8 [NFC] polish colossalai/context/process_group_initializer/initializer_sequence.py and colossalai/context/process_group_initializer/initializer_tensor.py code style (#639)
Co-authored-by: 何晓昕 <cautious@r-236-100-25-172.comp.nus.edu.sg>
2022-04-06 11:40:59 +08:00
Ziheng Qin c7c224ee17 [NFC] polish colossalai/builder/pipeline.py code style (#638) 2022-04-06 11:40:59 +08:00
Sze-qq 10591ecdf9 [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#636) 2022-04-06 11:40:59 +08:00
Wangbo Zhao 6fcb381801 [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#635) 2022-04-06 11:40:59 +08:00
ExtremeViscent 8a5d526e95 [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu and cross_entropy.cu code style (#634) 2022-04-06 11:40:59 +08:00
RichardoLuo ad1e7ab2b2 [NFC] polish <colossalai/engine/_base_engine.py> code style (#631)
Co-authored-by: RichardoLuo <14049555596@qq.com>
2022-04-06 11:40:59 +08:00
Zangwei 2e11853d04 [NFC] polish colossalai/communication/ring.py code style (#630) 2022-04-06 11:40:59 +08:00
puck_WCR 01cc941e1d [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu code style (#629) 2022-04-06 11:40:59 +08:00
superhao1995 c1bed0d998 [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style (#628) 2022-04-06 11:40:59 +08:00
Jiang Zhuo 0a96338b13 [NFC] polish <colossalai/context/process_group_initializer/initializer_data.py> code style (#626)
Co-authored-by: 姜卓 <jiangzhuo@jiangzhuodeMacBook-Pro.local>
2022-04-06 11:40:59 +08:00
ziyu huang 701bad439b [NFC] polish colossalai/context/process_group_initializer/process_group_initializer.py code style (#617)
Co-authored-by: Arsmart123 <202476410arsmart@gmail.com>
2022-04-06 11:40:59 +08:00
Shawn-Kong db54419409 fix format (#613)
Co-authored-by: evin K <evink@evins-MacBook-Air.local>
2022-04-06 11:40:59 +08:00
Yuer867 5ecef13c16 fix format (#611) 2022-04-06 11:40:59 +08:00
xyupeng d3d5bedc65 fix format (#607) 2022-04-06 11:40:59 +08:00
xuqifan897 f2d2a1597a fix format (#608) 2022-04-06 11:40:59 +08:00
doubleHU f2da21a827 fix format (#586) 2022-04-06 11:40:59 +08:00
fanjinfucool ffad81e1d1 fix format (#585)
Co-authored-by: fanjifu <FAN>
2022-04-06 11:40:59 +08:00
binmakeswell 6582aedc94 fix format (#583) 2022-04-06 11:40:59 +08:00
DouJS f08fc17f2b block_reduce.h fix format (#581) 2022-04-06 11:40:59 +08:00
Maruyama_Aya d2dc6049b5 fix format (#580) 2022-04-06 11:40:59 +08:00
wky 174b9c1d85 fix format (#574) 2022-04-06 11:40:59 +08:00
BoxiangW dfe423ae42 fix format (#572) 2022-04-06 11:40:59 +08:00
yuxuan-lou cfb41297ff fix format (#573) 2022-04-06 11:40:59 +08:00
Kai Wang (Victor Kai) b0f708dfc1 fix format (#570) 2022-04-06 11:40:59 +08:00
Xu Kai 2a915a8b62 fix format (#568) 2022-04-06 11:40:59 +08:00
YuliangLiu0306 9420d3ae31 fix format (#567) 2022-04-06 11:40:59 +08:00
Jie Zhu 0f1da44e5e [format] colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp (#566) 2022-04-06 11:40:59 +08:00
coder-chin 5835631218 fix format (#564) 2022-04-06 11:40:59 +08:00
Luxios22 e014144c44 fix format (#565) 2022-04-06 11:40:59 +08:00
Ziyue Jiang 1762ba14ab fix format (#563) 2022-04-06 11:40:59 +08:00
HELSON 17e73e62cc
[hotfix] fix bugs for unsharded parameters when restoring data (#664) 2022-04-03 22:02:11 +08:00
Jiarui Fang 0aab52301e
[hotfix] fix a bug in model data stats tracing (#655) 2022-04-03 21:48:06 +08:00
YuliangLiu0306 ade05a5d83
[refactor] pipeline, put runtime schedule into engine. (#627) 2022-04-03 20:46:45 +08:00
HELSON e5d615aeee
[hotfix] fix bugs in testing (#659)
* remove hybrid adam in test_moe_zero_optim

* fix activation checkpointing and its unit test
2022-04-02 21:58:47 +08:00
Jiarui Fang 036404ca8a
Revert "[zero] polish init context (#645)" (#657) 2022-04-02 18:30:06 +08:00
HELSON b31daed4cf
fix bugs in CPU Adam (#633)
* add a CPU Adam counter for all CPU Adam instances

* fixed the update error in the Adam kernel
2022-04-02 17:04:05 +08:00
LuGY 1e2557e801
[zero] fixed the activation offload (#647)
* fixed the activation offload

* polish
2022-04-02 16:21:32 +08:00
Liang Bowen 828e465622
[hotfix] Raise error messages for indivisible batch sizes with tensor parallelism (#622) 2022-04-02 16:12:04 +08:00
Jiarui Fang 67b4928244
[zero] polish init context (#645) 2022-04-02 15:52:04 +08:00
ver217 f5d3a9c2b0
polish checkpoint docstring (#637) 2022-04-02 13:34:33 +08:00
HELSON 055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) (#601) 2022-04-01 20:10:47 +08:00
KAIYUAN GAN 229382c844
[NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cuda_util.cu code style (#625) 2022-04-01 17:45:53 +08:00
アマデウス 28b515d610
[model checkpoint] updated checkpoint hook (#598) 2022-04-01 16:53:03 +08:00
アマデウス 77ad24bf94
[model checkpoint] updated saving/loading for 3d layers (#597) 2022-04-01 16:52:47 +08:00
アマデウス 93089ed708
[model checkpoint] updated saving/loading for 2.5d layers (#596) 2022-04-01 16:52:33 +08:00
アマデウス 6302069c0e
[model checkpoint] updated communication ops for cpu tensors (#590) 2022-04-01 16:52:20 +08:00
アマデウス c50bfb807b
[model checkpoint] updated saving/loading for 1d layers (#594) 2022-04-01 16:51:52 +08:00
アマデウス 7636d518e1
[model checkpoint] updated saving/loading for 2d layers (#595) 2022-04-01 16:50:34 +08:00
アマデウス cd13b63832
[model checkpoint] reworked unified layers for ease of save/load states (#593) 2022-04-01 16:49:56 +08:00
アマデウス acae68eb04
[model checkpoint] updated checkpoint save/load utils (#592) 2022-04-01 16:49:21 +08:00
Ziyue Jiang 1c40ee8749
[TP] add assert for tp1d (#621) 2022-04-01 16:44:23 +08:00
ver217 369a288bf3
polish utils docstring (#620) 2022-04-01 16:36:47 +08:00
ver217 e619a651fb
polish optimizer docstring (#619) 2022-04-01 16:27:03 +08:00
ver217 8432dc7080
polish moe docstring (#618) 2022-04-01 16:15:36 +08:00
ver217 c5b488edf8
polish amp docstring (#616) 2022-04-01 16:09:39 +08:00
ver217 0ef8819c67
polish docstring of zero (#612) 2022-04-01 14:50:56 +08:00
LuGY 02b187c14f
[zero] add sampling time for memstats collector (#610) 2022-04-01 14:03:00 +08:00
ver217 9bee119104
[hotfix] fix sharded optim zero grad (#604)
* fix sharded optim zero grad

* polish comments
2022-04-01 12:41:20 +08:00
アマデウス 297b8baae2
[model checkpoint] add gloo groups for cpu tensor communication (#589) 2022-04-01 10:15:52 +08:00
アマデウス 54e688b623
moved ensure_path_exists to utils.common (#591) 2022-04-01 09:46:33 +08:00
Jiarui Fang e956d93ac2
[refactor] memory utils (#577) 2022-04-01 09:22:33 +08:00
ver217 104cbbb313
[hotfix] add hybrid adam to __init__ (#584) 2022-03-31 19:08:34 +08:00
HELSON e6d50ec107
[zero] adapt zero for unsharded parameters (#561)
* support existing sharded and unsharded parameters in zero

* add unit test for moe-zero model init

* polish moe gradient handler
2022-03-31 18:34:11 +08:00
Wesley 46c9ba33da update code format 2022-03-31 17:15:08 +08:00
Wesley 666cfd094a fix parallel_input flag for Linear1D_Col gather_output 2022-03-31 17:15:08 +08:00
ver217 7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param (#571) 2022-03-31 16:26:54 +08:00
Jiarui Fang 7675366fce
[polish] rename col_attr -> colo_attr (#558) 2022-03-31 12:25:45 +08:00