Commit Graph

434 Commits (9e3d602dba4a292ab6d99a2432e0050bdd712e25)

Author SHA1 Message Date
HELSON 340e59f968 [utils] add synchronized cuda memory monitor (#740)
3 years ago
ver217 e6212f56cd [hotfix] fix memory leak in backward of sharded model (#741)
3 years ago
Frank Lee a4e91bc87f [bug] fixed grad scaler compatibility with torch 1.8 (#735)
3 years ago
Jiarui Fang 53cb584808 [utils] correct cpu memory used and capacity in the context of multi-process (#726)
3 years ago
Jiarui Fang 7db3ccc79b [hotfix] remove duplicated param register to stateful tensor manager (#728)
3 years ago
Frank Lee 1cb7bdad3b [util] fixed communication API depth with PyTorch 1.9 (#721)
3 years ago
Frank Lee 2412429d54 [util] fixed activation checkpointing on torch 1.9 (#719)
3 years ago
Frank Lee 04ff5ea546 [utils] support detection of number of processes on current node (#723)
3 years ago
Jiarui Fang 4d90a7b513 [refactor] zero directory (#724)
3 years ago
Jiarui Fang 193dc8dacb [refactor] refactor the memory utils (#715)
3 years ago
HELSON dbd96fe90a [zero] check whether gradients have inf and nan in gpu (#712)
3 years ago
ver217 715b86eadd [hotfix] fix stm cuda model data size (#710)
3 years ago
LuGY 140263a394 [hotfix] fixed bugs of assigning grad states to non-leaf nodes (#711)
3 years ago
Frank Lee eda30a058e [compatibility] fixed tensor parallel compatibility with torch 1.9 (#700)
3 years ago
HELSON a9b8300d54 [zero] improve adaptability for not-shard parameters (#708)
3 years ago
ver217 ab8c6b4a0e [zero] refactor memstats collector (#706)
3 years ago
アマデウス 3fc8a204dc Corrected 3d vocab parallel embedding (#707)
3 years ago
HELSON ee112fe1da [zero] adapt zero hooks for unsharded module (#699)
3 years ago
ver217 3c9cd5bb5e [zero] stateful tensor manager (#687)
3 years ago
HELSON d7ecaf362b [zero] fix init bugs in zero context (#686)
3 years ago
YuliangLiu0306 0ed7042f42 [pipeline] refactor pipeline (#679)
3 years ago
Jiarui Fang 59bf2dc590 [zero] initialize a stateful tensor manager (#614)
3 years ago
encmps 79ccfa4310 [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_adam.cu code style (#667)
3 years ago
lucasliunju e4bcff9b0f [NFC] polish colossalai/builder/builder.py code style (#662)
3 years ago
shenggan 331683bf82 [NFC] polish colossalai/kernel/cuda_native/csrc/layer_norm_cuda_kernel.cu code style (#661)
3 years ago
FredHuang99 c336cd3066 [NFC] polish colossalai/communication/utils.py code style (#656)
3 years ago
MaxT 5ab9a71299 [NFC] polish colossalai/kernel/cuda_native/csrc/moe_cuda.cpp code style (#642)
3 years ago
Xue Fuzhao 10afec728f [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/include/cuda_util.h code style (#641)
3 years ago
Cautiousss 055d0270c8 [NFC] polish colossalai/context/process_group_initializer/initializer_sequence.py colossalai/context/process_group_initializer/initializer_tensor.py code style (#639)
3 years ago
Ziheng Qin c7c224ee17 [NFC] polish colossalai/builder/pipeline.py code style (#638)
3 years ago
Sze-qq 10591ecdf9 [NFC] polish colossalai/kernel/cuda_native/csrc/cpu_adam.cpp code style (#636)
3 years ago
Wangbo Zhao 6fcb381801 [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_l2norm_kernel.cu code style (#635)
3 years ago
ExtremeViscent 8a5d526e95 [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/dropout_kernels.cu and cross_entropy.cu code style (#634)
3 years ago
RichardoLuo ad1e7ab2b2 [NFC] polish colossalai/engine/_base_engine.py code style (#631)
3 years ago
Zangwei 2e11853d04 [NFC] polish colossalai/communication/ring.py code style (#630)
3 years ago
puck_WCR 01cc941e1d [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/transform_kernels.cu code style (#629)
3 years ago
superhao1995 c1bed0d998 [NFC] polish colossalai/kernel/cuda_native/csrc/multi_tensor_lamb.cu code style (#628)
3 years ago
Jiang Zhuo 0a96338b13 [NFC] polish colossalai/context/process_group_initializer/initializer_data.py code style (#626)
3 years ago
ziyu huang 701bad439b [NFC] polish colossalai/context/process_group_initializer/process_group_initializer.py code style (#617)
3 years ago
Shawn-Kong db54419409 fix format (#613)
3 years ago
Yuer867 5ecef13c16 fix format (#611)
3 years ago
xyupeng d3d5bedc65 fix format (#607)
3 years ago
xuqifan897 f2d2a1597a fix format (#608)
3 years ago
doubleHU f2da21a827 fix format (#586)
3 years ago
fanjinfucool ffad81e1d1 fix format (#585)
3 years ago
binmakeswell 6582aedc94 fix format (#583)
3 years ago
DouJS f08fc17f2b block_reduce.h fix format (#581)
3 years ago
Maruyama_Aya d2dc6049b5 fix format (#580)
3 years ago
wky 174b9c1d85 fix format (#574)
3 years ago
BoxiangW dfe423ae42 fix format (#572)
3 years ago
yuxuan-lou cfb41297ff fix/format (#573)
3 years ago
Kai Wang (Victor Kai) b0f708dfc1 fix format (#570)
3 years ago
Xu Kai 2a915a8b62 fix format (#568)
3 years ago
YuliangLiu0306 9420d3ae31 fix format (#567)
3 years ago
Jie Zhu 0f1da44e5e [format]colossalai/kernel/cuda_native/csrc/layer_norm_cuda.cpp (#566)
3 years ago
coder-chin 5835631218 fix format (#564)
3 years ago
Luxios22 e014144c44 fix format (#565)
3 years ago
Ziyue Jiang 1762ba14ab fix format (#563)
3 years ago
HELSON 17e73e62cc [hotfix] fix bugs for unsharded parameters when restore data (#664)
3 years ago
Jiarui Fang 0aab52301e [hotfix] fix a bug in model data stats tracing (#655)
3 years ago
YuliangLiu0306 ade05a5d83 [refactor] pipeline, put runtime schedule into engine. (#627)
3 years ago
HELSON e5d615aeee [hotfix] fix bugs in testing (#659)
3 years ago
Jiarui Fang 036404ca8a Revert "[zero] polish init context (#645)" (#657)
3 years ago
HELSON b31daed4cf fix bugs in CPU adam (#633)
3 years ago
LuGY 1e2557e801 [zero] fixed the activation offload (#647)
3 years ago
Liang Bowen 828e465622 [hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622)
3 years ago
Jiarui Fang 67b4928244 [zero] polish init context (#645)
3 years ago
ver217 f5d3a9c2b0 polish checkpoint docstring (#637)
3 years ago
HELSON 055fbf5be6 [zero] adapt zero for unsharded parameters (Optimizer part) (#601)
3 years ago
KAIYUAN GAN 229382c844 [NFC] polish colossalai/kernel/cuda_native/csrc/kernels/cuda_util.cu code style (#625)
3 years ago
アマデウス 28b515d610 [model checkpoint] updated checkpoint hook (#598)
3 years ago
アマデウス 77ad24bf94 [model checkpoint] updated saving/loading for 3d layers (#597)
3 years ago
アマデウス 93089ed708 [model checkpoint] updated saving/loading for 2.5d layers (#596)
3 years ago
アマデウス 6302069c0e [model checkpoint] updated communication ops for cpu tensors (#590)
3 years ago
アマデウス c50bfb807b [model checkpoint] updated saving/loading for 1d layers (#594)
3 years ago
アマデウス 7636d518e1 [model checkpoint] updated saving/loading for 2d layers (#595)
3 years ago
アマデウス cd13b63832 [model checkpoint] reworked unified layers for ease of save/load states (#593)
3 years ago
アマデウス acae68eb04 [model checkpoint] updated checkpoint save/load utils (#592)
3 years ago
Ziyue Jiang 1c40ee8749 [TP] add assert for tp1d (#621)
3 years ago
ver217 369a288bf3 polish utils docstring (#620)
3 years ago
ver217 e619a651fb polish optimizer docstring (#619)
3 years ago
ver217 8432dc7080 polish moe docstring (#618)
3 years ago
ver217 c5b488edf8 polish amp docstring (#616)
3 years ago
ver217 0ef8819c67 polish docstring of zero (#612)
3 years ago
LuGY 02b187c14f [zero] add sampling time for memstats collector (#610)
3 years ago
ver217 9bee119104 [hotfix] fix sharded optim zero grad (#604)
3 years ago
アマデウス 297b8baae2 [model checkpoint] add gloo groups for cpu tensor communication (#589)
3 years ago
アマデウス 54e688b623 moved ensure_path_exists to utils.common (#591)
3 years ago
Jiarui Fang e956d93ac2 [refactor] memory utils (#577)
3 years ago
ver217 104cbbb313 [hotfix] add hybrid adam to __init__ (#584)
3 years ago
HELSON e6d50ec107 [zero] adapt zero for unsharded parameters (#561)
3 years ago
Wesley 46c9ba33da update code format
3 years ago
Wesley 666cfd094a fix parallel_input flag for Linear1D_Col gather_output
3 years ago
ver217 7c6c427db1 [zero] trace states of fp16/32 grad and fp32 param (#571)
3 years ago
Jiarui Fang 7675366fce [polish] rename col_attr -> colo_attr (#558)
3 years ago
Liang Bowen 2c45efc398 html refactor (#555)
3 years ago
Jiarui Fang d1211148a7 [utils] update colo tensor moving APIs (#553)
3 years ago
LuGY c44d797072 [docs] updated docs of hybrid adam and cpu adam (#552)
3 years ago
ver217 014bac0c49 [zero] hijack p.grad in sharded model (#554)
3 years ago
Jiarui Fang f552b11294 [zero] label state for param fp16 and grad (#551)
3 years ago
Jiarui Fang 214da761d4 [zero] add stateful tensor (#549)
3 years ago
Jiarui Fang 107b99ddb1 [zero] dump memory stats for sharded model (#548)
3 years ago
Ziyue Jiang 763dc325f1 [TP] Add gather_out arg to Linear (#541)
3 years ago
HELSON 8c90d4df54 [zero] add zero context manager to change config during initialization (#546)
3 years ago
Liang Bowen ec5086c49c Refactored docstring to google style
3 years ago
Jiarui Fang 53b1b6e340 [zero] non model data tracing (#545)
3 years ago
Jie Zhu 73d36618a6 [profiler] add MemProfiler (#356)
3 years ago
ver217 fb841dd5c5 [zero] optimize grad offload (#539)
3 years ago
Jiarui Fang 7d81b5b46e [logging] polish logger format (#543)
3 years ago
ver217 1f90a3b129 [zero] polish ZeroInitContext (#540)
3 years ago
Jiarui Fang c11ff81b15 [zero] get memory usage of sharded optim v2. (#542)
3 years ago
HELSON a30e2b4c24 [zero] adapt for no-leaf module in zero (#535)
3 years ago
Jiarui Fang 705f56107c [zero] refactor model data tracing (#537)
3 years ago
Jiarui Fang a590ed0ba3 [zero] improve the accuracy of get_memory_usage of sharded param (#538)
3 years ago
Jiarui Fang 37cb70feec [zero] get memory usage for sharded param (#536)
3 years ago
Jiarui Fang 05e33b2578 [zero] fix grad offload (#528)
3 years ago
LuGY 105c5301c3 [zero] added hybrid adam, removed loss scale in adam (#527)
3 years ago
Jiarui Fang 8d8c5407c0 [zero] refactor model data tracing (#522)
3 years ago
Frank Lee 3601b2bad0 [test] fixed rerun_on_exception and adapted test cases (#487)
3 years ago
Jiarui Fang 4d322b79da [refactor] remove old zero code (#517)
3 years ago
LuGY 6a3f9fda83 [cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497)
3 years ago
Jiarui Fang 920c5889a7 [zero] add colo move inline (#521)
3 years ago
ver217 7be397ca9c [log] polish disable_existing_loggers (#519)
3 years ago
Jiarui Fang 0bebda6ea5 [zero] fix init device bug in zero init context unittest (#516)
3 years ago
Jiarui Fang 7ef3507ace [zero] show model data cuda memory usage after zero context init. (#515)
3 years ago
ver217 a2e61d61d4 [zero] zero init ctx enable rm_torch_payload_on_the_fly (#512)
3 years ago
Jiarui Fang 81145208d1 [install] run without rich (#513)
3 years ago
Jiarui Fang bca0c49a9d [zero] use colo model data api in optimv2 (#511)
3 years ago
Jiarui Fang 9330be0f3c [memory] set cuda mem frac (#506)
3 years ago
Jiarui Fang 0035b7be07 [memory] add model data tensor moving api (#503)
3 years ago
Jiarui Fang a445e118cf [polish] polish singleton and global context (#500)
3 years ago
ver217 9ec1ce6ab1 [zero] sharded model support the reuse of fp16 shard (#495)
3 years ago
HELSON f24b5ed201 [MOE] remove old MoE legacy (#493)
3 years ago
ver217 c4c02424f3 [zero] sharded model manages ophooks individually (#492)
3 years ago
HELSON c9023d4078 [MOE] support PR-MOE (#488)
3 years ago
ver217 a9ecb4b244 [zero] polish sharded optimizer v2 (#490)
3 years ago
ver217 62b0a8d644 [zero] sharded optim support hybrid cpu adam (#486)
3 years ago
Jiarui Fang b334822163 [zero] polish sharded param name (#484)
3 years ago
HELSON d7ea63992b [MOE] add FP32LinearGate for MOE in NaiveAMP context (#480)
3 years ago
Jiarui Fang 65c0f380c2 [format] polish name format for MOE (#481)
3 years ago
ver217 8d3250d74b [zero] ZeRO supports pipeline parallel (#477)
3 years ago
Frank Lee 83a847d058 [test] added rerun on exception for testing (#475)
3 years ago
HELSON 7544347145 [MOE] add unit test for MOE experts layout, gradient handler and kernel (#469)
3 years ago
ver217 3cb3fc275e zero init ctx receives a dp process group (#471)
3 years ago
HELSON aff9d354f7 [MOE] polish moe_env (#467)
3 years ago
HELSON bccbc15861 [MOE] changed parallelmode to dist process group (#460)
3 years ago
ver217 fc8e6db005 [doc] Update docstring for ZeRO (#459)
3 years ago
HELSON 84fd7c1d4d add moe context, moe utilities and refactor gradient handler (#455)
3 years ago
ver217 a241f61b34 [zero] Update initialize for ZeRO (#458)
3 years ago
ver217 642846d6f9 update sharded optim and fix zero init ctx (#457)
3 years ago
Jiarui Fang e2e9f82588 Revert "[zero] update sharded optim and fix zero init ctx" (#456)
3 years ago
ver217 e99af94ab8 rename variables
3 years ago
ver217 57567ee768 update sharded optim and fix zero init ctx
3 years ago
Jiarui Fang 0fcfb1e00d [test] make zero engine test really work (#447)
3 years ago
Jiarui Fang 237d08e7ee [zero] hybrid cpu adam (#445)
3 years ago
Frank Lee b72b8445c6 optimized context test time consumption (#446)
3 years ago
Jiarui Fang 496cbb0760 [hotfix] fix initialize bug with zero (#442)
3 years ago
Jiarui Fang 640a6cd304 [refactor] refactor the initialize method for new zero design (#431)
3 years ago
Frank Lee bffd85bf34 added testing module (#435)
3 years ago
HELSON dbdc9a7783 added Multiply Jitter and capacity factor eval for MOE (#434)
3 years ago
Frank Lee b03b3ae99c fixed mem monitor device (#433)
3 years ago
Frank Lee 14a7094243 fixed fp16 optimizer none grad bug (#432)
3 years ago
ver217 fce9432f08 sync before creating empty grad
3 years ago
ver217 ea6905a898 free param.grad
3 years ago
ver217 9506a8beb2 use double buffer to handle grad
3 years ago
Jiarui Fang 54229cd33e [log] better logging display with rich (#426)
3 years ago
HELSON 3f70a2b12f removed noisy function during evaluation of MoE router (#419)
3 years ago
Jiarui Fang adebb3e041 [zero] cuda margin space for OS (#418)
3 years ago
Jiarui Fang 56bb412e72 [polish] use GLOBAL_MODEL_DATA_TRACER (#417)
3 years ago
Jiarui Fang 23ba3fc450 [zero] refactor ShardedOptimV2 init method (#416)
3 years ago
Frank Lee e79ea44247 [fp16] refactored fp16 optimizer (#392)
3 years ago
Jiarui Fang 21dc54e019 [zero] memtracer to record cuda memory usage of model data and overall system (#395)
3 years ago
Jiarui Fang 370f567e7d [zero] new interface for ShardedOptimv2 (#406)
3 years ago
LuGY a9c27be42e Added tensor detector (#393)
3 years ago
1SAA 907ac4a2dc fixed error when no collective communication in CommProfiler
3 years ago
Frank Lee 2fe68b359a Merge pull request #403 from ver217/feature/shard-strategy
3 years ago
HELSON dfd0363f68 polished output format for communication profiler and pcie profiler (#404)
3 years ago
ver217 63469c0f91 polish code
3 years ago
ver217 88804aee49 add bucket tensor shard strategy
3 years ago
HELSON 7c079d9c33 [hotfix] fixed bugs in ShardStrategy and PcieProfiler (#394)
3 years ago
Frank Lee 1e4bf85cdb fixed bug in activation checkpointing test (#387)
3 years ago
Jiarui Fang 3af13a2c3e [zero] polish ShardedOptimV2 unittest (#385)
3 years ago
Jiang Zhuo 5a4a3b77d9 fix format (#376)
3 years ago
LuGY de46450461 Added activation offload (#331)
3 years ago
Jiarui Fang 272ebfb57d [bug] shard param during initializing the ShardedModelV2 (#381)
3 years ago
HELSON 8c18eb0998 [profiler] Fixed bugs in CommProfiler and PcieProfiler (#377)
3 years ago
Jiarui Fang b5f43acee3 [zero] find miss code (#378)
3 years ago
Jiarui Fang 6b6002962a [zero] zero init context collect numel of model (#375)
3 years ago
HELSON 1ed7c24c02 Added PCIE profiler to detect data transmission (#373)
3 years ago
jiaruifang d9217e1960 Revert "[zero] bucketized tensor cpu gpu copy (#368)"
3 years ago
RichardoLuo 8539898ec6 flake8 style change (#363)
3 years ago
Kai Wang (Victor Kai) 53bb3bcc0a fix format (#362)
3 years ago
ziyu huang a77d73f22b fix format parallel_context.py (#359)
3 years ago
Zangwei c695369af0 fix format constants.py (#358)
3 years ago
Yuer867 4a0f8c2c50 fix format parallel_2p5d (#357)
3 years ago
Liang Bowen 7eb87f516d flake8 style (#352)
3 years ago
Xu Kai 54ee8d1254 Fix/format colossalai/engine/paramhooks/ (#350)
3 years ago
Maruyama_Aya e83970e3dc fix format ColossalAI/colossalai/context/process_group_initializer
3 years ago
yuxuan-lou 3b88eb2259 Flake8 code restyle
3 years ago
xuqifan897 148207048e Qifan formatted file ColossalAI/colossalai/nn/layer/parallel_1d/layers.py (#342)
3 years ago