Commit Graph

413 Commits (c7d4932956175bb42461d63b00e31ecca113a4d7)

Each entry below gives the author, the abbreviated commit SHA-1, the commit message, and the relative commit date.
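
For reference, a listing in this layout can be regenerated from a local clone. The following is a minimal sketch (it assumes `git` is on PATH and the working directory is a checkout of this repository); the `--pretty` format string mirrors the author / abbreviated SHA-1 / message / relative-date layout of each entry:

```python
# Print the commit log in the same author / SHA-1 / message / relative-date
# layout as the listing below, for every commit reachable from the head
# commit named in the heading above.
import subprocess

HEAD = "c7d4932956175bb42461d63b00e31ecca113a4d7"  # head commit from the heading

log = subprocess.run(
    ["git", "log", "--pretty=format:%an %h%n%s%n%ar", HEAD],
    capture_output=True, text=True, check=True,
).stdout
print(log)
```
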
Frank Lee abf6a262dc
[fx] added module patch for pooling layers (#1197)
2 years ago
YuliangLiu0306 63d2a93878
[context]support arbitary module materialization. (#1193)
2 years ago
YuliangLiu0306 2053e138a2
[context]use meta tensor to init model lazily. (#1187)
2 years ago
Frank Lee 2c8c05675d
[fx] patched conv and normalization (#1188)
2 years ago
Frank Lee 6d86f1bc91
[fx] supported data-dependent control flow in model tracing (#1185)
2 years ago
Jiarui Fang c463f8adf9
[tensor] remove gpc in tensor tests (#1186)
2 years ago
Jiarui Fang 372f791444
[refactor] move chunk and chunkmgr to directory gemini (#1182)
2 years ago
ver217 6b2f2ab9bb
[ddp] ColoDDP uses bucket all-reduce (#1177)
2 years ago
Jiarui Fang 7487215b95
[ColoTensor] add independent process group (#1179)
2 years ago
Jiarui Fang 1b657f9ce1
[tensor] revert local view back (#1178)
2 years ago
Jiarui Fang 0dd4e2bbfb
[Tensor] rename some APIs in TensorSpec and Polish view unittest (#1176)
2 years ago
Jiarui Fang aa7bef73d4
[Tensor] distributed view supports inter-process hybrid parallel (#1169)
2 years ago
ver217 9e1daa63d2
[zero] sharded optim supports loading local state dict (#1170)
2 years ago
ver217 561e90493f
[zero] zero optim supports loading local state dict (#1171)
2 years ago
Jiarui Fang 4b9bba8116
[ColoTensor] rename APIs and add output_replicate to ComputeSpec (#1168)
2 years ago
Jiarui Fang f4ef224358
[Tensor] remove ParallelAction, use ComputeSpec instread (#1166)
2 years ago
Jiarui Fang 177c374401
remove gather out in parallel action (#1163)
2 years ago
Jiarui Fang 07f9c781f9
[graph] improve the graph building. (#1157)
2 years ago
ver217 22717a856f
[tensor] add embedding bag op (#1156)
2 years ago
ver217 ae86151968
[tensor] add more element-wise ops (#1155)
2 years ago
ver217 ffa025e120
[tensor] dist spec s2s uses all-to-all (#1136)
2 years ago
Jiarui Fang ff644ee5e4
polish unitest test with titans (#1152)
2 years ago
Jiarui Fang 8cdce0399c
[ColoTensor] improves init functions. (#1150)
2 years ago
ver217 8106d7b8c7
[ddp] refactor ColoDDP and ZeroDDP (#1146)
2 years ago
ver217 d26902645e
[ddp] add save/load state dict for ColoDDP (#1127)
2 years ago
ver217 789cad301b
[hotfix] fix param op hook (#1131)
2 years ago
ver217 f0a954f16d
[ddp] add set_params_to_ignore for ColoDDP (#1122)
2 years ago
YuliangLiu0306 fcf55777dd
[fx]add autoparallel passes (#1121)
2 years ago
Frank Lee 16302a5359
[fx] added unit test for coloproxy (#1119)
2 years ago
ver217 7d14b473f0
[gemini] gemini mgr supports "cpu" placement policy (#1118)
2 years ago
Frank Lee 53297330c0
[test] fixed hybrid parallel test case on 8 GPUs (#1106)
2 years ago
ver217 1f894e033f
[gemini] zero supports gemini (#1093)
3 years ago
Frank Lee 2b2dc1c86b
[pipeline] refactor the pipeline module (#1087)
3 years ago
Frank Lee bad5d4c0a1
[context] support lazy init of module (#1088)
3 years ago
ver217 be01db37c8
[tensor] refactor chunk mgr and impl MemStatsCollectorV2 (#1077)
3 years ago
Ziyue Jiang b3a03e4bfd
[Tensor] fix equal assert (#1091)
3 years ago
Frank Lee 50ec3a7e06
[test] skip tests when not enough GPUs are detected (#1090)
3 years ago
Frank Lee 65ee6dcc20
[test] ignore 8 gpu test (#1080)
3 years ago
Ziyue Jiang 0653c63eaa
[Tensor] 1d row embedding (#1075)
3 years ago
ver217 1b17859328
[tensor] chunk manager monitor mem usage (#1076)
3 years ago
Ziyue Jiang 4fc748f69b
[Tensor] fix optimizer for CPU parallel (#1069)
3 years ago
Jiarui Fang 49832b2344
[refactory] add nn.parallel module (#1068)
3 years ago
Jiarui Fang a00644079e
reorgnize colotensor directory (#1062)
3 years ago
Ziyue Jiang df9dcbbff6
[Tensor] add hybrid device demo and fix bugs (#1059)
3 years ago
YuliangLiu0306 b167258b6a
[pipeline]refactor ppschedule to support tensor list (#1050)
3 years ago
ver217 51b9a49655
[zero] add zero optimizer for ColoTensor (#1046)
3 years ago
ver217 7faef93326
fix dist spec mgr (#1045)
3 years ago
ver217 9492a561c3
[tensor] ColoTensor supports ZeRo (#1015)
3 years ago
YuliangLiu0306 9feff0f760
[titans]remove model zoo (#1042)
3 years ago
Ziyue Jiang 7c530b9de2
[Tensor] add Parameter inheritance for ColoParameter (#1041)
3 years ago
Ziyue Jiang 6c5996a56e
[Tensor] add module check and bert test (#1031)
3 years ago
YuliangLiu0306 7106bd671d
[p2p]add object list send/recv (#1024)
3 years ago
Ziyue Jiang 32291dd73f
[Tensor] add module handler for linear (#1021)
3 years ago
ver217 cefc29ff06
[tensor] impl ColoDDP for ColoTensor (#1009)
3 years ago
ver217 a3b66f6def
[tensor] refactor parallel action (#1007)
3 years ago
ver217 8e3d0ad8f1
[unit test] refactor test tensor (#1005)
3 years ago
ver217 ad536e308e
[tensor] refactor colo-tensor (#992)
3 years ago
ver217 c2fdc6a011
[tensor] derive compute pattern from dist spec (#971)
3 years ago
Ziyue Jiang 797a9dc5a9
add DistSpec for loss and test_model (#947)
3 years ago
ver217 67c33f57eb
[tensor] design DistSpec and DistSpecManager for ColoTensor (#934)
3 years ago
Ziyue Jiang 830d3bca26
[Tensor] add optimizer to bert test (#933)
3 years ago
Ziyue Jiang d73c2b1d79
[Tensor] fix init context (#931)
3 years ago
Ziyue Jiang dfc88b85ea
[Tensor] simplify named param (#928)
3 years ago
ver217 45b9124df4
[tensor] hijack addmm for colo tensor (#923)
3 years ago
Jiarui Fang 534afb018a
test pretrain loading on multi-process (#922)
3 years ago
Ziyue Jiang c195d2814c
[Tensor] add from_pretrained support and bert pretrained test (#921)
3 years ago
Jiarui Fang 845856ea29
[Graph] building computing graph with ColoTensor, Linear only (#917)
3 years ago
Ziyue Jiang 75d221918a
[Tensor] add 1d vocab loss (#918)
3 years ago
Ziyue Jiang dfaff4e243
[Tensor] fix test_model (#916)
3 years ago
Jiarui Fang ed6426c300
[Tensor] polish model test (#915)
3 years ago
Ziyue Jiang 0fab86b12a
[Tensor] add a basic bert. (#911)
3 years ago
Jiarui Fang ab95ec9aea
[Tensor] init ColoParameter (#914)
3 years ago
Ziyue Jiang 193d629311
update pytest.mark.parametrize in tensor tests (#913)
3 years ago
Ziyue Jiang f593a5637e
[Tensor] add embedding tp1d row (#904)
3 years ago
Ziyue Jiang 2c0d19d755
[Tensor] add ColoTensor TP1Dcol Embedding (#899)
3 years ago
Jiarui Fang d16671da75
[Tensor] initialize the ColoOptimizer (#898)
3 years ago
Jiarui Fang e76f76c08b
[Tensor] test parameters() as member function (#896)
3 years ago
Ziyue Jiang cb182da7c5
[tensor] refine linear and add gather for laynorm (#893)
3 years ago
Jiarui Fang 26c49639d8
[Tensor] overriding paramters() for Module using ColoTensor (#889)
3 years ago
Ziyue Jiang 1d0aba4153
[tensor] add ColoTensor 1Dcol (#888)
3 years ago
Jiarui Fang a0e5971692
[Tensor] test model check results for a simple net (#887)
3 years ago
Jiarui Fang 72cdc06875
[Tensor] make ColoTensor more robust for getattr (#886)
3 years ago
Ziyue Jiang 9bc5a77c31
[tensor] wrap function in the torch_tensor to ColoTensor (#881)
3 years ago
Jiarui Fang 7f76517a85
[Tensor] make a simple net works with 1D row TP (#879)
3 years ago
ver217 c4d903e64a
[gemini] accelerate adjust_layout() (#878)
3 years ago
Jiarui Fang 909211453b
[Tensor] Add some attributes to ColoTensor (#877)
3 years ago
Jiarui Fang e43f83aa5c
[Tensor] get named parameters for model using ColoTensors (#874)
3 years ago
Jiarui Fang 96211c2cc8
[tensor] customized op returns ColoTensor (#875)
3 years ago
Ziyue Jiang 26d4ab8b03
[Tensor] Add function to spec and update linear 1Drow and unit tests (#869)
3 years ago
Jiarui Fang 1190b2c4a4
[tensor] add cross_entrophy_loss (#868)
3 years ago
HELSON 3107817172
[gemini] add stateful tensor container (#867)
3 years ago
Jiarui Fang d01d3b8cb0
colo init context add device attr. (#866)
3 years ago
Jiarui Fang 126ba573a8
[Tensor] add layer norm Op (#852)
3 years ago
Frank Lee 1258af71cc
[ci] cache cuda extension (#860)
3 years ago
Ziyue Jiang bcc8655021
[Tensor ] Add 1Drow weight reshard by spec (#854)
3 years ago
Jiarui Fang 62f059251b
[Tensor] init a tp network training unittest (#849)
3 years ago
Ziyue Jiang 2a0a427e04
[tensor]add assert for colo_tensor 1Drow (#846)
3 years ago
Ziyue Jiang 05023ecfee
[Tensor] TP Linear 1D row (#843)
3 years ago
HELSON e5ea3fdeef
[gemini] add GeminiMemoryManger (#832)
3 years ago
YuliangLiu0306 35ea6e1023
[pipelinable]use pipelinable context to initialize non-pipeline model (#816)
3 years ago
Jiarui Fang ea0a2ed25f
[hotfix] the bug of numel() in ColoTensor (#845)
3 years ago
Jiarui Fang 8789850eea
Init Conext supports lazy allocate model memory (#842)
3 years ago
Frank Lee 943982d29a
[unittest] refactored unit tests for change in dependency (#838)
3 years ago
Frank Lee 01e9f834f5
[dependency] removed torchvision (#833)
3 years ago
Jiarui Fang cb5a4778e1
Revert "[WIP] Applying ColoTensor on TP-1D-row Linear. (#831)" (#835)
3 years ago
Jiarui Fang ac88de6dfc
[WIP] Applying ColoTensor on TP-1D-row Linear. (#831)
3 years ago
Jiarui Fang 294a6060d0
[tensor] ZeRO use ColoTensor as the base class. (#828)
3 years ago
Ziyue Jiang 8e6fdb4f29
[tensor]fix test_linear (#826)
3 years ago
Ziyue Jiang 1a9e2c2dff
[tensor] fix kwargs in colo_tensor torch_funtion (#825)
3 years ago
Jiarui Fang 2ecc3d7a55
[tensor] lazy init (#823)
3 years ago
Jiarui Fang 660d2d1f1b
[Tensor] apply ColoTensor on Torch functions (#821)
3 years ago
Jiarui Fang 0ce8924ceb
[tensor] reorganize files (#820)
3 years ago
Jiarui Fang ab962b9735
[gemini] a new tensor structure (#818)
3 years ago
Jiarui Fang e761ad2cd7
Revert "[zero] add ZeroTensorShardStrategy (#793)" (#806)
3 years ago
HELSON 88759e289e
[zero] add ZeroTensorShardStrategy (#793)
3 years ago
Jiarui Fang 681addb512
[refactor] moving grad acc logic to engine (#804)
3 years ago
Jiarui Fang 4d9332b4c5
[refactor] moving memtracer to gemini (#801)
3 years ago
HELSON 4c4388c46e
[hotfix] fix memory leak in zero (#781)
3 years ago
Frank Lee 5a1a095b92
[test] refactored with the new rerun decorator (#763)
3 years ago
Jiarui Fang 10ef8afdd2
[gemini] init genimi individual directory (#754)
3 years ago
ver217 dcca614eee
[hotfix] fix test_stateful_tensor_mgr (#762)
3 years ago
ver217 a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model (#756)
3 years ago
HELSON 84c6700b2a
[zero] refactor memstats_collector (#746)
3 years ago
ver217 e396bb71f2
[zero] add tensor placement policies (#743)
3 years ago
HELSON 22c4b88d56
[zero] refactor ShardedParamV2 for convenience (#742)
3 years ago
Frank Lee f4f42d4c3c
[bug] fixed DDP compatibility with torch 1.8 (#739)
3 years ago
Jiarui Fang 53cb584808
[utils] correct cpu memory used and capacity in the context of multi-process (#726)
3 years ago
HELSON b9b469ea50
[moe] add checkpoint for moe zero test (#729)
3 years ago
FrankLeeeee e88a498c9c
[test] removed trivial outdated test
3 years ago
FrankLeeeee 62b4ce7326
[test] added missing decorators to model checkpointing tests
3 years ago
Jiarui Fang 4d90a7b513
[refactor] zero directory (#724)
3 years ago
Frank Lee 20ab1f5520
[bug] fixed broken test_found_inf (#725)
3 years ago
Jiarui Fang 193dc8dacb
[refactor] refactor the memory utils (#715)
3 years ago
HELSON dbd96fe90a
[zero] check whether gradients have inf and nan in gpu (#712)
3 years ago
HELSON a9b8300d54
[zero] improve adaptability for not-shard parameters (#708)
3 years ago
ver217 ab8c6b4a0e
[zero] refactor memstats collector (#706)
3 years ago
HELSON ee112fe1da
[zero] adapt zero hooks for unsharded module (#699)
3 years ago
ver217 3c9cd5bb5e
[zero] stateful tensor manager (#687)
3 years ago
HELSON d7ecaf362b
[zero] fix init bugs in zero context (#686)
3 years ago
Jiarui Fang 0aab52301e
[hotfix] fix a bug in model data stats tracing (#655)
3 years ago
YuliangLiu0306 ade05a5d83
[refactor] pipeline, put runtime schedule into engine. (#627)
3 years ago
HELSON e5d615aeee
[hotfix] fix bugs in testing (#659)
3 years ago
HELSON b31daed4cf
fix bugs in CPU adam (#633)
3 years ago
HELSON 055fbf5be6
[zero] adapt zero for unsharded paramters (Optimizer part) (#601)
3 years ago
アマデウス 354b7954d1
[model checkpoint] added unit tests for checkpoint save/load (#599)
3 years ago
FredHuang99 93f14d2a33
[zero] test zero tensor utils (#609)
3 years ago
Jiarui Fang e956d93ac2
[refactor] memory utils (#577)
3 years ago
HELSON e6d50ec107
[zero] adapt zero for unsharded parameters (#561)
3 years ago
ver217 7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param (#571)
3 years ago
Jiarui Fang 7675366fce
[polish] rename col_attr -> colo_attr (#558)
3 years ago
ver217 014bac0c49
[zero] hijack p.grad in sharded model (#554)
3 years ago
Jiarui Fang f552b11294
[zero] label state for param fp16 and grad (#551)
3 years ago
Jiarui Fang 214da761d4
[zero] add stateful tensor (#549)
3 years ago
HELSON 8c90d4df54
[zero] add zero context manager to change config during initialization (#546)
3 years ago
Liang Bowen ec5086c49c
Refactored docstring to google style
3 years ago
Jiarui Fang 53b1b6e340
[zero] non model data tracing (#545)
3 years ago
ver217 1f90a3b129
[zero] polish ZeroInitContext (#540)
3 years ago
Jiarui Fang c11ff81b15
[zero] get memory usage of sharded optim v2. (#542)
3 years ago
HELSON a30e2b4c24
[zero] adapt for no-leaf module in zero (#535)
3 years ago
Jiarui Fang 705f56107c
[zero] refactor model data tracing (#537)
3 years ago
Jiarui Fang a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param (#538)
3 years ago
Jiarui Fang 37cb70feec
[zero] get memory usage for sharded param (#536)
3 years ago
LuGY 105c5301c3
[zero]added hybrid adam, removed loss scale in adam (#527)
3 years ago
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522)
3 years ago
Frank Lee 3601b2bad0
[test] fixed rerun_on_exception and adapted test cases (#487)
3 years ago
Jiarui Fang 4d322b79da
[refactor] remove old zero code (#517)
3 years ago
LuGY 6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497)
3 years ago
Jiarui Fang 920c5889a7
[zero] add colo move inline (#521)
3 years ago
Jiarui Fang 0bebda6ea5
[zero] fix init device bug in zero init context unittest (#516)
3 years ago
Jiarui Fang 7ef3507ace
[zero] show model data cuda memory usage after zero context init. (#515)
3 years ago
Jiarui Fang 9330be0f3c
[memory] set cuda mem frac (#506)
3 years ago
Jiarui Fang 0035b7be07
[memory] add model data tensor moving api (#503)
3 years ago
Jiarui Fang a445e118cf
[polish] polish singleton and global context (#500)
3 years ago
ver217 9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard (#495)
3 years ago
ver217 62b0a8d644
[zero] sharded optim support hybrid cpu adam (#486)
3 years ago
Jiarui Fang b334822163
[zero] polish sharded param name (#484)
3 years ago
Jiarui Fang 65c0f380c2
[format] polish name format for MOE (#481)
3 years ago
HELSON 7544347145
[MOE] add unitest for MOE experts layout, gradient handler and kernel (#469)
3 years ago
HELSON 84fd7c1d4d
add moe context, moe utilities and refactor gradient handler (#455)
3 years ago
Frank Lee af185b5519
[test] fixed amp convergence comparison test (#454)
3 years ago
ver217 a241f61b34
[zero] Update initialize for ZeRO (#458)
3 years ago
ver217 642846d6f9
update sharded optim and fix zero init ctx (#457)
3 years ago
Jiarui Fang e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" (#456)
3 years ago
ver217 8cf7ff08cf
polish code
3 years ago
ver217 46add4a5c5
remove surplus imports
3 years ago
ver217 57567ee768
update sharded optim and fix zero init ctx
3 years ago
Frank Lee f27d801a13
[test] optimized zero data parallel test (#452)
3 years ago
Jiarui Fang 0fcfb1e00d
[test] make zero engine test really work (#447)
3 years ago
Frank Lee bb2790cf0b
optimize engine and trainer test (#448)
3 years ago
Frank Lee b72b8445c6
optimized context test time consumption (#446)
3 years ago
Jiarui Fang 496cbb0760
[hotfix] fix initialize bug with zero (#442)
3 years ago
Jiarui Fang 17b8274f8a
[unitest] polish zero config in unittest (#438)
3 years ago
Jiarui Fang 640a6cd304
[refactory] refactory the initialize method for new zero design (#431)
3 years ago
ver217 fce9432f08
sync before creating empty grad
3 years ago
Jiarui Fang f9c762df85
[test] merge zero optim tests (#428)
3 years ago
Jiarui Fang 5d7dc3525b
[hotfix] run cpu adam unittest in pytest (#424)
3 years ago
Jiarui Fang adebb3e041
[zero] cuda margin space for OS (#418)
3 years ago
Jiarui Fang 56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER (#417)
3 years ago
Jiarui Fang 23ba3fc450
[zero] refactory ShardedOptimV2 init method (#416)
3 years ago
Frank Lee e79ea44247
[fp16] refactored fp16 optimizer (#392)
3 years ago