Commit Graph

768 Commits (788e07dbc5dc5acaf34e24d98238780ecf134ef2)

Author SHA1 Message Date
YuliangLiu0306 0f3042363c
[tensor] shape consistency generate transform path and communication cost (#1435)
2 years ago
Boyuan Yao 5774fe0270
[fx] Use colossalai checkpoint and add offload recognition in codegen (#1439)
2 years ago
Kirigaya Kazuto e9460b45c8
[engin/schedule] use p2p_v2 to recontruct pipeline_schedule (#1408)
2 years ago
Frank Lee ae1b58cd16
[tensor] added linear implementation for the new sharding spec (#1416)
2 years ago
Super Daniel d40a9392ba
[fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174. (#1446)
2 years ago
ver217 821c6172e2
[utils] Impl clip_grad_norm for ColoTensor and ZeroOptimizer (#1442)
2 years ago
HELSON b80340168e
[zero] add chunk_managerV2 for all-gather chunk (#1441)
2 years ago
Super Daniel 3b26516c69
[fx] add vanilla activation checkpoint search with test on resnet and densenet (#1433)
2 years ago
Jiarui Fang 30b4dd17c0
[FAW] export FAW in _ops (#1438)
2 years ago
HELSON 9056677b13
[zero] add chunk size searching algorithm for parameters in different groups (#1436)
2 years ago
HELSON 039b7ed3bc
[polish] add update directory in gemini; rename AgChunk to ChunkV2 (#1432)
2 years ago
Super Daniel f20cb4e893
[fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages (#1425)
2 years ago
Jiarui Fang 89c434a0a6
[polish] add test_ops directory (#1431)
2 years ago
Jiarui Fang 10b3df65c8
[FAW] move coloparam setting in test code. (#1429)
2 years ago
Jiarui Fang cb98cf5558
[FAW] parallel FreqAwareEmbedding (#1424)
2 years ago
HELSON 0d212183c4
[zero] add has_inf_or_nan in AgChunk; enhance the unit test of AgChunk (#1426)
2 years ago
YuliangLiu0306 33f0744d51
[tensor] add shape consistency feature to support auto spec transform (#1418)
2 years ago
HELSON 4fb3c52cf0
[zero] add unit test for AgChunk's append, close, access (#1423)
2 years ago
Jiarui Fang d209aff684
Add FreqAwareEmbeddingBag (#1421)
2 years ago
Jiarui Fang 504419d261
[FAW] add cache manager for the cached embedding (#1419)
2 years ago
Kirigaya Kazuto 44fd3c83ab
[communication] add p2p_v2.py to support communication with List[Any] (#1407)
2 years ago
YuliangLiu0306 7c96055c68
[tensor]build sharding spec to replace distspec in future. (#1405)
2 years ago
ver217 12b4887097
[hotfix] fix CPUAdam kernel nullptr (#1410)
2 years ago
YuliangLiu0306 0442f940f0
[device] add DeviceMesh class to support logical device layout (#1394)
2 years ago
HELSON 4e98e938ce
[zero] alleviate memory usage in ZeRODDP state_dict (#1398)
2 years ago
Frank Lee adf5054ff8
[fx] fixed torchaudio conformer tracing (#1392)
2 years ago
Frank Lee 7d6293927f
[fx] patched torch.max and data movement operator (#1391)
2 years ago
HELSON 527758b2ae
[hotfix] fix a running error in test_colo_checkpoint.py (#1387)
2 years ago
ver217 8dced41ad0
[zero] zero optim state_dict takes only_rank_0 (#1384)
2 years ago
ver217 7d5d628e07
[DDP] test ddp state dict uses more strict threshold (#1382)
2 years ago
ver217 828b9e5e0d
[hotfix] fix zero optim save/load state dict (#1381)
2 years ago
Super Daniel be229217ce
[fx] add torchaudio test (#1369)
2 years ago
Boyuan Yao bb640ec728
[fx] Add colotracer compatibility test on torchrec (#1370)
2 years ago
ver217 c415240db6
[nvme] CPUAdam and HybridAdam support NVMe offload (#1360)
2 years ago
HELSON 87775a0682
[colotensor] use cpu memory to store state_dict (#1367)
2 years ago
Frank Lee cd063ac37f
[fx] added activation checkpoint codegen support for torch < 1.12 (#1359)
2 years ago
HELSON 4417804129
[unit test] add megatron init test in zero_optim (#1358)
2 years ago
HELSON 7a065dc9f6
[hotfix] fix megatron_init in test_gpt2.py (#1357)
2 years ago
Frank Lee 644582eee9
[fx] added activation checkpoint codegen (#1355)
2 years ago
Frank Lee 05fae1fd56
[fx] added activation checkpointing annotation (#1349)
2 years ago
HELSON 7a8702c06d
[colotensor] add Tensor.view op and its unit test (#1343)
2 years ago
YuliangLiu0306 942c8cd1fb
[fx] refactor tracer to trace complete graph (#1342)
2 years ago
Frank Lee 2cc1175c76
[fx] tested the complete workflow for auto-parallel (#1336)
2 years ago
YuliangLiu0306 4631fef8a0
[fx]refactor tracer (#1335)
2 years ago
HELSON bf5066fba7
[refactor] refactor ColoTensor's unit tests (#1340)
2 years ago
HELSON f92c100ddd
[checkpoint] use gather_tensor in checkpoint and update its unit test (#1339)
2 years ago
Frank Lee f3ce7b8336
[fx] recovered skipped pipeline tests (#1338)
2 years ago
ver217 0c51ff2c13
[hotfix] ZeroDDP use new process group (#1333)
2 years ago
Frank Lee 75abc75c15
[fx] fixed compatiblity issue with torch 1.10 (#1331)
2 years ago
Frank Lee 169954f87e
[test] removed outdated unit test for meta context (#1329)
2 years ago
ver217 7a05367101
[hotfix] shared model returns cpu state_dict (#1328)
2 years ago
Frank Lee b2475d8c5c
[fx] fixed unit tests for torch 1.12 (#1327)
2 years ago
HELSON d49708ae43
[hotfix] fix ddp for unit test test_gpt2 (#1326)
2 years ago
Frank Lee 250be4d31e
[utils] integrated colotensor with lazy init context (#1324)
2 years ago
YuliangLiu0306 e8acf55e8b
[fx] add balanced policy v2 (#1251)
2 years ago
XYE ca2d3f284f
[fx] Add unit test and fix bugs for transform_mlp_pass (#1299)
2 years ago
HELSON 1b41686461
[hotfix] fix unit test test_module_spec (#1321)
2 years ago
Jiarui Fang 9e4c6449b0
[checkpoint] add ColoOptimizer checkpointing (#1316)
2 years ago
Jiarui Fang 85f933b58b
[Optimizer] Remove useless ColoOptimizer (#1312)
2 years ago
Jiarui Fang 9f10524313
[Optimizer] polish the init method of ColoOptimizer (#1310)
2 years ago
HELSON 36086927e1
[hotfix] fix ColoTensor GPT2 unitest (#1309)
2 years ago
Jiarui Fang 3ef3791a3b
[checkpoint] add test for bert and hotfix save bugs (#1297)
2 years ago
Jiarui Fang bd71e2a88b
[hotfix] add missing file (#1308)
2 years ago
Frank Lee 4f4d8c3656
[fx] added apex normalization to patched modules (#1300)
2 years ago
Jiarui Fang 4165eabb1e
[hotfix] remove potiential circle import (#1307)
2 years ago
YuliangLiu0306 93a75433df
[hotfix] skip some unittest due to CI environment. (#1301)
2 years ago
HELSON 260a55804a
[hotfix] fix shape error in backward when using ColoTensor (#1298)
2 years ago
Frank Lee 7e8114a8dd
[hotfix] skipped unsafe test cases (#1282)
2 years ago
Jiarui Fang 79fe7b027a
[hotfix] test model unittest hotfix (#1281)
2 years ago
Jiarui Fang e56731e916
[hotfix] test_gpt.py duplicated (#1279)
2 years ago
HELSON abba4d84e1
[hotfix] fix bert model test in unitests (#1272)
2 years ago
YuliangLiu0306 01ea68b2e6
[tests] remove T5 test skip decorator (#1271)
2 years ago
Jiarui Fang ca9d5ee91c
[hotfix] torchvison fx unittests miss import pytest (#1277)
2 years ago
Jiarui Fang c92f84fcdb
[tensor] distributed checkpointing for parameters (#1240)
2 years ago
Frank Lee 4a09fc0947
[fx] fixed tracing with apex-based T5 model (#1252)
2 years ago
YuliangLiu0306 97d713855a
[fx] methods to get fx graph property. (#1246)
2 years ago
YuliangLiu0306 30b4fc0eb0
[fx]add split module pass and unit test from pipeline passes (#1242)
2 years ago
Jiarui Fang 1aad903c15
[tensor] redistribute among different process groups (#1247)
2 years ago
Jiarui Fang 9bcd2fd4af
[tensor] a shorter shard and replicate spec (#1245)
2 years ago
Jiarui Fang 2699dfbbfd
[rename] convert_to_dist -> redistribute (#1243)
2 years ago
HELSON f6add9b720
[tensor] redirect .data.__get__ to a tensor instance (#1239)
2 years ago
Jiarui Fang 20da6e48c8
[checkpoint] save sharded optimizer states (#1237)
2 years ago
Jiarui Fang 4a76084dc9
[tensor] add zero_like colo op, important for Optimizer (#1236)
2 years ago
Jiarui Fang 3b500984b1
[tensor] fix some unittests (#1234)
2 years ago
HELSON 0453776def
[tensor] fix a assertion in colo_tensor cross_entropy (#1232)
2 years ago
Jiarui Fang 0e199d71e8
[hotfix] fx get comm size bugs (#1233)
2 years ago
HELSON 42ab36b762
[tensor] add unitest for colo_tensor 1DTP cross_entropy (#1230)
2 years ago
Yi Zhao 04537bf83e
[checkpoint]support generalized scheduler (#1222)
2 years ago
Jiarui Fang a98319f023
[tensor] torch function return colotensor (#1229)
2 years ago
Frank Lee 5581170890
[fx] fixed huggingface OPT and T5 results misalignment (#1227)
2 years ago
YuliangLiu0306 2b7dca44b5
[fx]get communication size between partitions (#1224)
2 years ago
Frank Lee 84f2298a96
[fx] added patches for tracing swin transformer (#1228)
2 years ago
Frank Lee 37fcf96b7f
[fx] fixed timm tracing result misalignment (#1225)
2 years ago
Frank Lee b6cb5a47ad
[fx] added timm model tracing testing (#1221)
2 years ago
Jiarui Fang 15d988f954
[tensor] sharded global process group (#1219)
2 years ago
Frank Lee 11973d892d
[fx] added torchvision model tracing testing (#1216)
2 years ago
Jiarui Fang 52736205d9
[checkpoint] make unitest faster (#1217)
2 years ago
Jiarui Fang f38006ea83
[checkpoint] checkpoint for ColoTensor Model (#1196)
2 years ago
Jiarui Fang ae7d3f4927
[refactor] move process group from _DistSpec to ColoTensor. (#1203)
2 years ago
Frank Lee 5da87ce35d
[fx] added testing for all albert variants (#1211)
2 years ago
Frank Lee 2d13a45a3b
[fx] added testing for all gpt variants (#1210)
2 years ago
YuliangLiu0306 189946c5c4
[fx]add uniform policy (#1208)
2 years ago
Frank Lee 426a279ce7
[fx] added testing for all bert variants (#1207)
2 years ago
Frank Lee f7878f465c
[fx] supported model tracing for huggingface bert (#1201)
2 years ago
Jiarui Fang 060b917daf
[refactor] remove gpc dependency in colotensor's _ops (#1189)
2 years ago
Frank Lee abf6a262dc
[fx] added module patch for pooling layers (#1197)
2 years ago
YuliangLiu0306 63d2a93878
[context]support arbitary module materialization. (#1193)
2 years ago
YuliangLiu0306 2053e138a2
[context]use meta tensor to init model lazily. (#1187)
2 years ago
Frank Lee 2c8c05675d
[fx] patched conv and normalization (#1188)
2 years ago
Frank Lee 6d86f1bc91
[fx] supported data-dependent control flow in model tracing (#1185)
2 years ago
Jiarui Fang c463f8adf9
[tensor] remove gpc in tensor tests (#1186)
2 years ago
Jiarui Fang 372f791444
[refactor] move chunk and chunkmgr to directory gemini (#1182)
2 years ago
ver217 6b2f2ab9bb
[ddp] ColoDDP uses bucket all-reduce (#1177)
2 years ago
Jiarui Fang 7487215b95
[ColoTensor] add independent process group (#1179)
2 years ago
Jiarui Fang 1b657f9ce1
[tensor] revert local view back (#1178)
2 years ago
Jiarui Fang 0dd4e2bbfb
[Tensor] rename some APIs in TensorSpec and Polish view unittest (#1176)
2 years ago
Jiarui Fang aa7bef73d4
[Tensor] distributed view supports inter-process hybrid parallel (#1169)
2 years ago
ver217 9e1daa63d2
[zero] sharded optim supports loading local state dict (#1170)
2 years ago
ver217 561e90493f
[zero] zero optim supports loading local state dict (#1171)
2 years ago
Jiarui Fang 4b9bba8116
[ColoTensor] rename APIs and add output_replicate to ComputeSpec (#1168)
2 years ago
Jiarui Fang f4ef224358
[Tensor] remove ParallelAction, use ComputeSpec instread (#1166)
2 years ago
Jiarui Fang 177c374401
remove gather out in parallel action (#1163)
2 years ago
Jiarui Fang 07f9c781f9
[graph] improve the graph building. (#1157)
2 years ago
ver217 22717a856f
[tensor] add embedding bag op (#1156)
2 years ago
ver217 ae86151968
[tensor] add more element-wise ops (#1155)
2 years ago
ver217 ffa025e120
[tensor] dist spec s2s uses all-to-all (#1136)
2 years ago
Jiarui Fang ff644ee5e4
polish unitest test with titans (#1152)
2 years ago
Jiarui Fang 8cdce0399c
[ColoTensor] improves init functions. (#1150)
2 years ago
ver217 8106d7b8c7
[ddp] refactor ColoDDP and ZeroDDP (#1146)
2 years ago
ver217 d26902645e
[ddp] add save/load state dict for ColoDDP (#1127)
2 years ago
ver217 789cad301b
[hotfix] fix param op hook (#1131)
2 years ago
ver217 f0a954f16d
[ddp] add set_params_to_ignore for ColoDDP (#1122)
2 years ago
YuliangLiu0306 fcf55777dd
[fx]add autoparallel passes (#1121)
2 years ago
Frank Lee 16302a5359
[fx] added unit test for coloproxy (#1119)
2 years ago
ver217 7d14b473f0
[gemini] gemini mgr supports "cpu" placement policy (#1118)
2 years ago
Frank Lee 53297330c0
[test] fixed hybrid parallel test case on 8 GPUs (#1106)
2 years ago
ver217 1f894e033f
[gemini] zero supports gemini (#1093)
3 years ago
Frank Lee 2b2dc1c86b
[pipeline] refactor the pipeline module (#1087)
3 years ago
Frank Lee bad5d4c0a1
[context] support lazy init of module (#1088)
3 years ago
ver217 be01db37c8
[tensor] refactor chunk mgr and impl MemStatsCollectorV2 (#1077)
3 years ago
Ziyue Jiang b3a03e4bfd
[Tensor] fix equal assert (#1091)
3 years ago
Frank Lee 50ec3a7e06
[test] skip tests when not enough GPUs are detected (#1090)
3 years ago
Frank Lee 65ee6dcc20
[test] ignore 8 gpu test (#1080)
3 years ago
Ziyue Jiang 0653c63eaa
[Tensor] 1d row embedding (#1075)
3 years ago
ver217 1b17859328
[tensor] chunk manager monitor mem usage (#1076)
3 years ago
Ziyue Jiang 4fc748f69b
[Tensor] fix optimizer for CPU parallel (#1069)
3 years ago
Jiarui Fang 49832b2344
[refactory] add nn.parallel module (#1068)
3 years ago
Jiarui Fang a00644079e
reorgnize colotensor directory (#1062)
3 years ago
Ziyue Jiang df9dcbbff6
[Tensor] add hybrid device demo and fix bugs (#1059)
3 years ago
YuliangLiu0306 b167258b6a
[pipeline]refactor ppschedule to support tensor list (#1050)
3 years ago
ver217 51b9a49655
[zero] add zero optimizer for ColoTensor (#1046)
3 years ago
ver217 7faef93326
fix dist spec mgr (#1045)
3 years ago
ver217 9492a561c3
[tensor] ColoTensor supports ZeRo (#1015)
3 years ago
YuliangLiu0306 9feff0f760
[titans]remove model zoo (#1042)
3 years ago
Ziyue Jiang 7c530b9de2
[Tensor] add Parameter inheritance for ColoParameter (#1041)
3 years ago
Ziyue Jiang 6c5996a56e
[Tensor] add module check and bert test (#1031)
3 years ago
YuliangLiu0306 7106bd671d
[p2p]add object list send/recv (#1024)
3 years ago
Ziyue Jiang 32291dd73f
[Tensor] add module handler for linear (#1021)
3 years ago
ver217 cefc29ff06
[tensor] impl ColoDDP for ColoTensor (#1009)
3 years ago
ver217 a3b66f6def
[tensor] refactor parallel action (#1007)
3 years ago
ver217 8e3d0ad8f1
[unit test] refactor test tensor (#1005)
3 years ago
ver217 ad536e308e
[tensor] refactor colo-tensor (#992)
3 years ago
ver217 c2fdc6a011
[tensor] derive compute pattern from dist spec (#971)
3 years ago
Ziyue Jiang 797a9dc5a9
add DistSpec for loss and test_model (#947)
3 years ago
ver217 67c33f57eb
[tensor] design DistSpec and DistSpecManager for ColoTensor (#934)
3 years ago
Ziyue Jiang 830d3bca26
[Tensor] add optimizer to bert test (#933)
3 years ago
Ziyue Jiang d73c2b1d79
[Tensor] fix init context (#931)
3 years ago
Ziyue Jiang dfc88b85ea
[Tensor] simplify named param (#928)
3 years ago
ver217 45b9124df4
[tensor] hijack addmm for colo tensor (#923)
3 years ago
Jiarui Fang 534afb018a
test pretrain loading on multi-process (#922)
3 years ago
Ziyue Jiang c195d2814c
[Tensor] add from_pretrained support and bert pretrained test (#921)
3 years ago
Jiarui Fang 845856ea29
[Graph] building computing graph with ColoTensor, Linear only (#917)
3 years ago
Ziyue Jiang 75d221918a
[Tensor] add 1d vocab loss (#918)
3 years ago
Ziyue Jiang dfaff4e243
[Tensor] fix test_model (#916)
3 years ago
Jiarui Fang ed6426c300
[Tensor] polish model test (#915)
3 years ago
Ziyue Jiang 0fab86b12a
[Tensor] add a basic bert. (#911)
3 years ago
Jiarui Fang ab95ec9aea
[Tensor] init ColoParameter (#914)
3 years ago
Ziyue Jiang 193d629311
update pytest.mark.parametrize in tensor tests (#913)
3 years ago
Ziyue Jiang f593a5637e
[Tensor] add embedding tp1d row (#904)
3 years ago
Ziyue Jiang 2c0d19d755
[Tensor] add ColoTensor TP1Dcol Embedding (#899)
3 years ago
Jiarui Fang d16671da75
[Tensor] initialize the ColoOptimizer (#898)
3 years ago
Jiarui Fang e76f76c08b
[Tensor] test parameters() as member function (#896)
3 years ago
Ziyue Jiang cb182da7c5
[tensor] refine linear and add gather for laynorm (#893)
3 years ago
Jiarui Fang 26c49639d8
[Tensor] overriding paramters() for Module using ColoTensor (#889)
3 years ago
Ziyue Jiang 1d0aba4153
[tensor] add ColoTensor 1Dcol (#888)
3 years ago
Jiarui Fang a0e5971692
[Tensor] test model check results for a simple net (#887)
3 years ago
Jiarui Fang 72cdc06875
[Tensor] make ColoTensor more robust for getattr (#886)
3 years ago
Ziyue Jiang 9bc5a77c31
[tensor] wrap function in the torch_tensor to ColoTensor (#881)
3 years ago
Jiarui Fang 7f76517a85
[Tensor] make a simple net works with 1D row TP (#879)
3 years ago
ver217 c4d903e64a
[gemini] accelerate adjust_layout() (#878)
3 years ago
Jiarui Fang 909211453b
[Tensor] Add some attributes to ColoTensor (#877)
3 years ago
Jiarui Fang e43f83aa5c
[Tensor] get named parameters for model using ColoTensors (#874)
3 years ago
Jiarui Fang 96211c2cc8
[tensor] customized op returns ColoTensor (#875)
3 years ago
Ziyue Jiang 26d4ab8b03
[Tensor] Add function to spec and update linear 1Drow and unit tests (#869)
3 years ago
Jiarui Fang 1190b2c4a4
[tensor] add cross_entrophy_loss (#868)
3 years ago
HELSON 3107817172
[gemini] add stateful tensor container (#867)
3 years ago
Jiarui Fang d01d3b8cb0
colo init context add device attr. (#866)
3 years ago
Jiarui Fang 126ba573a8
[Tensor] add layer norm Op (#852)
3 years ago
Frank Lee 1258af71cc
[ci] cache cuda extension (#860)
3 years ago
Ziyue Jiang bcc8655021
[Tensor ] Add 1Drow weight reshard by spec (#854)
3 years ago
Jiarui Fang 62f059251b
[Tensor] init a tp network training unittest (#849)
3 years ago
Ziyue Jiang 2a0a427e04
[tensor]add assert for colo_tensor 1Drow (#846)
3 years ago
Ziyue Jiang 05023ecfee
[Tensor] TP Linear 1D row (#843)
3 years ago
HELSON e5ea3fdeef
[gemini] add GeminiMemoryManger (#832)
3 years ago
YuliangLiu0306 35ea6e1023
[pipelinable]use pipelinable context to initialize non-pipeline model (#816)
3 years ago
Jiarui Fang ea0a2ed25f
[hotfix] the bug of numel() in ColoTensor (#845)
3 years ago
Jiarui Fang 8789850eea
Init Conext supports lazy allocate model memory (#842)
3 years ago
Frank Lee 943982d29a
[unittest] refactored unit tests for change in dependency (#838)
3 years ago
Frank Lee 01e9f834f5
[dependency] removed torchvision (#833)
3 years ago
Jiarui Fang cb5a4778e1
Revert "[WIP] Applying ColoTensor on TP-1D-row Linear. (#831)" (#835)
3 years ago
Jiarui Fang ac88de6dfc
[WIP] Applying ColoTensor on TP-1D-row Linear. (#831)
3 years ago
Jiarui Fang 294a6060d0
[tensor] ZeRO use ColoTensor as the base class. (#828)
3 years ago
Ziyue Jiang 8e6fdb4f29
[tensor]fix test_linear (#826)
3 years ago
Ziyue Jiang 1a9e2c2dff
[tensor] fix kwargs in colo_tensor torch_funtion (#825)
3 years ago
Jiarui Fang 2ecc3d7a55
[tensor] lazy init (#823)
3 years ago
Jiarui Fang 660d2d1f1b
[Tensor] apply ColoTensor on Torch functions (#821)
3 years ago
Jiarui Fang 0ce8924ceb
[tensor] reorganize files (#820)
3 years ago
Jiarui Fang ab962b9735
[gemini] a new tensor structure (#818)
3 years ago
Jiarui Fang e761ad2cd7
Revert "[zero] add ZeroTensorShardStrategy (#793)" (#806)
3 years ago
HELSON 88759e289e
[zero] add ZeroTensorShardStrategy (#793)
3 years ago
Jiarui Fang 681addb512
[refactor] moving grad acc logic to engine (#804)
3 years ago
Jiarui Fang 4d9332b4c5
[refactor] moving memtracer to gemini (#801)
3 years ago
HELSON 4c4388c46e
[hotfix] fix memory leak in zero (#781)
3 years ago
Frank Lee 5a1a095b92
[test] refactored with the new rerun decorator (#763)
3 years ago
Jiarui Fang 10ef8afdd2
[gemini] init genimi individual directory (#754)
3 years ago
ver217 dcca614eee
[hotfix] fix test_stateful_tensor_mgr (#762)
3 years ago
ver217 a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model (#756)
3 years ago
HELSON 84c6700b2a
[zero] refactor memstats_collector (#746)
3 years ago
ver217 e396bb71f2
[zero] add tensor placement policies (#743)
3 years ago
HELSON 22c4b88d56
[zero] refactor ShardedParamV2 for convenience (#742)
3 years ago
Frank Lee f4f42d4c3c
[bug] fixed DDP compatibility with torch 1.8 (#739)
3 years ago
Jiarui Fang 53cb584808
[utils] correct cpu memory used and capacity in the context of multi-process (#726)
3 years ago
HELSON b9b469ea50
[moe] add checkpoint for moe zero test (#729)
3 years ago
FrankLeeeee e88a498c9c
[test] removed trivial outdated test
3 years ago
FrankLeeeee 62b4ce7326
[test] added missing decorators to model checkpointing tests
3 years ago
Jiarui Fang 4d90a7b513
[refactor] zero directory (#724)
3 years ago
Frank Lee 20ab1f5520
[bug] fixed broken test_found_inf (#725)
3 years ago
Jiarui Fang 193dc8dacb
[refactor] refactor the memory utils (#715)
3 years ago
HELSON dbd96fe90a
[zero] check whether gradients have inf and nan in gpu (#712)
3 years ago
HELSON a9b8300d54
[zero] improve adaptability for not-shard parameters (#708)
3 years ago
ver217 ab8c6b4a0e
[zero] refactor memstats collector (#706)
3 years ago
HELSON ee112fe1da
[zero] adapt zero hooks for unsharded module (#699)
3 years ago
ver217 3c9cd5bb5e
[zero] stateful tensor manager (#687)
3 years ago
HELSON d7ecaf362b
[zero] fix init bugs in zero context (#686)
3 years ago
Jiarui Fang 0aab52301e
[hotfix] fix a bug in model data stats tracing (#655)
3 years ago
YuliangLiu0306 ade05a5d83
[refactor] pipeline, put runtime schedule into engine. (#627)
3 years ago
HELSON e5d615aeee
[hotfix] fix bugs in testing (#659)
3 years ago
HELSON b31daed4cf
fix bugs in CPU adam (#633)
3 years ago
HELSON 055fbf5be6
[zero] adapt zero for unsharded paramters (Optimizer part) (#601)
3 years ago
アマデウス 354b7954d1
[model checkpoint] added unit tests for checkpoint save/load (#599)
3 years ago
FredHuang99 93f14d2a33
[zero] test zero tensor utils (#609)
3 years ago
Jiarui Fang e956d93ac2
[refactor] memory utils (#577)
3 years ago
HELSON e6d50ec107
[zero] adapt zero for unsharded parameters (#561)
3 years ago
ver217 7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param (#571)
3 years ago
Jiarui Fang 7675366fce
[polish] rename col_attr -> colo_attr (#558)
3 years ago
ver217 014bac0c49
[zero] hijack p.grad in sharded model (#554)
3 years ago
Jiarui Fang f552b11294
[zero] label state for param fp16 and grad (#551)
3 years ago
Jiarui Fang 214da761d4
[zero] add stateful tensor (#549)
3 years ago
HELSON 8c90d4df54
[zero] add zero context manager to change config during initialization (#546)
3 years ago
Liang Bowen ec5086c49c
Refactored docstring to google style
3 years ago
Jiarui Fang 53b1b6e340
[zero] non model data tracing (#545)
3 years ago
ver217 1f90a3b129
[zero] polish ZeroInitContext (#540)
3 years ago
Jiarui Fang c11ff81b15
[zero] get memory usage of sharded optim v2. (#542)
3 years ago
HELSON a30e2b4c24
[zero] adapt for no-leaf module in zero (#535)
3 years ago
Jiarui Fang 705f56107c
[zero] refactor model data tracing (#537)
3 years ago
Jiarui Fang a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param (#538)
3 years ago
Jiarui Fang 37cb70feec
[zero] get memory usage for sharded param (#536)
3 years ago
LuGY 105c5301c3
[zero]added hybrid adam, removed loss scale in adam (#527)
3 years ago
Jiarui Fang 8d8c5407c0
[zero] refactor model data tracing (#522)
3 years ago
Frank Lee 3601b2bad0
[test] fixed rerun_on_exception and adapted test cases (#487)
3 years ago
Jiarui Fang 4d322b79da
[refactor] remove old zero code (#517)
3 years ago
LuGY 6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 (#497)
3 years ago
Jiarui Fang 920c5889a7
[zero] add colo move inline (#521)
3 years ago
Jiarui Fang 0bebda6ea5
[zero] fix init device bug in zero init context unittest (#516)
3 years ago
Jiarui Fang 7ef3507ace
[zero] show model data cuda memory usage after zero context init. (#515)
3 years ago
Jiarui Fang 9330be0f3c
[memory] set cuda mem frac (#506)
3 years ago
Jiarui Fang 0035b7be07
[memory] add model data tensor moving api (#503)
3 years ago
Jiarui Fang a445e118cf
[polish] polish singleton and global context (#500)
3 years ago
ver217 9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard (#495)
3 years ago
ver217 62b0a8d644
[zero] sharded optim support hybrid cpu adam (#486)
3 years ago
Jiarui Fang b334822163
[zero] polish sharded param name (#484)
3 years ago
Jiarui Fang 65c0f380c2
[format] polish name format for MOE (#481)
3 years ago
HELSON 7544347145
[MOE] add unitest for MOE experts layout, gradient handler and kernel (#469)
3 years ago
HELSON 84fd7c1d4d
add moe context, moe utilities and refactor gradient handler (#455)
3 years ago
Frank Lee af185b5519
[test] fixed amp convergence comparison test (#454)
3 years ago
ver217 a241f61b34
[zero] Update initialize for ZeRO (#458)
3 years ago
ver217 642846d6f9
update sharded optim and fix zero init ctx (#457)
3 years ago
Jiarui Fang e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" (#456)
3 years ago
ver217 8cf7ff08cf
polish code
3 years ago
ver217 46add4a5c5
remove surplus imports
3 years ago
ver217 57567ee768
update sharded optim and fix zero init ctx
3 years ago
Frank Lee f27d801a13
[test] optimized zero data parallel test (#452)
3 years ago
Jiarui Fang 0fcfb1e00d
[test] make zero engine test really work (#447)
3 years ago
Frank Lee bb2790cf0b
optimize engine and trainer test (#448)
3 years ago
Frank Lee b72b8445c6
optimized context test time consumption (#446)
3 years ago
Jiarui Fang 496cbb0760
[hotfix] fix initialize bug with zero (#442)
3 years ago
Jiarui Fang 17b8274f8a
[unitest] polish zero config in unittest (#438)
3 years ago
Jiarui Fang 640a6cd304
[refactory] refactory the initialize method for new zero design (#431)
3 years ago
ver217 fce9432f08
sync before creating empty grad
3 years ago
Jiarui Fang f9c762df85
[test] merge zero optim tests (#428)
3 years ago
Jiarui Fang 5d7dc3525b
[hotfix] run cpu adam unittest in pytest (#424)
3 years ago
Jiarui Fang adebb3e041
[zero] cuda margin space for OS (#418)
3 years ago
Jiarui Fang 56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER (#417)
3 years ago
Jiarui Fang 23ba3fc450
[zero] refactory ShardedOptimV2 init method (#416)
3 years ago
Frank Lee e79ea44247
[fp16] refactored fp16 optimizer (#392)
3 years ago
Jiarui Fang 21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system (#395)
3 years ago
Jiarui Fang a37bf1bc42
[hotfix] rm test_tensor_detector.py (#413)
3 years ago
Jiarui Fang 370f567e7d
[zero] new interface for ShardedOptimv2 (#406)
3 years ago
LuGY a9c27be42e
Added tensor detector (#393)
3 years ago
ver217 54fd37f0e0
polish unit test
3 years ago
Frank Lee 1e4bf85cdb
fixed bug in activation checkpointing test (#387)
3 years ago
Jiarui Fang 3af13a2c3e
[zero] polish ShardedOptimV2 unittest (#385)
3 years ago
Frank Lee 526a318032
[unit test] Refactored test cases with component func (#339)
3 years ago
LuGY de46450461
Added activation offload (#331)
3 years ago
Jiarui Fang b5f43acee3
[zero] find miss code (#378)
3 years ago
Jiarui Fang 6b6002962a
[zero] zero init context collect numel of model (#375)
3 years ago
jiaruifang d9217e1960
Revert "[zero] bucketized tensor cpu gpu copy (#368)"
3 years ago
Jiarui Fang 00670c870e
[zero] bucketized tensor cpu gpu copy (#368)
3 years ago
Jiarui Fang 44e4891f57
[zero] able to place params on cpu after zero init context (#365)
3 years ago
Jiarui Fang ea2872073f
[zero] global model data memory tracer (#360)
3 years ago
Jiarui Fang cb34cd384d
[test] polish zero related unitest (#351)
3 years ago
ver217 532ae79cb0
add test sharded optim with cpu adam (#347)
3 years ago
HELSON 425bb0df3f
Added Profiler Context to manage all profilers (#340)
3 years ago
ver217 d0ae0f2215
[zero] update sharded optim v2 (#334)
3 years ago
ver217 2b8cddd40e
skip bert in test engine
3 years ago
ver217 f5f0ad266e
fix bert unit test
3 years ago
jiaruifang d271f2596b
polish engine unitest
3 years ago
jiaruifang 354c0f9047
polish code
3 years ago
jiaruifang 4d94cd513e
adapting bert unitest interface
3 years ago
jiaruifang 7977422aeb
add bert for unitest and sharded model is not able to pass the bert case
3 years ago
ver217 1388671699
[zero] Update sharded model v2 using sharded param v2 (#323)
3 years ago
jiaruifang 799d105bb4
using pytest parametrize
3 years ago
jiaruifang dec24561cf
show pytest parameterize
3 years ago
Jiarui Fang 11bddb6e55
[zero] update zero context init with the updated test utils (#327)
3 years ago
Frank Lee 6268446b81
[test] refactored testing components (#324)
3 years ago
Jiarui Fang de0468c7a8
[zero] zero init context (#321)
3 years ago
1SAA 73bff11288
Added profiler communication operations
3 years ago
LuGY a3269de5c9
[zero] cpu adam kernel (#288)
3 years ago
Jiarui Fang 90d3aef62c
[zero] yet an improved sharded param (#311)
3 years ago
Jiarui Fang c9e7d9582d
[zero] polish shard strategy (#310)
3 years ago
ver217 36f9a74ab2
fix sharded param hook and unit test
3 years ago
ver217 001ca624dd
impl shard optim v2 and add unit test
3 years ago
Jiarui Fang 74f77e314b
[zero] a shard strategy in granularity of tensor (#307)
3 years ago
Jiarui Fang 80364c7686
[zero] sharded tensor (#305)
3 years ago
Jie Zhu d344689274
[profiler] primary memory tracer
3 years ago
Jiarui Fang e17e92c54d
Polish sharded parameter (#297)
3 years ago
ver217 7aef75ca42
[zero] add sharded grad and refactor grad hooks for ShardedModel (#287)
3 years ago
Frank Lee 27155b8513
added unit test for sharded optimizer (#293)
3 years ago
Frank Lee e17e54e32a
added buffer sync to naive amp model wrapper (#291)
3 years ago
Jiarui Fang 8d653af408
add a common util for hooks registered on parameter. (#292)
3 years ago
Jiarui Fang 5a560a060a
Feature/zero (#279)
3 years ago
1SAA 82023779bb
Added TPExpert for special situation
3 years ago
1SAA 219df6e685
Optimized MoE layer and fixed some bugs;
3 years ago
zbian 3dba070580
fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial
3 years ago
アマデウス 9ee197d0e9
moved env variables to global variables; (#215)
3 years ago
Jiarui Fang 569357fea0
add pytorch hooks (#179)
3 years ago
Frank Lee e2089c5c15
adapted for sequence parallel (#163)
3 years ago
ver217 7bf1e98b97
pipeline last stage supports multi output (#151)
3 years ago
ver217 96780e6ee4
Optimize pipeline schedule (#94)
3 years ago
アマデウス 01a80cd86d
Hotfix/Colossalai layers (#92)
3 years ago
アマデウス 0fedef4f3c
Layer integration (#83)
3 years ago
ver217 8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer (#80)
3 years ago
Frank Lee 91c327cb44
fixed zero level 3 dtype bug (#76)
3 years ago
Frank Lee cd9c28e055
added CI for unit testing (#69)
3 years ago
Frank Lee da01c234e1
Develop/experiments (#59)
3 years ago
Frank Lee 3defa32aee
Support TP-compatible Torch AMP and Update trainer API (#27)
3 years ago
アマデウス 3245a69fc2
cleaned test scripts
3 years ago
zbian 404ecbdcc6
Migrated project
3 years ago