YuliangLiu0306
0f3042363c
[tensor] shape consistency generate transform path and communication cost ( #1435 )
...
* [tensor] shape consistency output transform path and communication cost
* polish code
2 years ago
Boyuan Yao
5774fe0270
[fx] Use colossalai checkpoint and add offload recognition in codegen ( #1439 )
...
* [fx] Use colossalai.utils.checkpoint to replace torch.utils.checkpoint for offload activation and add offload annotation recognition in codegen
* [fx] Use colossalai.utils.checkpoint to replace torch.utils.checkpoint for offload activation and add offload annotation recognition in codegen
* Modification of test and add TODO in codegen
* [fx] Modification of colossal ckpt usage
* [fx] add gpc.destroy() to test_codegen
2 years ago
Kirigaya Kazuto
e9460b45c8
[engine/schedule] use p2p_v2 to reconstruct pipeline_schedule ( #1408 )
...
* support p2p communication with any type of object | pass test
* reconstruct pipeline schedule with p2p_v2.py (support communication with List[Any]) | pass test
* [communication] add p2p_v2.py to support communication with List[Any]
* Delete _pipeline_schedule_v2.py
* Delete test_cifar_with_data_pipeline_tensor_v2.py
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* Delete p2p_v2.py
* Delete test_boardcast_send_recv_v2.py
* Delete test_object_list_p2p_v2.py
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [communication] remove print code
* [communication] remove print code
* [engine/schedule] shorten the running time of the testing file to prevent cancellation in CI
2 years ago
Frank Lee
ae1b58cd16
[tensor] added linear implementation for the new sharding spec ( #1416 )
...
* [tensor] added linear implementation for the new sharding spec
* polish code
2 years ago
Super Daniel
d40a9392ba
[fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 . ( #1446 )
...
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* mend
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
* [fx] fix the false interpretation of algorithm 3 in https://arxiv.org/abs/1604.06174 .
* [fx] fix lowercase naming conventions.
2 years ago
ver217
821c6172e2
[utils] Impl clip_grad_norm for ColoTensor and ZeroOptimizer ( #1442 )
2 years ago
HELSON
b80340168e
[zero] add chunk_managerV2 for all-gather chunk ( #1441 )
2 years ago
Super Daniel
3b26516c69
[fx] add vanilla activation checkpoint search with test on resnet and densenet ( #1433 )
...
* [fx] activation checkpointing using Chen strategies.
* [fx] add test for ckpt_solver_chen
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add vanilla activation checkpoint search with test on resnet and densenet
* [fx] add a namespace code for solver_chen.
2 years ago
Jiarui Fang
30b4dd17c0
[FAW] export FAW in _ops ( #1438 )
2 years ago
HELSON
9056677b13
[zero] add chunk size searching algorithm for parameters in different groups ( #1436 )
2 years ago
HELSON
039b7ed3bc
[polish] add update directory in gemini; rename AgChunk to ChunkV2 ( #1432 )
2 years ago
Super Daniel
f20cb4e893
[fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages ( #1425 )
...
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
* [fx] modify the calculation of node_size in MetaInfoProp for activation checkpointing usages
2 years ago
Jiarui Fang
89c434a0a6
[polish] add test_ops directory ( #1431 )
2 years ago
Jiarui Fang
10b3df65c8
[FAW] move coloparam setting in test code. ( #1429 )
2 years ago
Jiarui Fang
cb98cf5558
[FAW] parallel FreqAwareEmbedding ( #1424 )
2 years ago
HELSON
0d212183c4
[zero] add has_inf_or_nan in AgChunk; enhance the unit test of AgChunk ( #1426 )
2 years ago
YuliangLiu0306
33f0744d51
[tensor] add shape consistency feature to support auto spec transform ( #1418 )
...
* [tensor] add shape consistency feature to support auto sharding spec transform.
* [tensor] remove unused argument in simulator, add doc string for target pair.
2 years ago
HELSON
4fb3c52cf0
[zero] add unit test for AgChunk's append, close, access ( #1423 )
2 years ago
Jiarui Fang
d209aff684
Add FreqAwareEmbeddingBag ( #1421 )
2 years ago
Jiarui Fang
504419d261
[FAW] add cache manager for the cached embedding ( #1419 )
2 years ago
Kirigaya Kazuto
44fd3c83ab
[communication] add p2p_v2.py to support communication with List[Any] ( #1407 )
...
* support p2p communication with any type of object | pass test
* reconstruct pipeline schedule with p2p_v2.py (support communication with List[Any]) | pass test
* [communication] add p2p_v2.py to support communication with List[Any]
* Delete _pipeline_schedule_v2.py
* Delete test_cifar_with_data_pipeline_tensor_v2.py
* [engine/schedule] use p2p_v2 to reconstruct pipeline_schedule
* [communication] remove print code
* [communication] remove print code
2 years ago
YuliangLiu0306
7c96055c68
[tensor]build sharding spec to replace distspec in future. ( #1405 )
2 years ago
ver217
12b4887097
[hotfix] fix CPUAdam kernel nullptr ( #1410 )
2 years ago
YuliangLiu0306
0442f940f0
[device] add DeviceMesh class to support logical device layout ( #1394 )
...
* [device] add DeviceMesh class to support logical device layout
* polish code
* add doc string
2 years ago
HELSON
4e98e938ce
[zero] alleviate memory usage in ZeRODDP state_dict ( #1398 )
2 years ago
Frank Lee
adf5054ff8
[fx] fixed torchaudio conformer tracing ( #1392 )
2 years ago
Frank Lee
7d6293927f
[fx] patched torch.max and data movement operator ( #1391 )
...
* [fx] patched torch.max and data movement operator
* polish code
2 years ago
HELSON
527758b2ae
[hotfix] fix a running error in test_colo_checkpoint.py ( #1387 )
2 years ago
ver217
8dced41ad0
[zero] zero optim state_dict takes only_rank_0 ( #1384 )
...
* zero optim state_dict takes only_rank_0
* fix unit test
2 years ago
ver217
7d5d628e07
[DDP] test ddp state dict uses more strict threshold ( #1382 )
2 years ago
ver217
828b9e5e0d
[hotfix] fix zero optim save/load state dict ( #1381 )
2 years ago
Super Daniel
be229217ce
[fx] add torchaudio test ( #1369 )
...
* [fx]add torchaudio test
* [fx]add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test
* [fx] add torchaudio test and test patches
* Delete ~
* [fx] add patches and patches test
* [fx] add patches and patches test
* [fx] fix patches
* [fx] fix rnn patches
* [fx] fix rnn patches
* [fx] fix rnn patches
* [fx] fix rnn patches
* [fx] merge upstream
* [fx] fix import errors
2 years ago
Boyuan Yao
bb640ec728
[fx] Add colotracer compatibility test on torchrec ( #1370 )
2 years ago
ver217
c415240db6
[nvme] CPUAdam and HybridAdam support NVMe offload ( #1360 )
...
* impl nvme optimizer
* update cpu adam
* add unit test
* update hybrid adam
* update docstr
* add TODOs
* update CI
* fix CI
* fix CI
* fix CI path
* fix CI path
* fix CI path
* fix install tensornvme
* fix CI
* fix CI path
* fix CI env variables
* test CI
* test CI
* fix CI
* fix nvme optim __del__
* fix adam __del__
* fix nvme optim
* fix CI env variables
* fix nvme optim import
* test CI
* test CI
* fix CI
2 years ago
HELSON
87775a0682
[colotensor] use cpu memory to store state_dict ( #1367 )
2 years ago
Frank Lee
cd063ac37f
[fx] added activation checkpoint codegen support for torch < 1.12 ( #1359 )
2 years ago
HELSON
4417804129
[unit test] add megatron init test in zero_optim ( #1358 )
2 years ago
HELSON
7a065dc9f6
[hotfix] fix megatron_init in test_gpt2.py ( #1357 )
2 years ago
Frank Lee
644582eee9
[fx] added activation checkpoint codegen ( #1355 )
2 years ago
Frank Lee
05fae1fd56
[fx] added activation checkpointing annotation ( #1349 )
...
* [fx] added activation checkpointing annotation
* polish code
* polish code
2 years ago
HELSON
7a8702c06d
[colotensor] add Tensor.view op and its unit test ( #1343 )
...
[colotensor] add megatron initialization for gpt2
2 years ago
YuliangLiu0306
942c8cd1fb
[fx] refactor tracer to trace complete graph ( #1342 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] refactor tracer to trace complete graph
* add comments and solve conflicts.
2 years ago
Frank Lee
2cc1175c76
[fx] tested the complete workflow for auto-parallel ( #1336 )
...
* [fx] tested the complete workflow for auto-parallel
* polish code
* polish code
* polish code
2 years ago
YuliangLiu0306
4631fef8a0
[fx]refactor tracer ( #1335 )
2 years ago
HELSON
bf5066fba7
[refactor] refactor ColoTensor's unit tests ( #1340 )
2 years ago
HELSON
f92c100ddd
[checkpoint] use gather_tensor in checkpoint and update its unit test ( #1339 )
2 years ago
Frank Lee
f3ce7b8336
[fx] recovered skipped pipeline tests ( #1338 )
2 years ago
ver217
0c51ff2c13
[hotfix] ZeroDDP use new process group ( #1333 )
...
* process group supports getting ranks in group
* chunk mgr receives a process group
* update unit test
* fix unit tests
2 years ago
Frank Lee
75abc75c15
[fx] fixed compatibility issue with torch 1.10 ( #1331 )
2 years ago
Frank Lee
169954f87e
[test] removed outdated unit test for meta context ( #1329 )
2 years ago
ver217
7a05367101
[hotfix] shared model returns cpu state_dict ( #1328 )
2 years ago
Frank Lee
b2475d8c5c
[fx] fixed unit tests for torch 1.12 ( #1327 )
2 years ago
HELSON
d49708ae43
[hotfix] fix ddp for unit test test_gpt2 ( #1326 )
2 years ago
Frank Lee
250be4d31e
[utils] integrated colotensor with lazy init context ( #1324 )
...
* [utils] integrated colotensor with lazy init context
* polish code
* polish code
* polish code
2 years ago
YuliangLiu0306
e8acf55e8b
[fx] add balanced policy v2 ( #1251 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx] add balanced policy v2
* add unittest
2 years ago
XYE
ca2d3f284f
[fx] Add unit test and fix bugs for transform_mlp_pass ( #1299 )
...
* add test and fix bugs
* add functions back
* add comments
2 years ago
HELSON
1b41686461
[hotfix] fix unit test test_module_spec ( #1321 )
2 years ago
Jiarui Fang
9e4c6449b0
[checkpoint] add ColoOptimizer checkpointing ( #1316 )
2 years ago
Jiarui Fang
85f933b58b
[Optimizer] Remove useless ColoOptimizer ( #1312 )
2 years ago
Jiarui Fang
9f10524313
[Optimizer] polish the init method of ColoOptimizer ( #1310 )
2 years ago
HELSON
36086927e1
[hotfix] fix ColoTensor GPT2 unitest ( #1309 )
2 years ago
Jiarui Fang
3ef3791a3b
[checkpoint] add test for bert and hotfix save bugs ( #1297 )
2 years ago
Jiarui Fang
bd71e2a88b
[hotfix] add missing file ( #1308 )
2 years ago
Frank Lee
4f4d8c3656
[fx] added apex normalization to patched modules ( #1300 )
...
* [fx] added apex normalization to patched modules
* remove unused imports
2 years ago
Jiarui Fang
4165eabb1e
[hotfix] remove potential circular import ( #1307 )
...
* make it faster
* [hotfix] remove circular import
2 years ago
YuliangLiu0306
93a75433df
[hotfix] skip some unittest due to CI environment. ( #1301 )
2 years ago
HELSON
260a55804a
[hotfix] fix shape error in backward when using ColoTensor ( #1298 )
2 years ago
Frank Lee
7e8114a8dd
[hotfix] skipped unsafe test cases ( #1282 )
2 years ago
Jiarui Fang
79fe7b027a
[hotfix] test model unittest hotfix ( #1281 )
2 years ago
Jiarui Fang
e56731e916
[hotfix] test_gpt.py duplicated ( #1279 )
...
* make it faster
* [hotfix] torchvision fx tests
* [hotfix] rename duplicated named test_gpt.py
2 years ago
HELSON
abba4d84e1
[hotfix] fix bert model test in unitests ( #1272 )
2 years ago
YuliangLiu0306
01ea68b2e6
[tests] remove T5 test skip decorator ( #1271 )
2 years ago
Jiarui Fang
ca9d5ee91c
[hotfix] torchvision fx unit tests missing pytest import ( #1277 )
2 years ago
Jiarui Fang
c92f84fcdb
[tensor] distributed checkpointing for parameters ( #1240 )
2 years ago
Frank Lee
4a09fc0947
[fx] fixed tracing with apex-based T5 model ( #1252 )
...
* [fx] fixed tracing with apex-based T5 model
* polish code
* polish code
2 years ago
YuliangLiu0306
97d713855a
[fx] methods to get fx graph property. ( #1246 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* manipulation
* [fx]add graph manipulation methods.
* [fx]methods to get fx graph property.
* add unit test
* add docstring to explain top node and leaf node in this context
2 years ago
YuliangLiu0306
30b4fc0eb0
[fx]add split module pass and unit test from pipeline passes ( #1242 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx]add split module pass and unit test from pipeline passes
* fix MNASNet bug
* polish
2 years ago
Jiarui Fang
1aad903c15
[tensor] redistribute among different process groups ( #1247 )
...
* make it faster
* [tensor] rename convert_to_dist -> redistribute
* [tensor] ShardSpec and ReplicaSpec
* [tensor] redistribute among diff pgs
* polish code
2 years ago
Jiarui Fang
9bcd2fd4af
[tensor] a shorter shard and replicate spec ( #1245 )
2 years ago
Jiarui Fang
2699dfbbfd
[rename] convert_to_dist -> redistribute ( #1243 )
2 years ago
HELSON
f6add9b720
[tensor] redirect .data.__get__ to a tensor instance ( #1239 )
2 years ago
Jiarui Fang
20da6e48c8
[checkpoint] save sharded optimizer states ( #1237 )
2 years ago
Jiarui Fang
4a76084dc9
[tensor] add zero_like colo op, important for Optimizer ( #1236 )
2 years ago
Jiarui Fang
3b500984b1
[tensor] fix some unittests ( #1234 )
2 years ago
HELSON
0453776def
[tensor] fix an assertion in colo_tensor cross_entropy ( #1232 )
2 years ago
Jiarui Fang
0e199d71e8
[hotfix] fx get comm size bugs ( #1233 )
...
* init a checkpoint dir
* [checkpoint]support resume for cosinewarmuplr
* [checkpoint]add unit test
* fix some bugs but still not OK
* fix bugs
* make it faster
* [checkpoint]support generalized scheduler
* polish
* [tensor] torch function return colotensor
* polish
* fix bugs
* remove debug info
* polish
* polish
* [tensor] test_model pass unittests
* polish
* [hotfix] fx get comm size bug
Co-authored-by: ZhaoYi1222 <zhaoyi9499@gmail.com>
2 years ago
HELSON
42ab36b762
[tensor] add unit test for colo_tensor 1DTP cross_entropy ( #1230 )
2 years ago
Yi Zhao
04537bf83e
[checkpoint]support generalized scheduler ( #1222 )
2 years ago
Jiarui Fang
a98319f023
[tensor] torch function return colotensor ( #1229 )
2 years ago
Frank Lee
5581170890
[fx] fixed huggingface OPT and T5 results misalignment ( #1227 )
2 years ago
YuliangLiu0306
2b7dca44b5
[fx]get communication size between partitions ( #1224 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx]get communication size between partitions.
* polish
2 years ago
Frank Lee
84f2298a96
[fx] added patches for tracing swin transformer ( #1228 )
2 years ago
Frank Lee
37fcf96b7f
[fx] fixed timm tracing result misalignment ( #1225 )
2 years ago
Frank Lee
b6cb5a47ad
[fx] added timm model tracing testing ( #1221 )
2 years ago
Jiarui Fang
15d988f954
[tensor] sharded global process group ( #1219 )
2 years ago
Frank Lee
11973d892d
[fx] added torchvision model tracing testing ( #1216 )
...
* [fx] added torchvision model tracing testing
* remove unused imports
2 years ago
Jiarui Fang
52736205d9
[checkpoint] make unit test faster ( #1217 )
2 years ago
Jiarui Fang
f38006ea83
[checkpoint] checkpoint for ColoTensor Model ( #1196 )
2 years ago
Jiarui Fang
ae7d3f4927
[refactor] move process group from _DistSpec to ColoTensor. ( #1203 )
2 years ago
Frank Lee
5da87ce35d
[fx] added testing for all albert variants ( #1211 )
2 years ago
Frank Lee
2d13a45a3b
[fx] added testing for all gpt variants ( #1210 )
...
* [fx] added testing for all gpt variants
* polish code
* polish code
2 years ago
YuliangLiu0306
189946c5c4
[fx]add uniform policy ( #1208 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [fx]add uniform policy
2 years ago
Frank Lee
426a279ce7
[fx] added testing for all bert variants ( #1207 )
...
* [fx] added testing for all bert variants
* polish code
2 years ago
Frank Lee
f7878f465c
[fx] supported model tracing for huggingface bert ( #1201 )
...
* [fx] supported model tracing for huggingface bert
* polish test
2 years ago
Jiarui Fang
060b917daf
[refactor] remove gpc dependency in colotensor's _ops ( #1189 )
2 years ago
Frank Lee
abf6a262dc
[fx] added module patch for pooling layers ( #1197 )
2 years ago
YuliangLiu0306
63d2a93878
[context]support arbitrary module materialization. ( #1193 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [context]support arbitrary module materialization.
* [test]add numerical check for lazy init context.
2 years ago
YuliangLiu0306
2053e138a2
[context]use meta tensor to init model lazily. ( #1187 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [context]use meta tensor to init model lazily.
* polish
* make module with device kwargs bypass the normal init.
* change unit test to adapt updated context.
2 years ago
Frank Lee
2c8c05675d
[fx] patched conv and normalization ( #1188 )
2 years ago
Frank Lee
6d86f1bc91
[fx] supported data-dependent control flow in model tracing ( #1185 )
...
* [fx] supported data-dependent control flow in model tracing
* polish code
2 years ago
Jiarui Fang
c463f8adf9
[tensor] remove gpc in tensor tests ( #1186 )
2 years ago
Jiarui Fang
372f791444
[refactor] move chunk and chunkmgr to directory gemini ( #1182 )
2 years ago
ver217
6b2f2ab9bb
[ddp] ColoDDP uses bucket all-reduce ( #1177 )
...
* add reducer
* update colo ddp with reducer
* polish unit test
* polish unit test
2 years ago
Jiarui Fang
7487215b95
[ColoTensor] add independent process group ( #1179 )
2 years ago
Jiarui Fang
1b657f9ce1
[tensor] revert local view back ( #1178 )
2 years ago
Jiarui Fang
0dd4e2bbfb
[Tensor] rename some APIs in TensorSpec and Polish view unittest ( #1176 )
2 years ago
Jiarui Fang
aa7bef73d4
[Tensor] distributed view supports inter-process hybrid parallel ( #1169 )
2 years ago
ver217
9e1daa63d2
[zero] sharded optim supports loading local state dict ( #1170 )
...
* sharded optim supports loading local state dict
* polish code
* add unit test
2 years ago
ver217
561e90493f
[zero] zero optim supports loading local state dict ( #1171 )
...
* zero optim supports loading local state dict
* polish code
* add unit test
2 years ago
Jiarui Fang
4b9bba8116
[ColoTensor] rename APIs and add output_replicate to ComputeSpec ( #1168 )
2 years ago
Jiarui Fang
f4ef224358
[Tensor] remove ParallelAction, use ComputeSpec instead ( #1166 )
2 years ago
Jiarui Fang
177c374401
remove gather out in parallel action ( #1163 )
2 years ago
Jiarui Fang
07f9c781f9
[graph] improve the graph building. ( #1157 )
2 years ago
ver217
22717a856f
[tensor] add embedding bag op ( #1156 )
2 years ago
ver217
ae86151968
[tensor] add more element-wise ops ( #1155 )
...
* add more element-wise ops
* update test_op
* polish unit test
2 years ago
ver217
ffa025e120
[tensor] dist spec s2s uses all-to-all ( #1136 )
...
* dist spec s2s uses all-to-all
* update unit test
* add sanity check
* polish unit tests with titans
* add sanity check for DistMgr
* add sanity check
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2 years ago
Jiarui Fang
ff644ee5e4
polish unit tests with titans ( #1152 )
2 years ago
Jiarui Fang
8cdce0399c
[ColoTensor] improves init functions. ( #1150 )
2 years ago
ver217
8106d7b8c7
[ddp] refactor ColoDDP and ZeroDDP ( #1146 )
...
* ColoDDP supports overwriting default process group
* rename ColoDDPV2 to ZeroDDP
* add docstr for ZeroDDP
* polish docstr
2 years ago
ver217
d26902645e
[ddp] add save/load state dict for ColoDDP ( #1127 )
...
* add save/load state dict for ColoDDP
* add unit test
* refactor unit test folder
* polish unit test
* rename unit test
2 years ago
ver217
789cad301b
[hotfix] fix param op hook ( #1131 )
...
* fix param op hook
* update zero tp test
* fix bugs
2 years ago
ver217
f0a954f16d
[ddp] add set_params_to_ignore for ColoDDP ( #1122 )
...
* add set_params_to_ignore for ColoDDP
* polish code
* fix zero hook v2
* add unit test
* polish docstr
2 years ago
YuliangLiu0306
fcf55777dd
[fx]add autoparallel passes ( #1121 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* feature/add autoparallel passes
2 years ago
Frank Lee
16302a5359
[fx] added unit test for coloproxy ( #1119 )
...
* [fx] added unit test for coloproxy
* polish code
* polish code
2 years ago
ver217
7d14b473f0
[gemini] gemini mgr supports "cpu" placement policy ( #1118 )
...
* update gemini mgr
* update chunk
* add docstr
* polish placement policy
* update test chunk
* update test zero
* polish unit test
* remove useless unit test
2 years ago
Frank Lee
53297330c0
[test] fixed hybrid parallel test case on 8 GPUs ( #1106 )
2 years ago
ver217
1f894e033f
[gemini] zero supports gemini ( #1093 )
...
* add placement policy
* add gemini mgr
* update mem stats collector
* update zero
* update zero optim
* fix bugs
* zero optim monitor os
* polish unit test
* polish unit test
* add assert
3 years ago
Frank Lee
2b2dc1c86b
[pipeline] refactor the pipeline module ( #1087 )
...
* [pipeline] refactor the pipeline module
* polish code
3 years ago
Frank Lee
bad5d4c0a1
[context] support lazy init of module ( #1088 )
...
* [context] support lazy init of module
* polish code
3 years ago
ver217
be01db37c8
[tensor] refactor chunk mgr and impl MemStatsCollectorV2 ( #1077 )
...
* polish chunk manager
* polish unit test
* impl add_extern_static_tensor for chunk mgr
* add mem stats collector v2
* polish code
* polish unit test
* polish code
* polish get chunks
3 years ago
Ziyue Jiang
b3a03e4bfd
[Tensor] fix equal assert ( #1091 )
...
* fix equal assert
* polish
3 years ago
Frank Lee
50ec3a7e06
[test] skip tests when not enough GPUs are detected ( #1090 )
...
* [test] skip tests when not enough GPUs are detected
* polish code
* polish code
3 years ago
Frank Lee
65ee6dcc20
[test] ignore 8 gpu test ( #1080 )
...
* [test] ignore 8 gpu test
* polish code
* polish workflow
* polish workflow
3 years ago
Ziyue Jiang
0653c63eaa
[Tensor] 1d row embedding ( #1075 )
...
* Add CPU 1d row embedding
* polish
3 years ago
ver217
1b17859328
[tensor] chunk manager monitor mem usage ( #1076 )
3 years ago
Ziyue Jiang
4fc748f69b
[Tensor] fix optimizer for CPU parallel ( #1069 )
3 years ago
Jiarui Fang
49832b2344
[refactor] add nn.parallel module ( #1068 )
3 years ago
Jiarui Fang
a00644079e
reorganize colotensor directory ( #1062 )
...
* reorganize colotensor directory
* polish code
3 years ago
Ziyue Jiang
df9dcbbff6
[Tensor] add hybrid device demo and fix bugs ( #1059 )
3 years ago
YuliangLiu0306
b167258b6a
[pipeline]refactor ppschedule to support tensor list ( #1050 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* refactor ppschedule to support tensor list
* polish
3 years ago
ver217
51b9a49655
[zero] add zero optimizer for ColoTensor ( #1046 )
...
* add zero optimizer
* torch ok
* unit test ok
* polish code
* fix bugs
* polish unit test
* polish zero optim
* polish colo ddp v2
* refactor folder structure
* add comment
* polish unit test
* polish zero optim
* polish unit test
3 years ago
ver217
7faef93326
fix dist spec mgr ( #1045 )
3 years ago
ver217
9492a561c3
[tensor] ColoTensor supports ZeRo ( #1015 )
...
* impl chunk manager
* impl param op hook
* add reduce_chunk
* add zero hook v2
* add zero dp
* fix TensorInfo
* impl load balancing when using zero without chunk
* fix zero hook
* polish chunk
* fix bugs
* ddp ok
* zero ok
* polish code
* fix bugs about load balancing
* polish code
* polish code
* add end-to-end test
* polish code
* polish code
* polish code
* fix typo
* add test_chunk
* fix bugs
* fix bugs
* polish code
3 years ago
YuliangLiu0306
9feff0f760
[titans]remove model zoo ( #1042 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* rm model zoo
3 years ago
Ziyue Jiang
7c530b9de2
[Tensor] add Parameter inheritance for ColoParameter ( #1041 )
...
* add Parameter inheritance for ColoParameter
* remove tricks
* remove tricks
* polish
* polish
3 years ago
Ziyue Jiang
6c5996a56e
[Tensor] add module check and bert test ( #1031 )
...
* add Embedding
* Add bert test
* polish
* add check module test
* polish
* polish
* polish
* polish
3 years ago
YuliangLiu0306
7106bd671d
[p2p]add object list send/recv ( #1024 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [p2p]add object list send recv
* refactor for code reusability
* polish
3 years ago
Ziyue Jiang
32291dd73f
[Tensor] add module handler for linear ( #1021 )
...
* add module spec for linear
* polish
* polish
* polish
3 years ago
ver217
cefc29ff06
[tensor] impl ColoDDP for ColoTensor ( #1009 )
...
* impl ColoDDP for ColoTensor
* polish code
3 years ago
ver217
a3b66f6def
[tensor] refactor parallel action ( #1007 )
...
* refactor parallel action
* polish unit tests
3 years ago
ver217
8e3d0ad8f1
[unit test] refactor test tensor ( #1005 )
...
* polish test_gpt
* update op unit tests
* update test model
3 years ago
ver217
ad536e308e
[tensor] refactor colo-tensor ( #992 )
...
* refactor colo-tensor and update linear op
* polish code
* polish code
* update ops and unit tests
* update unit tests
* polish code
* rename dist_spec module
* polish code
* polish code
* remove unneeded import
* fix pipelinable
3 years ago
ver217
c2fdc6a011
[tensor] derive compute pattern from dist spec ( #971 )
...
* derive compute pattern from dist spec
* polish code
3 years ago
Ziyue Jiang
797a9dc5a9
add DistSpec for loss and test_model ( #947 )
3 years ago
ver217
67c33f57eb
[tensor] design DistSpec and DistSpecManager for ColoTensor ( #934 )
...
* add dist spec
* update linear op
* polish code
* polish code
* update embedding op
* polish unit tests
* polish unit tests
* polish comments
* polish code
* add test_dist_spec_mgr
* polish code
* refactor folder structure
* polish unit tests
* add get_process_group() for TensorSpec
* polish code
3 years ago
Ziyue Jiang
830d3bca26
[Tensor] add optimizer to bert test ( #933 )
...
* add optimizer to bert test
* polish
3 years ago
Ziyue Jiang
d73c2b1d79
[Tensor] fix init context ( #931 )
...
* change torch.Parameter to ColoParameter
* fix post assignment for init context
* polish
* polish
3 years ago
Ziyue Jiang
dfc88b85ea
[Tensor] simplify named param ( #928 )
...
* simplify ColoModulize
* simplify ColoModulize
* polish
* polish
3 years ago
ver217
45b9124df4
[tensor] hijack addmm for colo tensor ( #923 )
...
* hijack addmm for colo tensor
* fix bugs
* polish unit test
* polish comments
3 years ago
Jiarui Fang
534afb018a
test pretrain loading on multi-process ( #922 )
3 years ago
Ziyue Jiang
c195d2814c
[Tensor] add from_pretrained support and bert pretrained test ( #921 )
...
* add from_pretrained support and test
* polish
* polish
* polish
* polish
3 years ago
Jiarui Fang
845856ea29
[Graph] building computing graph with ColoTensor, Linear only ( #917 )
3 years ago
Ziyue Jiang
75d221918a
[Tensor] add 1d vocab loss ( #918 )
...
* add 1d vocab loss
* polish
3 years ago
Ziyue Jiang
dfaff4e243
[Tensor] fix test_model ( #916 )
...
* polish test_model
* polish
3 years ago
Jiarui Fang
ed6426c300
[Tensor] polish model test ( #915 )
3 years ago
Ziyue Jiang
0fab86b12a
[Tensor] add a basic bert. ( #911 )
...
* add base bert test
* Add bert test
* polish
* remove test_bert
* polish
3 years ago
Jiarui Fang
ab95ec9aea
[Tensor] init ColoParameter ( #914 )
3 years ago
Ziyue Jiang
193d629311
update pytest.mark.parametrize in tensor tests ( #913 )
3 years ago
Ziyue Jiang
f593a5637e
[Tensor] add embedding tp1d row ( #904 )
3 years ago
Ziyue Jiang
2c0d19d755
[Tensor] add ColoTensor TP1Dcol Embedding ( #899 )
3 years ago
Jiarui Fang
d16671da75
[Tensor] initialize the ColoOptimizer ( #898 )
...
* [Tensor] activation is an attr of ColoTensor
* [Tensor] add optimizer
* only detach parameters in context
* polish code
3 years ago
Jiarui Fang
e76f76c08b
[Tensor] test parameters() as member function ( #896 )
3 years ago
Ziyue Jiang
cb182da7c5
[tensor] refine linear and add gather for layernorm ( #893 )
...
* refine linear and add function to ColoTensor
* add gather for layernorm
* polish
* polish
3 years ago
Jiarui Fang
26c49639d8
[Tensor] overriding parameters() for Module using ColoTensor ( #889 )
3 years ago
Ziyue Jiang
1d0aba4153
[tensor] add ColoTensor 1Dcol ( #888 )
3 years ago
Jiarui Fang
a0e5971692
[Tensor] test model check results for a simple net ( #887 )
3 years ago
Jiarui Fang
72cdc06875
[Tensor] make ColoTensor more robust for getattr ( #886 )
...
* [Tensor] make ColoTensor more robust for getattr
* polish
* polish
3 years ago
Ziyue Jiang
9bc5a77c31
[tensor] wrap function in the torch_tensor to ColoTensor ( #881 )
3 years ago
Jiarui Fang
7f76517a85
[Tensor] make a simple net works with 1D row TP ( #879 )
3 years ago
ver217
c4d903e64a
[gemini] accelerate adjust_layout() ( #878 )
...
* add lru cache
* polish code
* update unit test
* fix sharded optim
3 years ago
Jiarui Fang
909211453b
[Tensor] Add some attributes to ColoTensor ( #877 )
...
* [Tensor] add some function to ColoTensor
* torch.allclose
* rm torch.add
3 years ago
Jiarui Fang
e43f83aa5c
[Tensor] get named parameters for model using ColoTensors ( #874 )
3 years ago
Jiarui Fang
96211c2cc8
[tensor] customized op returns ColoTensor ( #875 )
...
* [tensor] customized op returns ColoTensor
* polish
* polish code
3 years ago
Ziyue Jiang
26d4ab8b03
[Tensor] Add function to spec and update linear 1Drow and unit tests ( #869 )
3 years ago
Jiarui Fang
1190b2c4a4
[tensor] add cross_entropy_loss ( #868 )
3 years ago
HELSON
3107817172
[gemini] add stateful tensor container ( #867 )
3 years ago
Jiarui Fang
d01d3b8cb0
colo init context add device attr. ( #866 )
3 years ago
Jiarui Fang
126ba573a8
[Tensor] add layer norm Op ( #852 )
3 years ago
Frank Lee
1258af71cc
[ci] cache cuda extension ( #860 )
3 years ago
Ziyue Jiang
bcc8655021
[Tensor ] Add 1Drow weight reshard by spec ( #854 )
3 years ago
Jiarui Fang
62f059251b
[Tensor] init a tp network training unittest ( #849 )
3 years ago
Ziyue Jiang
2a0a427e04
[tensor]add assert for colo_tensor 1Drow ( #846 )
3 years ago
Ziyue Jiang
05023ecfee
[Tensor] TP Linear 1D row ( #843 )
3 years ago
HELSON
e5ea3fdeef
[gemini] add GeminiMemoryManger ( #832 )
...
* refactor StatefulTensor, tensor utilities
* add unitest for GeminiMemoryManager
3 years ago
YuliangLiu0306
35ea6e1023
[pipelinable]use pipelinable context to initialize non-pipeline model ( #816 )
...
* [CLI] add CLI launcher
* Revert "[CLI] add CLI launcher"
This reverts commit df7e6506d4.
* [pipeline]add module lazy init feature to support large model initialization.
* [pipeline]add to_layer_list and partition method to support arbitrary non-pp model
* refactor the module structure
* polish
* [pipelinable]add unit test for pipelinable
* polish
* polish
* Fix CodeFactor issues.
3 years ago
Jiarui Fang
ea0a2ed25f
[hotfix] the bug of numel() in ColoTensor ( #845 )
3 years ago
Jiarui Fang
8789850eea
Init Context supports lazy allocation of model memory ( #842 )
3 years ago
Frank Lee
943982d29a
[unittest] refactored unit tests for change in dependency ( #838 )
3 years ago
Frank Lee
01e9f834f5
[dependency] removed torchvision ( #833 )
...
* [dependency] removed torchvision
* fixed transforms
3 years ago
Jiarui Fang
cb5a4778e1
Revert "[WIP] Applying ColoTensor on TP-1D-row Linear. ( #831 )" ( #835 )
...
This reverts commit ac88de6dfc.
3 years ago
Jiarui Fang
ac88de6dfc
[WIP] Applying ColoTensor on TP-1D-row Linear. ( #831 )
...
* revert zero tensors back
* [tensor] init row 1d linear
3 years ago
Jiarui Fang
294a6060d0
[tensor] ZeRO use ColoTensor as the base class. ( #828 )
...
* [refactor] moving InsertPostInitMethodToModuleSubClasses to utils.
* [tensor] ZeRO use ColoTensor as the base class.
* polish
3 years ago
Ziyue Jiang
8e6fdb4f29
[tensor] fix test_linear ( #826 )
3 years ago
Ziyue Jiang
1a9e2c2dff
[tensor] fix kwargs in colo_tensor torch_function ( #825 )
3 years ago
Jiarui Fang
2ecc3d7a55
[tensor] lazy init ( #823 )
3 years ago
Jiarui Fang
660d2d1f1b
[Tensor] apply ColoTensor on Torch functions ( #821 )
...
* Revert "[zero] add ZeroTensorShardStrategy (#793 )"
This reverts commit 88759e289e.
* [gemini] set cpu memory capacity
* [log] local throughput collecting
* polish
* polish
* polish
* polish code
* polish
* polish code
* add a new tensor structure and override linear for it
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* [tensor] renaming and reorganize directory structure.
* rm useless dir
* polish
* polish
* [tensor] handle the function not wrapped
3 years ago
Jiarui Fang
0ce8924ceb
[tensor] reorganize files ( #820 )
3 years ago
Jiarui Fang
ab962b9735
[gemini] a new tensor structure ( #818 )
...
* Revert "[zero] add ZeroTensorShardStrategy (#793 )"
This reverts commit 88759e289e.
* [gemini] set cpu memory capacity
* [log] local throughput collecting
* polish
* polish
* polish
* polish code
* polish
* polish code
* add a new tensor structure and override linear for it
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
3 years ago
Jiarui Fang
e761ad2cd7
Revert "[zero] add ZeroTensorShardStrategy ( #793 )" ( #806 )
3 years ago
HELSON
88759e289e
[zero] add ZeroTensorShardStrategy ( #793 )
3 years ago
Jiarui Fang
681addb512
[refactor] moving grad acc logic to engine ( #804 )
3 years ago
Jiarui Fang
4d9332b4c5
[refactor] moving memtracer to gemini ( #801 )
3 years ago
HELSON
4c4388c46e
[hotfix] fix memory leak in zero ( #781 )
3 years ago
Frank Lee
5a1a095b92
[test] refactored with the new rerun decorator ( #763 )
...
* [test] refactored with the new rerun decorator
* polish test case
3 years ago
Jiarui Fang
10ef8afdd2
[gemini] init gemini individual directory ( #754 )
3 years ago
ver217
dcca614eee
[hotfix] fix test_stateful_tensor_mgr ( #762 )
3 years ago
ver217
a93a7d7364
[hotfix] fix reuse_fp16_shard of sharded model ( #756 )
...
* fix reuse_fp16_shard
* disable test stm
* polish code
3 years ago
HELSON
84c6700b2a
[zero] refactor memstats_collector ( #746 )
3 years ago
ver217
e396bb71f2
[zero] add tensor placement policies ( #743 )
...
* add tensor placement policies
* polish comments
* polish comments
* update moe unit tests
3 years ago
HELSON
22c4b88d56
[zero] refactor ShardedParamV2 for convenience ( #742 )
3 years ago
Frank Lee
f4f42d4c3c
[bug] fixed DDP compatibility with torch 1.8 ( #739 )
3 years ago
Jiarui Fang
53cb584808
[utils] correct cpu memory used and capacity in the context of multi-process ( #726 )
3 years ago
HELSON
b9b469ea50
[moe] add checkpoint for moe zero test ( #729 )
3 years ago
FrankLeeeee
e88a498c9c
[test] removed trivial outdated test
3 years ago
FrankLeeeee
62b4ce7326
[test] added missing decorators to model checkpointing tests
3 years ago
Jiarui Fang
4d90a7b513
[refactor] zero directory ( #724 )
3 years ago
Frank Lee
20ab1f5520
[bug] fixed broken test_found_inf ( #725 )
3 years ago
Jiarui Fang
193dc8dacb
[refactor] refactor the memory utils ( #715 )
3 years ago
HELSON
dbd96fe90a
[zero] check whether gradients have inf and nan in gpu ( #712 )
3 years ago
HELSON
a9b8300d54
[zero] improve adaptability for non-sharded parameters ( #708 )
...
* adapt post grad hooks for non-sharded parameters
* adapt optimizer for non-sharded parameters
* offload gradients for non-replicated parameters
3 years ago
ver217
ab8c6b4a0e
[zero] refactor memstats collector ( #706 )
...
* refactor memstats collector
* fix disposable
* polish code
3 years ago
HELSON
ee112fe1da
[zero] adapt zero hooks for unsharded module ( #699 )
3 years ago
ver217
3c9cd5bb5e
[zero] stateful tensor manager ( #687 )
...
* [WIP] stateful tensor manager
* add eviction strategy
* polish code
* polish code
* polish comment
* add unit test
* fix sampler bug
* polish code
* fix max sampling cnt resetting bug
* fix sampler bug
* polish code
* fix bug
* fix unit test
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
3 years ago
HELSON
d7ecaf362b
[zero] fix init bugs in zero context ( #686 )
...
* adapt model weight initialization for methods in PyTorch nn.init
3 years ago
Jiarui Fang
0aab52301e
[hotfix] fix a bug in model data stats tracing ( #655 )
3 years ago
YuliangLiu0306
ade05a5d83
[refactor] pipeline, put runtime schedule into engine. ( #627 )
3 years ago
HELSON
e5d615aeee
[hotfix] fix bugs in testing ( #659 )
...
* remove hybrid adam in test_moe_zero_optim
* fix activation checkpointing and its unit test
3 years ago
HELSON
b31daed4cf
fix bugs in CPU adam ( #633 )
...
* add cpu adam counter for all cpu adam
* fixed updating error in adam kernel
3 years ago
HELSON
055fbf5be6
[zero] adapt zero for unsharded parameters (Optimizer part) ( #601 )
3 years ago
アマデウス
354b7954d1
[model checkpoint] added unit tests for checkpoint save/load ( #599 )
3 years ago
FredHuang99
93f14d2a33
[zero] test zero tensor utils ( #609 )
3 years ago
Jiarui Fang
e956d93ac2
[refactor] memory utils ( #577 )
3 years ago
HELSON
e6d50ec107
[zero] adapt zero for unsharded parameters ( #561 )
...
* support existing sharded and unsharded parameters in zero
* add unit test for moe-zero model init
* polish moe gradient handler
3 years ago
ver217
7c6c427db1
[zero] trace states of fp16/32 grad and fp32 param ( #571 )
3 years ago
Jiarui Fang
7675366fce
[polish] rename col_attr -> colo_attr ( #558 )
3 years ago
ver217
014bac0c49
[zero] hijack p.grad in sharded model ( #554 )
...
* hijack p.grad in sharded model
* polish comments
* polish comments
3 years ago
Jiarui Fang
f552b11294
[zero] label state for param fp16 and grad ( #551 )
3 years ago
Jiarui Fang
214da761d4
[zero] add stateful tensor ( #549 )
3 years ago
HELSON
8c90d4df54
[zero] add zero context manager to change config during initialization ( #546 )
3 years ago
Liang Bowen
ec5086c49c
Refactored docstring to google style
3 years ago
Jiarui Fang
53b1b6e340
[zero] non model data tracing ( #545 )
3 years ago
ver217
1f90a3b129
[zero] polish ZeroInitContext ( #540 )
3 years ago
Jiarui Fang
c11ff81b15
[zero] get memory usage of sharded optim v2. ( #542 )
3 years ago
HELSON
a30e2b4c24
[zero] adapt for no-leaf module in zero ( #535 )
...
only process module's own parameters in Zero context
add zero hooks for all modules that contain parameters
gather parameters only belonging to module itself
3 years ago
Jiarui Fang
705f56107c
[zero] refactor model data tracing ( #537 )
3 years ago
Jiarui Fang
a590ed0ba3
[zero] improve the accuracy of get_memory_usage of sharded param ( #538 )
3 years ago
Jiarui Fang
37cb70feec
[zero] get memory usage for sharded param ( #536 )
3 years ago
LuGY
105c5301c3
[zero] added hybrid adam, removed loss scale in adam ( #527 )
...
* [zero] added hybrid adam, removed loss scale of adam
* remove useless code
3 years ago
Jiarui Fang
8d8c5407c0
[zero] refactor model data tracing ( #522 )
3 years ago
Frank Lee
3601b2bad0
[test] fixed rerun_on_exception and adapted test cases ( #487 )
3 years ago
Jiarui Fang
4d322b79da
[refactor] remove old zero code ( #517 )
3 years ago
LuGY
6a3f9fda83
[cuda] modify the fused adam, support hybrid of fp16 and fp32 ( #497 )
3 years ago
Jiarui Fang
920c5889a7
[zero] add colo move inline ( #521 )
3 years ago
Jiarui Fang
0bebda6ea5
[zero] fix init device bug in zero init context unittest ( #516 )
3 years ago
Jiarui Fang
7ef3507ace
[zero] show model data cuda memory usage after zero context init. ( #515 )
3 years ago
Jiarui Fang
9330be0f3c
[memory] set cuda mem frac ( #506 )
3 years ago
Jiarui Fang
0035b7be07
[memory] add model data tensor moving api ( #503 )
3 years ago
Jiarui Fang
a445e118cf
[polish] polish singleton and global context ( #500 )
3 years ago
ver217
9ec1ce6ab1
[zero] sharded model support the reuse of fp16 shard ( #495 )
...
* sharded model supports reuse fp16 shard
* rename variable
* polish code
* polish code
* polish code
3 years ago
ver217
62b0a8d644
[zero] sharded optim support hybrid cpu adam ( #486 )
...
* sharded optim support hybrid cpu adam
* update unit test
* polish docstring
3 years ago
Jiarui Fang
b334822163
[zero] polish sharded param name ( #484 )
...
* [zero] polish sharded param name
* polish code
* polish
* polish code
* polish
* polish
* polish
3 years ago
Jiarui Fang
65c0f380c2
[format] polish name format for MOE ( #481 )
3 years ago
HELSON
7544347145
[MOE] add unitest for MOE experts layout, gradient handler and kernel ( #469 )
3 years ago
HELSON
84fd7c1d4d
add moe context, moe utilities and refactor gradient handler ( #455 )
3 years ago
Frank Lee
af185b5519
[test] fixed amp convergence comparison test ( #454 )
3 years ago
ver217
a241f61b34
[zero] Update initialize for ZeRO ( #458 )
...
* polish code
* shard strategy receives pg in shard() / gather()
* update zero engine
* polish code
3 years ago
ver217
642846d6f9
update sharded optim and fix zero init ctx ( #457 )
3 years ago
Jiarui Fang
e2e9f82588
Revert "[zero] update sharded optim and fix zero init ctx" ( #456 )
...
* Revert "polish code"
This reverts commit 8cf7ff08cf.
* Revert "rename variables"
This reverts commit e99af94ab8.
* Revert "remove surplus imports"
This reverts commit 46add4a5c5.
* Revert "update sharded optim and fix zero init ctx"
This reverts commit 57567ee768.
3 years ago
ver217
8cf7ff08cf
polish code
3 years ago
ver217
46add4a5c5
remove surplus imports
3 years ago
ver217
57567ee768
update sharded optim and fix zero init ctx
3 years ago
Frank Lee
f27d801a13
[test] optimized zero data parallel test ( #452 )
3 years ago
Jiarui Fang
0fcfb1e00d
[test] make zero engine test really work ( #447 )
3 years ago
Frank Lee
bb2790cf0b
optimize engine and trainer test ( #448 )
3 years ago
Frank Lee
b72b8445c6
optimized context test time consumption ( #446 )
3 years ago
Jiarui Fang
496cbb0760
[hotfix] fix initialize bug with zero ( #442 )
3 years ago
Jiarui Fang
17b8274f8a
[unittest] polish zero config in unittest ( #438 )
3 years ago
Jiarui Fang
640a6cd304
[refactor] refactor the initialize method for the new zero design ( #431 )
3 years ago
ver217
fce9432f08
sync before creating empty grad
3 years ago
Jiarui Fang
f9c762df85
[test] merge zero optim tests ( #428 )
3 years ago
Jiarui Fang
5d7dc3525b
[hotfix] run cpu adam unittest in pytest ( #424 )
3 years ago
Jiarui Fang
adebb3e041
[zero] cuda margin space for OS ( #418 )
3 years ago
Jiarui Fang
56bb412e72
[polish] use GLOBAL_MODEL_DATA_TRACER ( #417 )
3 years ago
Jiarui Fang
23ba3fc450
[zero] refactor ShardedOptimV2 init method ( #416 )
3 years ago
Frank Lee
e79ea44247
[fp16] refactored fp16 optimizer ( #392 )
3 years ago
Jiarui Fang
21dc54e019
[zero] memtracer to record cuda memory usage of model data and overall system ( #395 )
3 years ago
Jiarui Fang
a37bf1bc42
[hotfix] rm test_tensor_detector.py ( #413 )
3 years ago
Jiarui Fang
370f567e7d
[zero] new interface for ShardedOptimv2 ( #406 )
3 years ago
LuGY
a9c27be42e
Added tensor detector ( #393 )
...
* Added tensor detector
* Added the - states
* Allowed change include_cpu when detect()
3 years ago
ver217
54fd37f0e0
polish unit test
3 years ago
Frank Lee
1e4bf85cdb
fixed bug in activation checkpointing test ( #387 )
3 years ago
Jiarui Fang
3af13a2c3e
[zero] polish ShardedOptimV2 unittest ( #385 )
...
* place params on cpu after zero init context
* polish code
* bucketized cpu gpu tensor transfer
* find a bug in sharded optim unittest
* add offload unittest for ShardedOptimV2.
* polish code and make it more robust
3 years ago
Frank Lee
526a318032
[unit test] Refactored test cases with component func ( #339 )
...
* refactored test with component func
* fixed bug
3 years ago
LuGY
de46450461
Added activation offload ( #331 )
...
* Added activation offload
* Fixed the import bug, used pytest
3 years ago
Jiarui Fang
b5f43acee3
[zero] find miss code ( #378 )
3 years ago
Jiarui Fang
6b6002962a
[zero] zero init context collect numel of model ( #375 )
3 years ago
jiaruifang
d9217e1960
Revert "[zero] bucketized tensor cpu gpu copy ( #368 )"
...
This reverts commit bef05489b6.
3 years ago
Jiarui Fang
00670c870e
[zero] bucketized tensor cpu gpu copy ( #368 )
3 years ago
Jiarui Fang
44e4891f57
[zero] able to place params on cpu after zero init context ( #365 )
...
* place params on cpu after zero init context
* polish code
3 years ago
Jiarui Fang
ea2872073f
[zero] global model data memory tracer ( #360 )
3 years ago
Jiarui Fang
cb34cd384d
[test] polish zero-related unit tests ( #351 )
3 years ago
ver217
532ae79cb0
add test sharded optim with cpu adam ( #347 )
3 years ago
HELSON
425bb0df3f
Added Profiler Context to manage all profilers ( #340 )
3 years ago
ver217
d0ae0f2215
[zero] update sharded optim v2 ( #334 )
3 years ago
ver217
2b8cddd40e
skip bert in test engine
3 years ago
ver217
f5f0ad266e
fix bert unit test
3 years ago
jiaruifang
d271f2596b
polish engine unit test
3 years ago
jiaruifang
354c0f9047
polish code
3 years ago
jiaruifang
4d94cd513e
adapting bert unit test interface
3 years ago
jiaruifang
7977422aeb
add bert for unit test; sharded model is not able to pass the bert case
3 years ago
ver217
1388671699
[zero] Update sharded model v2 using sharded param v2 ( #323 )
3 years ago
jiaruifang
799d105bb4
using pytest parametrize
3 years ago
jiaruifang
dec24561cf
show pytest parameterize
3 years ago
Jiarui Fang
11bddb6e55
[zero] update zero context init with the updated test utils ( #327 )
3 years ago
Frank Lee
6268446b81
[test] refactored testing components ( #324 )
3 years ago
Jiarui Fang
de0468c7a8
[zero] zero init context ( #321 )
...
* add zero init context
* add more flags for zero init context
fix bug of repeatedly converting param to ShardedParamV2
* polish code
3 years ago
1SAA
73bff11288
Added profiler communication operations
...
Fixed bug for learning rate scheduler
3 years ago
LuGY
a3269de5c9
[zero] cpu adam kernel ( #288 )
...
* Added CPU Adam
* finished the cpu adam
* updated the license
* delete useless parameters, removed resnet
* modified the method of the cpu adam unittest
* deleted some useless codes
* removed useless codes
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
3 years ago
Jiarui Fang
90d3aef62c
[zero] yet an improved sharded param ( #311 )
3 years ago
Jiarui Fang
c9e7d9582d
[zero] polish shard strategy ( #310 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
* add shard strategy
* move shard and gather logic from shard tensor to shard strategy.
* polish code
3 years ago
ver217
36f9a74ab2
fix sharded param hook and unit test
3 years ago
ver217
001ca624dd
impl shard optim v2 and add unit test
3 years ago
Jiarui Fang
74f77e314b
[zero] a shard strategy in granularity of tensor ( #307 )
3 years ago
Jiarui Fang
80364c7686
[zero] sharded tensor ( #305 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add set_payload method for ShardedParam
* [zero] add sharded tensor class
* polish code
3 years ago
Jie Zhu
d344689274
[profiler] primary memory tracer
3 years ago
Jiarui Fang
e17e92c54d
Polish sharded parameter ( #297 )
...
* init shard param from shape tuple
* add more unit tests for shard param
* add more unit tests to sharded param
3 years ago
ver217
7aef75ca42
[zero] add sharded grad and refactor grad hooks for ShardedModel ( #287 )
3 years ago
Frank Lee
27155b8513
added unit test for sharded optimizer ( #293 )
...
* added unit test for sharded optimizer
* refactor for elegance
3 years ago
Frank Lee
e17e54e32a
added buffer sync to naive amp model wrapper ( #291 )
3 years ago
Jiarui Fang
8d653af408
add a common util for hooks registered on parameter. ( #292 )
3 years ago
Jiarui Fang
5a560a060a
Feature/zero ( #279 )
...
* add zero1 (#209 )
* add zero1
* add test zero1
* update zero stage 1 develop (#212 )
* Implement naive zero3 (#240 )
* naive zero3 works well
* add zero3 param manager
* add TODOs in comments
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* fix bugs of hook and add unit tests (#252 )
* add gather full param ctx
* fix sub module streams
* add offload
* fix bugs of hook and add unit tests
* polish code and add state dict hook
* fix bug
* update unit test
* refactor reconstructed zero code
* clip_grad support zero3 and add unit test
* add unit test for Zero3ParameterManager
* [WIP] initialize the shard param class
* [WIP] Yet another sharded model implementation (#274 )
* [WIP] initialize the shard param class
* [WIP] Yet another implementation of shardModel, using a better hook method.
* torch.concat -> torch.cat
* fix test_zero_level_1.py::test_zero_level_1 unit test
* remove deepspeed implementation and refactor for the reconstructed zero module
* polish zero dp unittests
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
3 years ago
1SAA
82023779bb
Added TPExpert for special situation
3 years ago
1SAA
219df6e685
Optimized MoE layer and fixed some bugs;
...
Decreased moe tests;
Added FFNExperts and ViTMoE model
3 years ago
zbian
3dba070580
fixed padding index issue for vocab parallel embedding layers; updated 3D linear to be compatible with examples in the tutorial
3 years ago
アマデウス
9ee197d0e9
moved env variables to global variables; ( #215 )
...
added branch context;
added vocab parallel layers;
moved split_batch from load_batch to tensor parallel embedding layers;
updated gpt model;
updated unit test cases;
fixed few collective communicator bugs
3 years ago
Jiarui Fang
569357fea0
add pytorch hooks ( #179 )
...
* add pytorch hooks
fix #175
* remove licenses in src code
* add gpu memory tracer
* replacing print with logger in ophooks.
3 years ago
Frank Lee
e2089c5c15
adapted for sequence parallel ( #163 )
3 years ago
ver217
7bf1e98b97
pipeline last stage supports multi output ( #151 )
3 years ago
ver217
96780e6ee4
Optimize pipeline schedule ( #94 )
...
* add pipeline shared module wrapper and update load batch
* added model parallel process group for amp and clip grad (#86 )
* added model parallel process group for amp and clip grad
* update amp and clip with model parallel process group
* remove pipeline_prev/next group (#88 )
* micro batch offload
* optimize pipeline gpu memory usage
* pipeline can receive tensor shape (#93 )
* optimize pipeline gpu memory usage
* fix grad accumulation step counter
* rename classes and functions
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
3 years ago
アマデウス
01a80cd86d
Hotfix/Colossalai layers ( #92 )
...
* optimized 1d layer apis; reorganized nn.layer modules; fixed tests
* fixed 2.5d runtime issue
* reworked split batch, now called in trainer.schedule.load_batch
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
3 years ago
アマデウス
0fedef4f3c
Layer integration ( #83 )
...
* integrated parallel layers for ease of building models
* integrated 2.5d layers
* cleaned codes and unit tests
* added log metric by step hook; updated imagenet benchmark; fixed some bugs
* reworked initialization; cleaned codes
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
3 years ago
ver217
8f02a88db2
add interleaved pipeline, fix naive amp and update pipeline model initializer ( #80 )
3 years ago
Frank Lee
91c327cb44
fixed zero level 3 dtype bug ( #76 )
3 years ago
Frank Lee
cd9c28e055
added CI for unit testing ( #69 )
3 years ago
Frank Lee
da01c234e1
Develop/experiments ( #59 )
...
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
* Split conv2d, class token, positional embedding in 2d, Fix random number in ddp
Fix convergence in cifar10, Imagenet1000
* Integrate 1d tensor parallel in Colossal-AI (#39 )
* fixed 1D and 2D convergence (#38 )
* optimized 2D operations
* fixed 1D ViT convergence problem
* Feature/ddp (#49 )
* remove redundancy func in setup (#19 ) (#20 )
* use env to control the language of doc (#24 ) (#25 )
* Support TP-compatible Torch AMP and Update trainer API (#27 )
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29 )
* add explanation for ViT example (#35 ) (#36 )
* support torch ddp
* fix loss accumulation
* add log for ddp
* change seed
* modify timing hook
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
* Feature/pipeline (#40 )
* remove redundancy func in setup (#19 ) (#20 )
* use env to control the language of doc (#24 ) (#25 )
* Support TP-compatible Torch AMP and Update trainer API (#27 )
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
* add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29 )
* add explanation for ViT example (#35 ) (#36 )
* optimize communication of pipeline parallel
* fix grad clip for pipeline
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
* optimized 3d layer to fix slow computation ; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers (#51 )
* Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset
* update api for better usability (#58 )
update api for better usability
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
Co-authored-by: binmakeswell <binmakeswell@gmail.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
3 years ago
Frank Lee
3defa32aee
Support TP-compatible Torch AMP and Update trainer API ( #27 )
...
* Add gradient accumulation, fix lr scheduler
* fix FP16 optimizer and adapted torch amp with tensor parallel (#18 )
* fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
* fixed trainer
* Revert "fixed trainer"
This reverts commit 2e0b0b7699.
* improved consistency between trainer, engine and schedule (#23 )
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: 1SAA <c2h214748@gmail.com>
Co-authored-by: ver217 <lhx0217@gmail.com>
3 years ago
アマデウス
3245a69fc2
cleaned test scripts
3 years ago
zbian
404ecbdcc6
Migrated project
3 years ago