Jiarui Fang
e99edfcb51
[NFC] polish comments for Chunk class ( #2116 )
2 years ago
Jiarui Fang
b3b89865e2
[Gemini] ParamOpHook -> ColoParamOpHook ( #2080 )
2 years ago
YuliangLiu0306
81330b0352
[autoparallel] add experimental permute handler ( #2029 )
2 years ago
Genghan Zhang
d655eea515
[autoparallel] mix gather ( #1977 )
...
* Add mix-gather
* Add comments
* Add comments
* Polish comments
* Change the global rank assumption
* Add tests
* Add two-step tests
* Fix 10 and 01
* Skip test because of the number of GPUs
2 years ago
YuliangLiu0306
36c0f3ea5b
[autoparallel] remove redundant comm node ( #1893 )
2 years ago
YuliangLiu0306
49216d7ab1
[autoparallel] fix bugs caused by negative dim key ( #1808 )
...
* [autoparallel] fix bugs caused by negative dim key
* fix import error
* fix matmul test issue
* fix unit test issue
2 years ago
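The fix above concerns sharding dictionaries keyed by tensor dimension, where a negative index such as -1 would otherwise create a second key for the same axis. A minimal sketch of the usual normalization (the helper name is hypothetical, not the project's API):

    import torch

    def normalize_dim(dim: int, ndim: int) -> int:
        # Map a possibly negative dim onto its canonical non-negative
        # index so -1 and ndim - 1 resolve to the same dictionary key.
        if not -ndim <= dim < ndim:
            raise IndexError(f"dim {dim} out of range for {ndim}-d tensor")
        return dim % ndim

    x = torch.randn(4, 8)
    assert normalize_dim(-1, x.dim()) == normalize_dim(1, x.dim()) == 1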
Jiarui Fang
218c75fd9d
[NFC] polish type hint for shape consistency ( #1801 )
...
* [NFC] polish type hint for shape consistency
* polish code
* polish code
2 years ago
HELSON
c6a1a62636
[hotfix] fix zero's incompatibility with checkpoint in torch-1.12 ( #1786 )
...
* [hotfix] fix zero's incompatibility with checkpoint in torch-1.12
* [zero] add cpu shard init
* [zero] add tiny example test
* [colo_tensor] fix bugs for torch-1.11
2 years ago
Frank Lee
f3f19a5c47
[autoparallel] added matmul handler ( #1763 )
...
* [autoparallel] added matmul handler
* polish code
2 years ago
YuliangLiu0306
b0f7c8bde8
[autoparallel] update CommSpec to CommActions ( #1768 )
...
* [autoparallel] update CommSpec to CommActions
* polish code
2 years ago
YuliangLiu0306
b4cc59b61e
[autoparallel] add numerical test for node strategies ( #1760 )
...
* [autoparallel] add numerical test for node strategies
* polish code
* polish code
2 years ago
YuliangLiu0306
980ed21723
[autoparallel] shard param and buffer as expected ( #1753 )
...
* [autoparallel] shard param and buffer as expected
* fix unit test issue
2 years ago
YuliangLiu0306
a4ce180e85
[autoparallel] add sequential order to communication actions ( #1735 )
2 years ago
Frank Lee
993b8875b6
[autoparallel] handled illegal sharding strategy in shape consistency ( #1744 )
...
* [autoparallel] handled illegal sharding strategy in shape consistency
* polish code
2 years ago
Frank Lee
eee84908d4
[autoparallel] handled illegal sharding strategy ( #1728 )
...
* [autoparallel] handled illegal sharding strategy
* polish code
2 years ago
YuliangLiu0306
51b89d2202
[autoparallel] runtime_backward_apply ( #1720 )
2 years ago
Frank Lee
4973157ad7
[autoparallel] added sharding spec conversion for linear handler ( #1687 )
2 years ago
YuliangLiu0306
3f068d1409
[autoparallel] update CommSpec ( #1667 )
2 years ago
Frank Lee
154d3ef432
[fix] fixed the collective pattern name for consistency ( #1649 )
...
* [fix] fixed the collective pattern name for consistency
* polish code
2 years ago
YuliangLiu0306
702dbc5288
[tensor] use communication autograd func ( #1617 )
...
* [tensor] use communication autograd func
* change all-to-all comm spec info
* rename pattern and distinguish fwd/bwd
* polish code
2 years ago
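The change above wraps collectives in torch.autograd.Function subclasses so each communication pattern declares its own forward and backward behavior. A sketch of one such pair, assuming an initialized torch.distributed process group (the class name is illustrative):

    import torch
    import torch.distributed as dist

    class AllReduceFwdIdentityBwd(torch.autograd.Function):
        # Forward all-reduces the activation; backward lets gradients
        # flow through unchanged. The dual pairing (identity forward,
        # all-reduce backward) follows the same template.
        @staticmethod
        def forward(ctx, x, group=None):
            x = x.clone()
            dist.all_reduce(x, group=group)
            return x

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output, None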
Frank Lee
27fe8af60c
[autoparallel] refactored shape consistency to remove redundancy ( #1591 )
...
* [autoparallel] refactored shape consistency to remove redundancy
* polish code
* polish code
* polish code
2 years ago
YuliangLiu0306
44c866a3e3
[autoparallel] change the merge node logic ( #1533 )
2 years ago
YuliangLiu0306
4b03c25f85
[tensor] add 1D device mesh ( #1492 )
2 years ago
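A 1-D device mesh is essentially an ordered list of ranks plus the process group spanning them. A toy sketch under an initialized torch.distributed runtime (the class is illustrative, not the project's DeviceMesh):

    import torch.distributed as dist

    class DeviceMesh1D:
        # One axis of devices: the rank order defines shard placement
        # along that axis, and the group carries its collectives.
        def __init__(self, ranks):
            self.ranks = list(ranks)
            self.group = dist.new_group(ranks=self.ranks)

        def size(self) -> int:
            return len(self.ranks)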
YuliangLiu0306
26a37b5cd5
[autoparallel] Add conv handler to generate strategies and costs info for conv ( #1467 )
2 years ago
Jiarui Fang
1b491ad7de
[doc] update docstring in ProcessGroup ( #1468 )
2 years ago
YuliangLiu0306
b73fb7a077
[tensor] support runtime ShardingSpec apply ( #1453 )
...
* [tensor] support runtime ShardingSpec apply
* polish code
* polish code
2 years ago
Jiarui Fang
36824a304c
[Doc] add more doc for ColoTensor. ( #1458 )
2 years ago
Jiarui Fang
a1476ea882
[NFC] polish doc style for ColoTensor ( #1457 )
2 years ago
YuliangLiu0306
0f3042363c
[tensor] shape consistency generate transform path and communication cost ( #1435 )
...
* [tensor] shape consistency output transform path and communication cost
* polish code
2 years ago
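Generating a transform path means searching the graph of sharding specs, where each edge is a one-step conversion (all-gather, shard, all-to-all) weighted by its communication cost. A generic Dijkstra sketch with a hypothetical neighbors(...) interface:

    import heapq

    def cheapest_transform_path(src, dst, neighbors):
        # `neighbors(spec)` yields (next_spec, comm_cost) pairs for
        # one-step conversions; specs are assumed hashable and
        # orderable, e.g. strings like "[S0, R]".
        frontier = [(0.0, src, [src])]
        seen = set()
        while frontier:
            cost, spec, path = heapq.heappop(frontier)
            if spec == dst:
                return path, cost
            if spec in seen:
                continue
            seen.add(spec)
            for nxt, step in neighbors(spec):
                if nxt not in seen:
                    heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
        return None, float("inf")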
Frank Lee
ae1b58cd16
[tensor] added linear implementation for the new sharding spec ( #1416 )
...
* [tensor] added linear implementation for the new sharding spec
* polish code
2 years ago
YuliangLiu0306
33f0744d51
[tensor] add shape consistency feature to support auto spec transform ( #1418 )
...
* [tensor] add shape consistency feature to support auto sharding spec transform.
* [tensor] remove unused argument in simulator, add doc string for target pair.
2 years ago
YuliangLiu0306
7c96055c68
[tensor] build sharding spec to replace distspec in the future. ( #1405 )
2 years ago
HELSON
c7221cb2d4
[hotfix] adapt ProcessGroup and Optimizer to ColoTensor ( #1388 )
2 years ago
ver217
828b9e5e0d
[hotfix] fix zero optim save/load state dict ( #1381 )
2 years ago
HELSON
943a96323e
[hotfix] fix no optimizer in save/load ( #1363 )
2 years ago
ver217
d068af81a3
[doc] update rst and docstring ( #1351 )
...
* update rst
* add zero docstr
* fix docstr
* remove fx.tracer.meta_patch
* fix docstr
* fix docstr
* update fx rst
* fix fx docstr
* remove useless rst
2 years ago
HELSON
7a8702c06d
[colotensor] add Tensor.view op and its unit test ( #1343 )
...
[colotensor] add megatron initialization for gpt2
2 years ago
HELSON
f92c100ddd
[checkpoint] use gather_tensor in checkpoint and update its unit test ( #1339 )
2 years ago
ver217
0c51ff2c13
[hotfix] ZeroDDP use new process group ( #1333 )
...
* process group supports getting ranks in group
* chunk mgr receives a process group
* update unit test
* fix unit tests
2 years ago
HELSON
d49708ae43
[hotfix] fix ddp for unit test test_gpt2 ( #1326 )
2 years ago
HELSON
1b41686461
[hotfix] fix unit test test_module_spec ( #1321 )
2 years ago
Jiarui Fang
85f933b58b
[Optimizer] Remove useless ColoOptimizer ( #1312 )
2 years ago
Jiarui Fang
9f10524313
[Optimizer] polish the init method of ColoOptimizer ( #1310 )
2 years ago
HELSON
260a55804a
[hotfix] fix shape error in backward when using ColoTensor ( #1298 )
2 years ago
Jiarui Fang
556b9b7e1a
[hotfix] Dist Mgr gather torch version ( #1284 )
...
* make it faster
* [hotfix] torchvision fx tests
* [hotfix] rename duplicated named test_gpt.py
* [hotfix] dist mgr torch version
2 years ago
ver217
7aadcbd070
hotfix colotensor _scan_for_pg_from_args ( #1276 )
2 years ago
Jiarui Fang
c92f84fcdb
[tensor] distributed checkpointing for parameters ( #1240 )
2 years ago
Jiarui Fang
1aad903c15
[tensor] redistribute among different process groups ( #1247 )
...
* make it faster
* [tensor] rename convert_to_dist -> redistribute
* [tensor] ShardSpec and ReplicaSpec
* [tensor] redistribute among diff pgs
* polish code
2 years ago
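One concrete redistribution step named above, ShardSpec to ReplicaSpec, amounts to an all-gather along the sharded dim. A self-contained sketch using plain torch.distributed rather than the project's API:

    import torch
    import torch.distributed as dist

    def replicate_from_shard(shard: torch.Tensor, dim: int, group=None):
        # Gather every rank's dim-`dim` shard and concatenate, so each
        # rank ends up holding the full, replicated tensor.
        world = dist.get_world_size(group)
        pieces = [torch.empty_like(shard) for _ in range(world)]
        dist.all_gather(pieces, shard.contiguous(), group=group)
        return torch.cat(pieces, dim=dim)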
Jiarui Fang
9bcd2fd4af
[tensor] a shorter shard and replicate spec ( #1245 )
2 years ago
Jiarui Fang
2699dfbbfd
[rename] convert_to_dist -> redistribute ( #1243 )
2 years ago
HELSON
f6add9b720
[tensor] redirect .data.__get__ to a tensor instance ( #1239 )
2 years ago
Jiarui Fang
20da6e48c8
[checkpoint] save sharded optimizer states ( #1237 )
2 years ago
Jiarui Fang
4a76084dc9
[tensor] add zero_like colo op, important for Optimizer ( #1236 )
2 years ago
Jiarui Fang
3b500984b1
[tensor] fix some unittests ( #1234 )
2 years ago
HELSON
f071b500b6
[polish] polish __repr__ for ColoTensor, DistSpec, ProcessGroup ( #1235 )
2 years ago
Yi Zhao
04537bf83e
[checkpoint] support generalized scheduler ( #1222 )
2 years ago
Jiarui Fang
a98319f023
[tensor] torch function return colotensor ( #1229 )
2 years ago
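Making torch functions return the tensor subclass is the standard __torch_function__ pattern. A toy stand-in (ColoLikeTensor is illustrative, not the real ColoTensor):

    import torch

    class ColoLikeTensor(torch.Tensor):
        # Route torch.* calls through __torch_function__ and re-wrap
        # plain tensor results, so ops like torch.zeros_like keep
        # returning the subclass instead of a bare torch.Tensor.
        @classmethod
        def __torch_function__(cls, func, types, args=(), kwargs=None):
            out = super().__torch_function__(func, types, args, kwargs or {})
            if isinstance(out, torch.Tensor) and not isinstance(out, cls):
                out = out.as_subclass(cls)
            return out

    t = torch.randn(2, 2).as_subclass(ColoLikeTensor)
    assert isinstance(torch.zeros_like(t), ColoLikeTensor)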
HELSON
280a81243d
[tensor] improve robustness of class 'ProcessGroup' ( #1223 )
2 years ago
Jiarui Fang
15d988f954
[tensor] sharded global process group ( #1219 )
2 years ago
Jiarui Fang
ae7d3f4927
[refactor] move process group from _DistSpec to ColoTensor. ( #1203 )
2 years ago
Jiarui Fang
b5f25eb32a
[Tensor] add cpu group to ddp ( #1200 )
2 years ago
Jiarui Fang
060b917daf
[refactor] remove gpc dependency in colotensor's _ops ( #1189 )
2 years ago
Jiarui Fang
c463f8adf9
[tensor] remove gpc in tensor tests ( #1186 )
2 years ago
Jiarui Fang
372f791444
[refactor] move chunk and chunkmgr to directory gemini ( #1182 )
2 years ago
Jiarui Fang
7487215b95
[ColoTensor] add independent process group ( #1179 )
2 years ago
Jiarui Fang
1b657f9ce1
[tensor] revert local view back ( #1178 )
2 years ago
Jiarui Fang
0dd4e2bbfb
[Tensor] rename some APIs in TensorSpec and polish view unit test ( #1176 )
2 years ago
Ziyue Jiang
dd0420909f
[Tensor] rename parallel_action ( #1174 )
...
* rename parallel_action
* polish
2 years ago
Jiarui Fang
aa7bef73d4
[Tensor] distributed view supports inter-process hybrid parallel ( #1169 )
2 years ago
Jiarui Fang
4b9bba8116
[ColoTensor] rename APIs and add output_replicate to ComputeSpec ( #1168 )
2 years ago
Jiarui Fang
f4ef224358
[Tensor] remove ParallelAction, use ComputeSpec instead ( #1166 )
2 years ago
Jiarui Fang
177c374401
remove gather out in parallel action ( #1163 )
2 years ago
ver217
634eecb98e
mark sanity_check of dist_spec_mgr as staticmethod ( #1161 )
2 years ago
ver217
4e67b2a890
fix chunk move device ( #1158 )
2 years ago
Jiarui Fang
07f9c781f9
[graph] improve the graph building. ( #1157 )
2 years ago
ver217
ffa025e120
[tensor] dist spec s2s uses all-to-all ( #1136 )
...
* dist spec s2s uses all-to-all
* update unit test
* add sanity check
* polish unit test with titans
* add sanity check for DistMgr
* add sanity check
Co-authored-by: jiaruifang <fangjiarui123@gmail.com>
2 years ago
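Switching shard-to-shard (S->S) conversion to all-to-all avoids materializing the full tensor: each rank splits its shard along the new dim, exchanges pieces, and concatenates along the old dim. A sketch assuming the global shape divides evenly by the world size:

    import torch
    import torch.distributed as dist

    def reshard_s2s(local: torch.Tensor, src_dim: int, dst_dim: int, group=None):
        # The shard moves from src_dim to dst_dim in a single collective.
        world = dist.get_world_size(group)
        send = [c.contiguous() for c in local.chunk(world, dim=dst_dim)]
        recv = [torch.empty_like(c) for c in send]
        dist.all_to_all(recv, send, group=group)
        return torch.cat(recv, dim=src_dim)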
Jiarui Fang
8cdce0399c
[ColoTensor] improves init functions. ( #1150 )
2 years ago
Frank Lee
0e4e62d30d
[tensor] added __repr__ to spec ( #1147 )
2 years ago
ver217
789cad301b
[hotfix] fix param op hook ( #1131 )
...
* fix param op hook
* update zero tp test
* fix bugs
2 years ago
ver217
7d14b473f0
[gemini] gemini mgr supports "cpu" placement policy ( #1118 )
...
* update gemini mgr
* update chunk
* add docstr
* polish placement policy
* update test chunk
* update test zero
* polish unit test
* remove useless unit test
2 years ago
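A placement policy decides where tensor payloads rest between uses; under the "cpu" policy they sit in host memory and are moved to the compute device only around each op. A reduced sketch of the dispatch (function names hypothetical):

    import torch

    def make_placement_policy(name: str):
        # Maps a policy name to "where does the payload rest": "cuda"
        # pins everything to the GPU, "cpu" offloads between uses.
        if name == "cuda":
            return lambda t: t.to("cuda")
        if name == "cpu":
            return lambda t: t.to("cpu")
        raise ValueError(f"unknown placement policy: {name}")

    resting = make_placement_policy("cpu")(torch.randn(2, 2))
    assert resting.device.type == "cpu"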
ver217
f99f56dff4
fix colo parameter torch function ( #1117 )
2 years ago
ver217
895c1c5ee7
[tensor] refactor param op hook ( #1097 )
...
* refactor param op hook
* add docstr
* fix bug
2 years ago
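A parameter-op hook fires callbacks just before and after an op touches a set of parameters, which is where ZeRO fetches and later releases the backing chunks. A toy sketch of the shape of such a hook (names hypothetical):

    import torch
    from contextlib import contextmanager

    class LoggingParamOpHook:
        # Stand-in hook: a real one would fetch chunks in pre_forward
        # and release or offload them in post_forward.
        def pre_forward(self, params):
            print("fetch", [tuple(p.shape) for p in params])

        def post_forward(self, params):
            print("release", [tuple(p.shape) for p in params])

    @contextmanager
    def param_op(hook, params):
        hook.pre_forward(params)
        try:
            yield
        finally:
            hook.post_forward(params)

    w = torch.nn.Parameter(torch.randn(4, 4))
    with param_op(LoggingParamOpHook(), [w]):
        y = torch.randn(2, 4) @ w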
Frank Lee
cb18922c47
[doc] added documentation to chunk and chunk manager ( #1094 )
...
* [doc] added documentation to chunk and chunk manager
* polish code
* polish code
* polish code
3 years ago
ver217
1f894e033f
[gemini] zero supports gemini ( #1093 )
...
* add placement policy
* add gemini mgr
* update mem stats collector
* update zero
* update zero optim
* fix bugs
* zero optim monitor os
* polish unit test
* polish unit test
* add assert
3 years ago
ver217
be01db37c8
[tensor] refactor chunk mgr and impl MemStatsCollectorV2 ( #1077 )
...
* polish chunk manager
* polish unit test
* impl add_extern_static_tensor for chunk mgr
* add mem stats collector v2
* polish code
* polish unit test
* polish code
* polish get chunks
3 years ago
ver217
1b17859328
[tensor] chunk manager monitor mem usage ( #1076 )
3 years ago
ver217
98cdbf49c6
[hotfix] fix chunk comm src rank ( #1072 )
3 years ago
ver217
c5cd3b0f35
[zero] zero optim copy chunk rather than copy tensor ( #1070 )
3 years ago
Jiarui Fang
a00644079e
reorganize colotensor directory ( #1062 )
...
* reorganize colotensor directory
* polish code
3 years ago
Ziyue Jiang
df9dcbbff6
[Tensor] add hybrid device demo and fix bugs ( #1059 )
3 years ago
ver217
e1922ea4f6
[zero] add chunk size search for chunk manager ( #1052 )
3 years ago
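Searching for a chunk size means trying candidates and scoring how much memory each wastes when parameters are packed greedily into fixed-size chunks. A simplified sketch (the project's search presumably also weighs other constraints):

    def search_chunk_size(param_numels, candidates):
        # Return the candidate whose greedy packing allocates the least
        # total memory; oversized tensors get a dedicated chunk.
        def allocated(size):
            total, cur = 0, 0
            for n in param_numels:
                if n > size:
                    total += n
                    continue
                if cur + n > size:
                    total += size
                    cur = 0
                cur += n
            return total + (size if cur else 0)
        return min(candidates, key=allocated)

    best = search_chunk_size([3_000_000, 1_200_000, 800_000],
                             [2**20, 2**21, 2**22])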
ver217
51b9a49655
[zero] add zero optimizer for ColoTensor ( #1046 )
...
* add zero optimizer
* torch ok
* unit test ok
* polish code
* fix bugs
* polish unit test
* polish zero optim
* polish colo ddp v2
* refactor folder structure
* add comment
* polish unit test
* polish zero optim
* polish unit test
3 years ago
ver217
7faef93326
fix dist spec mgr ( #1045 )
3 years ago
ver217
9492a561c3
[tensor] ColoTensor supports ZeRo ( #1015 )
...
* impl chunk manager
* impl param op hook
* add reduce_chunk
* add zero hook v2
* add zero dp
* fix TensorInfo
* impl load balancing when using zero without chunk
* fix zero hook
* polish chunk
* fix bugs
* ddp ok
* zero ok
* polish code
* fix bugs about load balancing
* polish code
* polish code
* add end-to-end test
* polish code
* polish code
* polish code
* fix typo
* add test_chunk
* fix bugs
* fix bugs
* polish code
3 years ago
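The chunk manager introduced above packs many parameters into one flat buffer so ZeRO can move, broadcast, or reduce a whole chunk with a single collective. A toy sketch of the packing step (not the project's Chunk class):

    import torch

    class ChunkSketch:
        # One flat buffer backing many parameters; each appended param's
        # storage is re-pointed at its slice of the chunk.
        def __init__(self, capacity: int, dtype=torch.float32):
            self.data = torch.zeros(capacity, dtype=dtype)
            self.offset = 0

        def append(self, param: torch.nn.Parameter):
            n = param.numel()
            assert self.offset + n <= self.data.numel(), "chunk is full"
            self.data[self.offset:self.offset + n] = param.data.view(-1)
            param.data = self.data[self.offset:self.offset + n].view(param.shape)
            self.offset += n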
Ziyue Jiang
7c530b9de2
[Tensor] add Parameter inheritance for ColoParameter ( #1041 )
...
* add Parameter inheritance for ColoParameter
* remove tricks
* remove tricks
* polish
* polish
3 years ago
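Inheriting from nn.Parameter rather than wrapping it keeps isinstance checks, Module.parameters(), and optimizers working without the "tricks" the commit body removes. The usual __new__-based pattern, with a hypothetical spec attribute:

    import torch
    import torch.nn as nn

    class ColoLikeParameter(nn.Parameter):
        # Subclass keeps full nn.Parameter semantics while carrying
        # extra distributed metadata (here: a placeholder `spec`).
        def __new__(cls, data=None, requires_grad=True, spec=None):
            obj = super().__new__(cls, data, requires_grad)
            obj.spec = spec
            return obj

    p = ColoLikeParameter(torch.randn(2, 2), spec="replicated")
    assert isinstance(p, nn.Parameter)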
Ziyue Jiang
6c5996a56e
[Tensor] add module check and bert test ( #1031 )
...
* add Embedding
* Add bert test
* polish
* add check module test
* polish
* polish
* polish
* polish
3 years ago
Ziyue Jiang
32291dd73f
[Tensor] add module handler for linear ( #1021 )
...
* add module spec for linear
* polish
* polish
* polish
3 years ago
ver217
a3b66f6def
[tensor] refactor parallel action ( #1007 )
...
* refactor parallel action
* polish unit tests
3 years ago
ver217
ad536e308e
[tensor] refactor colo-tensor ( #992 )
...
* refactor colo-tensor and update linear op
* polish code
* polish code
* update ops and unit tests
* update unit tests
* polish code
* rename dist_spec module
* polish code
* polish code
* remove unneeded import
* fix pipelinable
3 years ago
Jiarui Fang
802ac297cc
[Tensor] remove useless import in tensor dir ( #997 )
3 years ago