Boyuan Yao
22e947f982
[autoparallel] fix runtime apply memory estimation ( #2281 )
* [autoparallel] align the data_ptr with the old version of auto activation checkpoint pipeline
* [autoparallel] using fwd_time and bwd_time instead of fwd_flop and bwd_flop
* [autoparallel] specify comm nodes' memory cost in construct chain
* [autoparallel] fix wrong runtime apply calculation
* [autoparallel] fix wrong runtime apply calculation
* [autoparallel] fix wrong runtime apply calculation
2 years ago
xcnick
85178a397a
[hotfix] fix error for torch 2.0 ( #2243 )
2 years ago
Boyuan Yao
24246f7aa5
[autoparallel] Attach input, buffer and output tensor to MetaInfo class ( #2162 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
* [autoparallel] add pooling metainfo
* [autoparallel] add F.linear metainfo generator
* [autoparallel] add binary elementwise metainfo
* [fx] recover profiler
* [autoparallel] fix forward memory calculation
* [autoparallel] modify constants.py
* [autoparallel] remove redundant print
* [autoparallel] add F.conv metainfo
* [autoparallel] linear fix
* [autoparallel] memory estimation for communication actions
* [autoparallel] fix docstring
* [autoparallel] fix variables name
* [autoparallel] attach tensor to metainfo class
* [autoparallel] fix dangerous try except
* [autoparallel] attach memory cost to shape consistency node
* [autoparallel] attach shape consistency node's metainfo to the node
* [autoparallel] remove todo in shape consistency memory estimation
* [autoparallel] fix the annotation
2 years ago
HELSON
2458659919
[zero] fix error for BEiT models ( #2169 )
* [zero] fix error for BEiT models
* [ColoParameter] add unpack operation for tuple arguments
* fix bugs
* fix chunkv2 unit testing
* add assertion for gradient state
2 years ago
Boyuan Yao
cfe2a9bd90
[autoparallel] memory estimation for shape consistency ( #2144 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
* [autoparallel] add pooling metainfo
* [autoparallel] add F.linear metainfo generator
* [autoparallel] add binary elementwise metainfo
* [fx] recover profiler
* [autoparallel] fix forward memory calculation
* [autoparallel] modify constants.py
* [autoparallel] remove redundant print
* [autoparallel] add F.conv metainfo
* [autoparallel] linear fix
* [autoparallel] memory estimation for communication actions
* [autoparallel] fix docstring
* [autoparallel] fix variables name
2 years ago
Jiarui Fang
2827f41898
[Gemini] GeminiDPP convert to PyTorch Module. ( #2151 )
2 years ago
Jiarui Fang
e99edfcb51
[NFC] polish comments for Chunk class ( #2116 )
2 years ago
Jiarui Fang
b3b89865e2
[Gemini] ParamOpHook -> ColoParamOpHook ( #2080 )
2 years ago
YuliangLiu0306
81330b0352
[autoparallel] add experimental permute handler ( #2029 )
2 years ago
Genghan Zhang
d655eea515
[autoparallel] mix gather ( #1977 )
* Add mix-gather
* Add comments
* Add comments
* Polish comments
* Change the global rank assumption
* Add tests
* Add two-step tests
* Fix 10 and 01
* Skip test because of the number of GPUs
2 years ago
YuliangLiu0306
36c0f3ea5b
[autoparallel] remove redundancy comm node ( #1893 )
2 years ago
YuliangLiu0306
49216d7ab1
[autoparallel] fix bugs caused by negative dim key ( #1808 )
* [autoparallel] fix bugs caused by negative dim key
* fix import error
* fix matmul test issue
* fix unit test issue
2 years ago
Jiarui Fang
218c75fd9d
[NFC] polish type hint for shape consistency ( #1801 )
* [NFC] polish type hint for shape consistency
* polish code
* polish code
2 years ago
HELSON
c6a1a62636
[hotfix] fix zero's incompatibility with checkpoint in torch-1.12 ( #1786 )
* [hotfix] fix zero's incompatibility with checkpoint in torch-1.12
* [zero] add cpu shard init
* [zero] add tiny example test
* [colo_tensor] fix bugs for torch-1.11
2 years ago
Frank Lee
f3f19a5c47
[autoparallel] added matmul handler ( #1763 )
* [autoparallel] added matmul handler
* polish code
2 years ago
YuliangLiu0306
b0f7c8bde8
[autoparallel] update CommSpec to CommActions ( #1768 )
* [autoparallel] update CommSpec to CommActions
* polish code
2 years ago
YuliangLiu0306
b4cc59b61e
[autoparallel] add numerical test for node strategies ( #1760 )
* [autoparallel] add numerical test for node strategies
* polish code
* polish code
2 years ago
YuliangLiu0306
980ed21723
[autoparallel] shard param and buffer as expected ( #1753 )
* [autoparallel] shard param and buffer as expected
* fix unit test issue
2 years ago
YuliangLiu0306
a4ce180e85
[autoparallel] add sequential order to communication actions ( #1735 )
2 years ago
Frank Lee
993b8875b6
[autoparallel] handled illegal sharding strategy in shape consistency ( #1744 )
* [autoparallel] handled illegal sharding strategy in shape consistency
* polish code
2 years ago
Frank Lee
eee84908d4
[autoparallel] handled illegal sharding strategy ( #1728 )
* [autoparallel] handled illegal sharding strategy
* polish code
2 years ago
YuliangLiu0306
51b89d2202
[autoparallel] runtime_backward_apply ( #1720 )
2 years ago
Frank Lee
4973157ad7
[autoparallel] added sharding spec conversion for linear handler ( #1687 )
2 years ago
YuliangLiu0306
3f068d1409
[autoparallel] update CommSpec ( #1667 )
2 years ago
Frank Lee
154d3ef432
[fix] fixed the collective pattern name for consistency ( #1649 )
* [fix] fixed the collective pattern name for consistency
* polish code
2 years ago
YuliangLiu0306
702dbc5288
[tensor] use communication autograd func ( #1617 )
* [tensor] use communication autograd func
* change all to all comm spec info
* rename pattern and distinguish fwd/bwd
* polish code
2 years ago
Frank Lee
27fe8af60c
[autoparallel] refactored shape consistency to remove redundancy ( #1591 )
* [autoparallel] refactored shape consistency to remove redundancy
* polish code
* polish code
* polish code
2 years ago
YuliangLiu0306
44c866a3e3
[autoparallel] change the merge node logic ( #1533 )
2 years ago
YuliangLiu0306
4b03c25f85
[tensor] add 1D device mesh ( #1492 )
2 years ago
YuliangLiu0306
26a37b5cd5
[autoparallel] Add conv handler to generate strategies and costs info for conv ( #1467 )
2 years ago
Jiarui Fang
1b491ad7de
[doc] update docstring in ProcessGroup ( #1468 )
2 years ago
YuliangLiu0306
b73fb7a077
[tensor] support runtime ShardingSpec apply ( #1453 )
* [tensor] support runtime ShardingSpec apply
* polish code
* polish code
2 years ago
Jiarui Fang
36824a304c
[Doc] add more doc for ColoTensor. ( #1458 )
2 years ago
Jiarui Fang
a1476ea882
[NFC] polish doc style for ColoTensor ( #1457 )
2 years ago
YuliangLiu0306
0f3042363c
[tensor] shape consistency generate transform path and communication cost ( #1435 )
* [tensor] shape consistency output transform path and communication cost
* polish code
2 years ago
Frank Lee
ae1b58cd16
[tensor] added linear implementation for the new sharding spec ( #1416 )
* [tensor] added linear implementation for the new sharding spec
* polish code
2 years ago
YuliangLiu0306
33f0744d51
[tensor] add shape consistency feature to support auto spec transform ( #1418 )
* [tensor] add shape consistency feature to support auto sharding spec transform.
* [tensor] remove unused argument in simulator, add doc string for target pair.
2 years ago
YuliangLiu0306
7c96055c68
[tensor] build sharding spec to replace distspec in the future. ( #1405 )
2 years ago
HELSON
c7221cb2d4
[hotfix] adapt ProcessGroup and Optimizer to ColoTensor ( #1388 )
2 years ago
ver217
828b9e5e0d
[hotfix] fix zero optim save/load state dict ( #1381 )
2 years ago
HELSON
943a96323e
[hotfix] fix no optimizer in save/load ( #1363 )
2 years ago
ver217
d068af81a3
[doc] update rst and docstring ( #1351 )
* update rst
* add zero docstr
* fix docstr
* remove fx.tracer.meta_patch
* fix docstr
* fix docstr
* update fx rst
* fix fx docstr
* remove useless rst
2 years ago
HELSON
7a8702c06d
[colotensor] add Tensor.view op and its unit test ( #1343 )
[colotensor] add megatron initialization for gpt2
2 years ago
HELSON
f92c100ddd
[checkpoint] use gather_tensor in checkpoint and update its unit test ( #1339 )
2 years ago
ver217
0c51ff2c13
[hotfix] ZeroDDP use new process group ( #1333 )
* process group supports getting ranks in group
* chunk mgr receives a process group
* update unit test
* fix unit tests
2 years ago
HELSON
d49708ae43
[hotfix] fix ddp for unit test test_gpt2 ( #1326 )
2 years ago
HELSON
1b41686461
[hotfix] fix unit test test_module_spec ( #1321 )
2 years ago
Jiarui Fang
85f933b58b
[Optimizer] Remove useless ColoOptimizer ( #1312 )
2 years ago
Jiarui Fang
9f10524313
[Optimizer] polish the init method of ColoOptimizer ( #1310 )
2 years ago
HELSON
260a55804a
[hotfix] fix shape error in backward when using ColoTensor ( #1298 )
2 years ago