YuliangLiu0306
78509124d3
[autoparallel] update getitem handler ( #2207 )
2022-12-27 19:58:32 +08:00
YuliangLiu0306
4851f2d607
[autoparallel] update_getattr_handler ( #2193 )
2022-12-26 21:57:39 +08:00
YuliangLiu0306
550f8f8905
[autoparallel] integrate_gpt_related_tests ( #2134 )
* [autoparallel] integrate_gpt_related_tests
* polish code
* polish code
* add GPT2Model into runtime test
2022-12-23 12:36:59 +08:00
Boyuan Yao
cfe2a9bd90
[autoparallel] memory estimation for shape consistency ( #2144 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
* [autoparallel] add pooling metainfo
* [autoparallel] add F.linear metainfo generator
* [autoparallel] add binary elementwise metainfo
* [fx] recover profiler
* [autoparallel] fix forward memory calculation
* [autoparallel] modify constants.py
* [autoparallel] remove redundant print
* [autoparallel] add F.conv metainfo
* [autoparallel] linear fix
* [autoparallel] memory estimation for communication actions
* [autoparallel] fix docstring
* [autoparallel] fix variables name
2022-12-21 10:39:37 +08:00
YuliangLiu0306
1cce6e36ca
[autoparallel] use metainfo in handler ( #2149 )
2022-12-20 10:31:22 +08:00
YuliangLiu0306
a3c6924deb
[autoparallel] process size nodes in runtime pass ( #2130 )
* [autoparallel] process size nodes in runtime pass
* polish code
2022-12-14 16:10:50 +08:00
YuliangLiu0306
536560ccc0
[autoparallel] implement softmax handler ( #2132 )
2022-12-14 16:09:53 +08:00
YuliangLiu0306
cd0af9f7f6
[autoparallel] gpt2lp runtime test ( #2113 )
2022-12-12 18:06:40 +08:00
YuliangLiu0306
d3d4630495
[autoparallel] add sum handler ( #2101 )
2022-12-08 17:02:54 +08:00
YuliangLiu0306
3af7e65dea
[autoparallel] complete gpt related module search ( #2097 )
2022-12-08 10:04:09 +08:00
YuliangLiu0306
7f72eb0510
[autoparallel] add embedding handler ( #2089 )
* [autoparallel] add embedding handler
* fix bugs
2022-12-07 09:41:46 +08:00
YuliangLiu0306
0e9db368ef
[autoparallel] add tensor constructor handler ( #2082 )
2022-12-06 10:20:10 +08:00
YuliangLiu0306
cdf537a648
[autoparallel] add non_split linear strategy ( #2078 )
* [autoparallel] add non_split linear strategy
* polish
2022-12-06 10:19:33 +08:00
Boyuan Yao
cf0268da93
[autoparallel] Add F.conv metainfo ( #2069 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
* [autoparallel] add pooling metainfo
* [autoparallel] add F.linear metainfo generator
* [autoparallel] add binary elementwise metainfo
* [fx] recover profiler
* [autoparallel] fix forward memory calculation
* [autoparallel] modify constants.py
* [autoparallel] remove redundant print
* [autoparallel] add F.conv metainfo
* [autoparallel] linear fix
2022-12-06 10:17:57 +08:00
YuliangLiu0306
f123476666
[autoparallel] complete gpt block searching ( #2065 )
* [autoparallel] complete gpt block searching
* fix test
2022-12-06 10:17:10 +08:00
Boyuan Yao
616da17fab
[autoparallel] add binary elementwise metainfo for auto parallel ( #2058 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
* [autoparallel] add pooling metainfo
* [autoparallel] add F.linear metainfo generator
* [autoparallel] add binary elementwise metainfo
* [fx] recover profiler
* [autoparallel] fix forward memory calculation
* [autoparallel] modify constants.py
* [autoparallel] remove redundant print
2022-12-04 15:18:51 +08:00
Boyuan Yao
4b40fbd743
[autoparallel] fix forward memory calculation ( #2062 )
2022-12-04 15:00:16 +08:00
YuliangLiu0306
e4293e5077
[hotfix] update test for latest version ( #2060 )
2022-12-02 18:12:30 +08:00
YuliangLiu0306
1c1fe44305
[autoparallel] adapt solver with self attention ( #2037 )
* [autoparallel] adapt solver with self attention
* polish code
2022-12-01 17:53:15 +08:00
YuliangLiu0306
0dbcd4a6f5
[autoparallel] add split handler ( #2032 )
* [autoparallel] add split handler
* add numerical test and runtime passes
2022-11-29 11:03:51 +08:00
YuliangLiu0306
81330b0352
[autoparallel] add experimental permute handler ( #2029 )
2022-11-27 20:26:52 +08:00
YuliangLiu0306
ea0f6b8df9
[autoparallel] add runtime pass and numerical test for view handler ( #2018 )
2022-11-25 15:50:16 +08:00
YuliangLiu0306
1438993113
[autoparallel] add experimental view handler ( #2011 )
* [autoparallel] add experimental view handler
* polish
* polish
* polish code
* rename variables
2022-11-24 11:34:41 +08:00
Boyuan Yao
6cd784ffee
[autoparallel] Add metainfo support for F.linear ( #1987 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
* [autoparallel] add pooling metainfo
* [autoparallel] add F.linear metainfo generator
2022-11-23 14:12:34 +08:00
YuliangLiu0306
155891113e
[autoparallel] use pytree map style to process data ( #1989 )
2022-11-21 10:44:22 +08:00
YuliangLiu0306
35e6b9ec82
[autoparallel] adapt handlers with attention block ( #1990 )
* [autoparallel] adapt handlers with attention block
* polish
2022-11-21 10:44:11 +08:00
YuliangLiu0306
05020e50d0
[autoparallel] support more flexible data type ( #1967 )
2022-11-18 17:01:06 +08:00
Boyuan Yao
c26f21d365
[autoparallel] add pooling metainfo ( #1968 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
* [autoparallel] add pooling metainfo
2022-11-18 15:13:03 +08:00
YuliangLiu0306
0da1d00399
[autoparallel] support distributed dataloader option ( #1906 )
* [autoparallel] support distributed dataloader option
* update output handler to support ddp dataloader
* polish code
2022-11-17 20:11:53 +08:00
Boyuan Yao
7c7921f71b
[autoparallel] add torch.nn.ReLU metainfo ( #1868 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
2022-11-16 23:12:31 +08:00
YuliangLiu0306
fea3cb661c
[autoparallel] support addmm in tracer and solver ( #1961 )
* [fx] patch addmm
* [autoparallel] support addmm in tracer and solver
2022-11-16 14:59:18 +08:00
YuliangLiu0306
36c0f3ea5b
[autoparallel] remove redundant comm node ( #1893 )
2022-11-15 10:53:41 +08:00
Super Daniel
cc55ff0aa4
[autoparallel] user-friendly API for CheckpointSolver. ( #1879 )
Merge for SC tutorial
2022-11-10 20:59:28 +08:00
YuliangLiu0306
1b494ad73c
[autoparallel] fix linear logical convert issue ( #1857 )
2022-11-10 17:19:22 +08:00
HELSON
72c9448920
[NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/operator_handler.py code style ( #1845 )
2022-11-09 12:08:47 +08:00
Sze-qq
95ac4f88ea
[NFC] polish colossalai/auto_parallel/tensor_shard/deprecated/op_handler/conv_handler.py code style ( #1829 )
Co-authored-by: siqi <siqi@siqis-MacBook-Pro.local>
2022-11-09 12:08:47 +08:00
binmakeswell
3c3714fc2a
[NFC] polish strategies_constructor.py code style ( #1806 )
2022-11-09 12:08:47 +08:00
YuliangLiu0306
49216d7ab1
[autoparallel] fix bugs caused by negative dim key ( #1808 )
* [autoparallel] fix bugs caused by negative dim key
* fix import error
* fix matmul test issue
* fix unit test issue
2022-11-08 17:03:50 +08:00
YuliangLiu0306
f6032ddb17
[autoparallel] fix bias addition module ( #1800 )
2022-11-08 16:21:25 +08:00
Boyuan Yao
629172b319
[autoparallel] add batch norm metainfo ( #1815 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
2022-11-08 15:05:26 +08:00
Boyuan Yao
327d07c44a
[autoparallel] add conv metainfo class for auto parallel ( #1796 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
2022-11-07 16:15:35 +08:00
YuliangLiu0306
e34e850a4c
[autoparallel] add essential CommActions for broadcast operands ( #1793 )
2022-11-04 18:36:42 +08:00
Boyuan Yao
05ce3d369f
[fx] Add linear metainfo class for auto parallel ( #1783 )
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
2022-11-04 10:55:09 +08:00
Super Daniel
e8a9bebc87
[autoparallel] refactor and add rotorc. ( #1789 )
* [autoparallel] refactor and add rotorc.
* [autoparallel] refactor and add rotorc.
2022-11-03 12:32:51 +08:00
YuliangLiu0306
2c4c7b3618
[autoparallel] add getattr handler ( #1767 )
* [autoparallel] add getattr handler
* polish code
* add extra processes for Parameters
* add unit test for param resharding cost
* add docstring and polish test
2022-11-03 12:31:33 +08:00
Frank Lee
f3f19a5c47
[autoparallel] added matmul handler ( #1763 )
* [autoparallel] added matmul handler
* polish code
2022-11-01 15:14:53 +08:00
YuliangLiu0306
27de252334
[autoparallel] fix conv handler numerical test ( #1771 )
2022-11-01 10:43:44 +08:00
Super Daniel
1e88811c7a
[autoparallel] move ckpt solvers to autoparallel folder / refactor code ( #1764 )
* [autoparallel] first move.
* [autoparallel] add solver rotor.
* [autoparallel] add ckpt solvers.
* [autoparallel] modify codegen.
* [fx] fix annotation in test.
* [fx] remove check.
* [autoparallel] polish docstring.
* [fx] refactor MetaTensor.
2022-11-01 10:43:15 +08:00
YuliangLiu0306
b0f7c8bde8
[autoparallel] update CommSpec to CommActions ( #1768 )
* [autoparallel] update CommSpec to CommActions
* polish code
2022-10-28 09:57:43 +08:00
YuliangLiu0306
b4cc59b61e
[autoparallel] add numerical test for node strategies ( #1760 )
* [autoparallel] add numerical test for node strategies
* polish code
* polish code
2022-10-27 10:42:54 +08:00
YuliangLiu0306
314d8c497f
[autoparallel] refactor the runtime apply pass and add docstring to passes ( #1757 )
* [autoparallel] refactor the runtime apply pass and add docstring to passes
* fix unit test
* polish
2022-10-25 14:32:22 +08:00
Frank Lee
f9a613d660
[autoparallel] added binary elementwise node handler ( #1758 )
* [autoparallel] added binary elementwise node handler
* polish code
2022-10-25 14:32:01 +08:00
Frank Lee
262652c8bc
[autoparallel] added addbmm handler ( #1751 )
2022-10-21 18:55:48 +08:00
YuliangLiu0306
cdb7d5e7d2
[hotfix] autoparallel unit test ( #1752 )
2022-10-20 19:51:38 +08:00
YuliangLiu0306
a4ce180e85
[autoparallel] add sequential order to communication actions ( #1735 )
2022-10-20 18:48:18 +08:00
Frank Lee
474111ecb5
[autoparallel] fixed wrong sharding strategy in conv handler ( #1747 )
* [autoparallel] fixed wrong sharding strategy in conv handler
* polish code
2022-10-20 16:12:39 +08:00
Frank Lee
8b8937d901
[autoparallel] fixed wrong generated strategy for dot op ( #1746 )
* [autoparallel] fixed wrong generated strategy for dot op
* polish code
2022-10-20 15:18:16 +08:00
Frank Lee
993b8875b6
[autoparallel] handled illegal sharding strategy in shape consistency ( #1744 )
* [autoparallel] handled illegal sharding strategy in shape consistency
* polish code
2022-10-20 12:06:25 +08:00
Frank Lee
88a79814fb
[autoparallel] handled illegal strategy in node handler ( #1743 )
* [autoparallel] handled illegal strategy in node handler
* polish code
2022-10-19 17:08:52 +08:00
Frank Lee
eee84908d4
[autoparallel] handled illegal sharding strategy ( #1728 )
* [autoparallel] handled illegal sharding strategy
* polish code
2022-10-19 12:53:06 +08:00
YuliangLiu0306
d373e67b99
[hotfix] resharding cost issue ( #1742 )
2022-10-19 11:33:43 +08:00
YuliangLiu0306
845ff4a47a
[autoparallel] resnet block runtime apply ( #1709 )
* [autoparallel] resnet block runtime apply
* separate buffer and parameter in MemoryCost
* polish code
* add comments and todos
* fix test issue
2022-10-17 13:37:38 +08:00
Frank Lee
22a115406b
[autoparallel] fixed broken node handler tests ( #1708 )
2022-10-14 18:25:59 +08:00
Frank Lee
6c331a5a09
[autoparallel] refactored the autoparallel module for organization ( #1706 )
* [autoparallel] refactored the autoparallel module for organization
* polish code
2022-10-14 13:27:00 +08:00
YuliangLiu0306
451cd72dea
[autoparallel] adapt runtime passes ( #1703 )
* [autoparallel] adapt runtime passes v2
* polish code
2022-10-14 10:14:07 +08:00
Frank Lee
8283e95db3
[autoparallel] collated all deprecated files ( #1700 )
* [autoparallel] collated all deprecated files
* polish code
2022-10-13 18:24:11 +08:00
Frank Lee
e2355d01b9
[autoparallel] init new folder structure ( #1696 )
2022-10-13 14:18:55 +08:00
YuliangLiu0306
81f7530ee7
[autoparallel] adapt solver and CostGraph with new handler ( #1695 )
* [autoparallel] adapt solver and CostGraph with new handler
* fix test issue
2022-10-13 14:04:15 +08:00
YuliangLiu0306
42b882ef06
[autoparallel] add output handler and placeholder handler ( #1694 )
* [autoparallel] add output handler and placeholder handler
* Delete test_solver_with_resnet.py
* fix test bugs
2022-10-13 13:42:36 +08:00
YuliangLiu0306
56088e6d98
[autoparallel] add pooling handler ( #1690 )
* [autoparallel] add pooling handler
* polish code
2022-10-13 13:42:13 +08:00
YuliangLiu0306
319d654f79
[autoparallel] where_handler_v2 ( #1688 )
* where generator
* [autoparallel] where_handler_v2
2022-10-13 11:02:22 +08:00
Frank Lee
4973157ad7
[autoparallel] added sharding spec conversion for linear handler ( #1687 )
2022-10-12 11:16:18 +08:00
YuliangLiu0306
af718e83f2
[autoparallel] add reshape handler v2 and fix some previous bugs ( #1683 )
2022-10-11 18:12:59 +08:00
YuliangLiu0306
6878e42248
[hotfix] solver bug caused by dict type comm cost ( #1686 )
2022-10-11 17:57:03 +08:00
YuliangLiu0306
517b63939a
[autoparallel] add unary element wise handler v2 ( #1674 )
2022-10-09 17:30:42 +08:00
YuliangLiu0306
f6c6a932b8
[autoparallel] add following node generator ( #1673 )
* [autoparallel] add following node generator
* polish code
* polish code
* update name of arguments
2022-10-09 14:49:18 +08:00
YuliangLiu0306
52fda88796
[autoparallel] add layer norm handler v2 ( #1671 )
* [autoparallel] add layer norm handler v2
* polish code
* polish code
2022-10-09 14:23:22 +08:00
YuliangLiu0306
11ec070e53
[hotfix] unit test ( #1670 )
2022-09-29 12:49:28 +08:00
Frank Lee
a60024e77a
[autoparallel] added utils for broadcast operation ( #1665 )
* [autoparallel] added utils for broadcast operation
* polish code
2022-09-29 11:22:29 +08:00
YuliangLiu0306
3f068d1409
[autoparallel] update CommSpec ( #1667 )
2022-09-29 11:20:59 +08:00
Frank Lee
247a9dbca9
[autoparallel] added bias comm spec to matmul strategy ( #1664 )
2022-09-29 11:08:05 +08:00
YuliangLiu0306
746f8f979d
[autoparallel] add batch norm handler v2 ( #1666 )
2022-09-29 11:02:49 +08:00
YuliangLiu0306
c27e701cb2
[autoparallel] remove no strategy nodes ( #1652 )
* [autoparallel] remove no strategy nodes
* fix none object iteration issue
2022-09-29 10:43:25 +08:00
Frank Lee
50f16a2850
[autoparallel] added compute resharding costs for node handler ( #1662 )
2022-09-28 19:55:44 +08:00
Frank Lee
9ec401a722
[autoparallel] added new strategy constructor template ( #1661 )
* [autoparallel] added new strategy constructor template
* polish code
2022-09-28 14:01:36 +08:00
Frank Lee
3a4d6f63a8
[autoparallel] added node handler for bmm ( #1655 )
2022-09-28 11:32:16 +08:00
YuliangLiu0306
095854477f
[autoparallel] add conv handler v2 ( #1663 )
2022-09-28 11:24:59 +08:00
YuliangLiu0306
1e7816a460
[autoparallel] adapt solver with gpt ( #1653 )
2022-09-28 11:17:26 +08:00
Frank Lee
30e50c8b4a
[autoparallel] implemented all matmul strategy generator ( #1650 )
2022-09-27 12:06:25 +08:00
YuliangLiu0306
03978aad45
[autoparallel] change the following nodes strategies generation logic ( #1636 )
* [autoparallel] change the following nodes strategies generation logic
* fix unit test
2022-09-27 11:20:52 +08:00
YuliangLiu0306
59f100510a
[autoparallel] where handler ( #1651 )
* [autoparallel] where handler
* fix unit test
2022-09-27 11:20:43 +08:00
Frank Lee
45b39a692a
[autoparallel] implemented linear projection strategy generator ( #1639 )
2022-09-26 16:58:14 +08:00
YuliangLiu0306
b2b2a4af98
[autoparallel] adapt solver with mlp ( #1638 )
2022-09-26 15:26:14 +08:00
YuliangLiu0306
702dbc5288
[tensor] use communication autograd func ( #1617 )
* [tensor] use communication autograd func
* change all to all comm spec info
* rename pattern and distinguish fwd/bwd
* polish code
2022-09-23 13:31:15 +08:00
YuliangLiu0306
c7ac0f4ab2
[autoparallel] add elementwise handler ( #1622 )
* [autoparallel] add elementwise handler
* polish code
* polish code
* reduce skipped strategies range
* polish code
2022-09-23 13:27:31 +08:00
YuliangLiu0306
3a46215135
[autoparallel] add embedding handler ( #1620 )
2022-09-23 12:34:30 +08:00
YuliangLiu0306
69448f64c4
[autoparallel] protect bcast handler from invalid strategies ( #1631 )
2022-09-23 12:12:49 +08:00
YuliangLiu0306
0c703189b9
[autoparallel] add layernorm handler ( #1629 )
2022-09-23 12:00:25 +08:00
YuliangLiu0306
bf77d3ab65
[autoparallel] recover the merged node strategy index ( #1613 )
2022-09-23 11:52:42 +08:00
Frank Lee
d925122020
[autoparallel] added new linear module handler ( #1616 )
2022-09-21 12:23:21 +08:00