2022-11-08 17:03:50 +08:00 | YuliangLiu0306 | 49216d7ab1
[autoparallel] fix bugs caused by negative dim key (#1808)
* [autoparallel] fix bugs caused by negative dim key
* fix import error
* fix matmul test issue
* fix unit test issue
2022-11-01 15:14:53 +08:00 | Frank Lee | f3f19a5c47
[autoparallel] added matmul handler (#1763)
* [autoparallel] added matmul handler
* polish code
2022-10-27 10:42:54 +08:00 | YuliangLiu0306 | b4cc59b61e
[autoparallel] add numerical test for node strategies (#1760)
* [autoparallel] add numerical test for node strategies
* polish code
2022-10-19 12:53:06 +08:00 | Frank Lee | eee84908d4
[autoparallel] handled illegal sharding strategy (#1728)
* [autoparallel] handled illegal sharding strategy
* polish code
2022-10-12 11:16:18 +08:00 | Frank Lee | 4973157ad7
[autoparallel] added sharding spec conversion for linear handler (#1687)
2022-08-19 14:57:23 +08:00 | YuliangLiu0306 | 26a37b5cd5
[autoparallel] Add conv handler to generate strategies and costs info for conv (#1467)
2022-08-12 14:02:32 +08:00 | YuliangLiu0306 | 0f3042363c
[tensor] shape consistency generate transform path and communication cost (#1435)
* [tensor] shape consistency output transform path and communication cost
* polish code
2022-08-12 11:33:09 +08:00 | Frank Lee | ae1b58cd16
[tensor] added linear implementation for the new sharding spec (#1416)
* [tensor] added linear implementation for the new sharding spec
* polish code
2022-08-10 11:29:17 +08:00 | YuliangLiu0306 | 33f0744d51
[tensor] add shape consistency feature to support auto spec transform (#1418)
* [tensor] add shape consistency feature to support auto sharding spec transform
* [tensor] remove unused argument in simulator, add doc string for target pair
2022-08-08 11:15:57 +08:00 | YuliangLiu0306 | 7c96055c68
[tensor] build sharding spec to replace distspec in future (#1405)