Frank Lee | f3f19a5c47 | 2022-11-01 15:14:53 +08:00 | [autoparallel] added matmul handler (#1763)
  * [autoparallel] added matmul handler
  * polish code

YuliangLiu0306 | b0f7c8bde8 | 2022-10-28 09:57:43 +08:00 | [autoparallel] update CommSpec to CommActions (#1768)
  * [autoparallel] update CommSpec to CommActions
  * polish code

YuliangLiu0306 | b4cc59b61e | 2022-10-27 10:42:54 +08:00 | [autoparallel] add numerical test for node strategies (#1760)
  * [autoparallel] add numerical test for node strategies
  * polish code
  * polish code

YuliangLiu0306 | 980ed21723 | 2022-10-21 15:45:13 +08:00 | [autoparallel] shard param and buffer as expected (#1753)
  * [autoparallel] shard param and buffer as expected
  * fix unit test issue

YuliangLiu0306 | a4ce180e85 | 2022-10-20 18:48:18 +08:00 | [autoparallel] add sequential order to communication actions (#1735)

Frank Lee | 993b8875b6 | 2022-10-20 12:06:25 +08:00 | [autoparallel] handled illegal sharding strategy in shape consistency (#1744)
  * [autoparallel] handled illegal sharding strategy in shape consistency
  * polish code

Frank Lee | eee84908d4 | 2022-10-19 12:53:06 +08:00 | [autoparallel] handled illegal sharding strategy (#1728)
  * [autoparallel] handled illegal sharding strategy
  * polish code

YuliangLiu0306 | 51b89d2202 | 2022-10-18 10:44:58 +08:00 | [autoparallel] runtime_backward_apply (#1720)

Frank Lee | 4973157ad7 | 2022-10-12 11:16:18 +08:00 | [autoparallel] added sharding spec conversion for linear handler (#1687)

YuliangLiu0306 | 3f068d1409 | 2022-09-29 11:20:59 +08:00 | [autoparallel] update CommSpec (#1667)

Frank Lee | 154d3ef432 | 2022-09-26 16:39:37 +08:00 | [fix] fixed the collective pattern name for consistency (#1649)
  * [fix] fixed the collective pattern name for consistency
  * polish code

YuliangLiu0306 | 702dbc5288 | 2022-09-23 13:31:15 +08:00 | [tensor] use communication autograd func (#1617)
  * [tensor] use communication autograd func
  * change all to all comm spec info
  * rename pattern and distinguish fwd/bwd
  * polish code

Frank Lee | 27fe8af60c | 2022-09-13 18:30:18 +08:00 | [autoparallel] refactored shape consistency to remove redundancy (#1591)
  * [autoparallel] refactored shape consistency to remove redundancy
  * polish code
  * polish code
  * polish code

YuliangLiu0306 | 44c866a3e3 | 2022-09-07 11:18:19 +08:00 | [autoparallel] change the merge node logic (#1533)

YuliangLiu0306 | 4b03c25f85 | 2022-08-25 16:48:12 +08:00 | [tensor]add 1D device mesh (#1492)

YuliangLiu0306 | 26a37b5cd5 | 2022-08-19 14:57:23 +08:00 | [autoparallel] Add conv handler to generate strategies and costs info for conv (#1467)

Jiarui Fang | 1b491ad7de | 2022-08-19 13:41:57 +08:00 | [doc] update docstring in ProcessGroup (#1468)

YuliangLiu0306 | b73fb7a077 | 2022-08-19 13:39:51 +08:00 | [tensor] support runtime ShardingSpec apply (#1453)
  * [tensor] support runtime ShardingSpec apply
  * polish code
  * polish code

Jiarui Fang | 36824a304c | 2022-08-16 10:38:41 +08:00 | [Doc] add more doc for ColoTensor. (#1458)

Jiarui Fang | a1476ea882 | 2022-08-16 09:21:05 +08:00 | [NFC] polish doc style for ColoTensor (#1457)

YuliangLiu0306 | 0f3042363c | 2022-08-12 14:02:32 +08:00 | [tensor] shape consistency generate transform path and communication cost (#1435)
  * [tensor] shape consistency output transform path and communication cost
  * polish code

Frank Lee | ae1b58cd16 | 2022-08-12 11:33:09 +08:00 | [tensor] added linear implementation for the new sharding spec (#1416)
  * [tensor] added linear implementation for the new sharding spec
  * polish code

YuliangLiu0306 | 33f0744d51 | 2022-08-10 11:29:17 +08:00 | [tensor] add shape consistency feature to support auto spec transform (#1418)
  * [tensor] add shape consistency feature to support auto sharding spec transform.
  * [tensor] remove unused argument in simulator, add doc string for target pair.

YuliangLiu0306 | 7c96055c68 | 2022-08-08 11:15:57 +08:00 | [tensor]build sharding spec to replace distspec in future. (#1405)

HELSON | c7221cb2d4 | 2022-07-29 19:33:24 +08:00 | [hotfix] adapt ProcessGroup and Optimizer to ColoTensor (#1388)

ver217 | 828b9e5e0d | 2022-07-28 17:19:39 +08:00 | [hotfix] fix zero optim save/load state dict (#1381)

HELSON | 943a96323e | 2022-07-26 10:53:53 +08:00 | [hotfix] fix no optimizer in save/load (#1363)

ver217 | d068af81a3 | 2022-07-21 15:54:53 +08:00 | [doc] update rst and docstring (#1351)
  * update rst
  * add zero docstr
  * fix docstr
  * remove fx.tracer.meta_patch
  * fix docstr
  * fix docstr
  * update fx rst
  * fix fx docstr
  * remove useless rst

HELSON | 7a8702c06d | 2022-07-21 10:53:15 +08:00 | [colotensor] add Tensor.view op and its unit test (#1343)
  * [colotensor] add megatron initialization for gpt2

HELSON | f92c100ddd | 2022-07-19 14:15:28 +08:00 | [checkpoint] use gather_tensor in checkpoint and update its unit test (#1339)

ver217 | 0c51ff2c13 | 2022-07-18 14:14:52 +08:00 | [hotfix] ZeroDDP use new process group (#1333)
  * process group supports getting ranks in group
  * chunk mgr receives a process group
  * update unit test
  * fix unit tests

HELSON | d49708ae43 | 2022-07-15 18:19:52 +08:00 | [hotfix] fix ddp for unit test test_gpt2 (#1326)

HELSON | 1b41686461 | 2022-07-15 14:02:32 +08:00 | [hotfix] fix unit test test_module_spec (#1321)

Jiarui Fang | 85f933b58b | 2022-07-14 16:57:48 +08:00 | [Optimizer] Remove useless ColoOptimizer (#1312)

Jiarui Fang | 9f10524313 | 2022-07-14 16:37:33 +08:00 | [Optimizer] polish the init method of ColoOptimizer (#1310)

HELSON | 260a55804a | 2022-07-13 23:06:12 +08:00 | [hotfix] fix shape error in backward when using ColoTensor (#1298)

Jiarui Fang | 556b9b7e1a | 2022-07-13 00:18:56 +08:00 | [hotfix] Dist Mgr gather torch version (#1284)
  * make it faster
  * [hotfix] torchvision fx tests
  * [hotfix] rename duplicated test_gpt.py
  * [hotfix] dist mgr torch version

ver217 | 7aadcbd070 | 2022-07-12 20:46:31 +08:00 | hotfix colotensor _scan_for_pg_from_args (#1276)

Jiarui Fang | c92f84fcdb | 2022-07-12 15:51:06 +08:00 | [tensor] distributed checkpointing for parameters (#1240)

Jiarui Fang | 1aad903c15 | 2022-07-12 10:24:05 +08:00 | [tensor] redistribute among different process groups (#1247)
  * make it faster
  * [tensor] rename convert_to_dist -> redistribute
  * [tensor] ShardSpec and ReplicaSpec
  * [tensor] redistribute among diff pgs
  * polish code

Jiarui Fang | 9bcd2fd4af | 2022-07-11 15:51:48 +08:00 | [tensor] a shorter shard and replicate spec (#1245)

Jiarui Fang | 2699dfbbfd | 2022-07-11 13:05:44 +08:00 | [rename] convert_to_dist -> redistribute (#1243)

HELSON | f6add9b720 | 2022-07-11 11:41:29 +08:00 | [tensor] redirect .data.__get__ to a tensor instance (#1239)

Jiarui Fang | 20da6e48c8 | 2022-07-08 16:33:13 +08:00 | [checkpoint] save sharded optimizer states (#1237)

Jiarui Fang | 4a76084dc9 | 2022-07-08 14:55:27 +08:00 | [tensor] add zero_like colo op, important for Optimizer (#1236)

Jiarui Fang | 3b500984b1 | 2022-07-08 14:18:30 +08:00 | [tensor] fix some unittests (#1234)

HELSON | f071b500b6 | 2022-07-08 13:25:57 +08:00 | [polish] polish __repr__ for ColoTensor, DistSpec, ProcessGroup (#1235)

Yi Zhao | 04537bf83e | 2022-07-07 18:16:38 +08:00 | [checkpoint]support generalized scheduler (#1222)

Jiarui Fang | a98319f023 | 2022-07-07 18:09:18 +08:00 | [tensor] torch function return colotensor (#1229)

HELSON | 280a81243d | 2022-07-07 13:55:24 +08:00 | [tensor] improve robustness of class 'ProcessGroup' (#1223)