Jiarui Fang | 1aad903c15 | 2022-07-12 10:24:05 +08:00
[tensor] redistribute among different process groups (#1247)
* make it faster
* [tensor] rename convert_to_dist -> redistribute
* [tensor] ShardSpec and ReplicaSpec
* [tensor] redistribute among diff pgs
* polish code

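Commit 1aad903c15 lets a tensor be redistributed between layouts (ShardSpec vs. ReplicaSpec) and across process groups. As a hedged, single-process sketch of the idea only (the `shard` and `redistribute_to_replica` helpers below are hypothetical illustrations, not ColossalAI's API; the real operation moves data with collectives over `torch.distributed` process groups):

```python
import torch

# Illustrative sketch only: "shard" and "replicate" layouts mimicked on a
# single process with plain tensor ops, no distributed runtime required.
def shard(full: torch.Tensor, world_size: int, dim: int = 0):
    """Split a tensor into world_size shards along dim (a ShardSpec-like layout)."""
    return list(torch.chunk(full, world_size, dim=dim))

def redistribute_to_replica(shards, dim: int = 0):
    """Gather all shards back into one full tensor (a ReplicaSpec-like layout)."""
    return torch.cat(shards, dim=dim)

full = torch.arange(8.0)
shards = shard(full, world_size=4)
print([s.tolist() for s in shards])   # [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0], [6.0, 7.0]]
print(torch.equal(redistribute_to_replica(shards), full))  # True
```

In the distributed setting each rank holds one shard, and redistribution is an all-gather (shard -> replica) or a narrow/slice (replica -> shard) on the target process group.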
Jiarui Fang | 9bcd2fd4af | 2022-07-11 15:51:48 +08:00
[tensor] a shorter shard and replicate spec (#1245)

Jiarui Fang | a98319f023 | 2022-07-07 18:09:18 +08:00
[tensor] torch function return colotensor (#1229)

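Commit a98319f023 makes torch functions applied to a ColoTensor return a ColoTensor again. The standard PyTorch mechanism for this is the `__torch_function__` protocol; the following is a minimal sketch with a hypothetical `MyTensor` subclass, not ColossalAI's actual implementation:

```python
import torch

class MyTensor(torch.Tensor):
    """Toy tensor subclass: torch functions applied to it return MyTensor again."""

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:   # PyTorch passes None, not {}, when no kwargs are given
            kwargs = {}
        # Let the default implementation run func, then re-wrap any plain result.
        out = super().__torch_function__(func, types, args, kwargs)
        if isinstance(out, torch.Tensor) and not isinstance(out, MyTensor):
            out = out.as_subclass(MyTensor)
        return out

x = torch.ones(2, 2).as_subclass(MyTensor)
y = torch.add(x, x)           # dispatches through MyTensor.__torch_function__
print(type(y).__name__)       # MyTensor
```

Because every torch function routes through the override, the subclass (and, in ColoTensor's case, its distribution metadata) survives arbitrary chains of torch ops.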
Jiarui Fang | ae7d3f4927 | 2022-07-06 16:15:16 +08:00
[refactor] move process group from _DistSpec to ColoTensor. (#1203)

Jiarui Fang | 060b917daf | 2022-07-04 18:54:37 +08:00
[refactor] remove gpc dependency in colotensor's _ops (#1189)

Jiarui Fang | 4b9bba8116 | 2022-06-24 13:08:54 +08:00
[ColoTensor] rename APIs and add output_replicate to ComputeSpec (#1168)

ver217 | ae86151968 | 2022-06-22 15:16:47 +08:00
[tensor] add more element-wise ops (#1155)
* add more element-wise ops
* update test_op
* polish unit test

ver217 | ad536e308e | 2022-05-19 12:44:59 +08:00
[tensor] refactor colo-tensor (#992)
* refactor colo-tensor and update linear op
* polish code
* polish code
* update ops and unit tests
* update unit tests
* polish code
* rename dist_spec module
* polish code
* polish code
* remove unneeded import
* fix pipelinable

Ziyue Jiang | c195d2814c | 2022-05-09 16:11:47 +08:00
[Tensor] add from_pretrained support and bert pretrained test (#921)
* add from_pretrained support and test
* polish
* polish
* polish
* polish

Jiarui Fang | 72cdc06875 | 2022-04-27 10:57:49 +08:00
[Tensor] make ColoTensor more robust for getattr (#886)
* [Tensor] make ColoTensor more robust for getattr
* polish
* polish

Ziyue Jiang | 9bc5a77c31 | 2022-04-26 20:13:56 +08:00
[tensor] wrap function in the torch_tensor to ColoTensor (#881)

Jiarui Fang | 909211453b | 2022-04-26 15:10:47 +08:00
[Tensor] Add some attributes to ColoTensor (#877)
* [Tensor] add some function to ColoTensor
* torch.allclose
* rm torch.add

Jiarui Fang | 96211c2cc8 | 2022-04-26 13:23:59 +08:00
[tensor] customized op returns ColoTensor (#875)
* [tensor] customized op returns ColoTensor
* polish
* polish code

Jiarui Fang | 126ba573a8 | 2022-04-25 11:49:20 +08:00
[Tensor] add layer norm Op (#852)

Jiarui Fang | ea0a2ed25f | 2022-04-24 12:32:10 +08:00
[hotfix] the bug of numel() in ColoTensor (#845)

Jiarui Fang | 294a6060d0 | 2022-04-22 12:00:48 +08:00
[tensor] ZeRO use ColoTensor as the base class. (#828)
* [refactor] moving InsertPostInitMethodToModuleSubClasses to utils.
* [tensor] ZeRO use ColoTensor as the base class.
* polish

Ziyue Jiang | 8e6fdb4f29 | 2022-04-21 17:18:56 +08:00
[tensor] fix test_linear (#826)

Ziyue Jiang | 1a9e2c2dff | 2022-04-21 16:47:35 +08:00
[tensor] fix kwargs in colo_tensor torch_function (#825)

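Commit 1a9e2c2dff fixes kwargs handling in `__torch_function__`. PyTorch passes `kwargs=None` rather than an empty dict when a function is called without keyword arguments, so an override that forwards `**kwargs` unguarded raises `TypeError`. A minimal sketch of the guard (the `SafeTensor` class is a hypothetical stand-in, not ColoTensor itself):

```python
import torch

class SafeTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        # Guard first: kwargs is None (not {}) for calls like torch.sum(x),
        # and `func(*args, **None)` would raise TypeError.
        if kwargs is None:
            kwargs = {}
        with torch._C.DisableTorchFunction():   # avoid re-entering this override
            return func(*args, **kwargs)

x = torch.ones(3).as_subclass(SafeTensor)
print(torch.sum(x).item())   # 3.0
```

Omitting the `None` check is a common bug in `__torch_function__` overrides because most call sites never exercise the keyword-argument path.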
Jiarui Fang | 2ecc3d7a55 | 2022-04-21 15:40:23 +08:00
[tensor] lazy init (#823)

Jiarui Fang | 660d2d1f1b | 2022-04-21 14:21:10 +08:00
[Tensor] apply ColoTensor on Torch functions (#821)
* Revert "[zero] add ZeroTensorShardStrategy (#793)"
  This reverts commit 88759e289e.
* [gemini] set cpu memory capacity
* [log] local throughput collecting
* polish
* polish
* polish
* polish code
* polish
* polish code
* add a new tensor structure and override linear for it
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* polish
* [tensor] renaming and reorganize directory structure.
* rm useless dir
* polish
* polish
* [tensor] handle the function not wrapped

Jiarui Fang | 0ce8924ceb | 2022-04-21 14:15:48 +08:00
[tensor] reorganize files (#820)