2022-05-20 20:19:58 +08:00 | ver217 | a3b66f6def | [tensor] refactor parallel action (#1007)
    * refactor parallel action
    * polish unit tests

2022-05-19 12:44:59 +08:00 | ver217 | ad536e308e | [tensor] refactor colo-tensor (#992)
    * refactor colo-tensor and update linear op
    * polish code
    * polish code
    * update ops and unit tests
    * update unit tests
    * polish code
    * rename dist_spec module
    * polish code
    * polish code
    * remove unneeded import
    * fix pipelinable

2022-05-18 14:54:51 +08:00 | Jiarui Fang | 802ac297cc | [Tensor] remove useless import in tensor dir (#997)

2022-05-16 14:58:08 +08:00 | ver217 | c2fdc6a011 | [tensor] derive compute pattern from dist spec (#971)
    * derive compute pattern from dist spec
    * polish code

2022-05-13 20:29:50 +08:00 | Ziyue Jiang | 797a9dc5a9 | add DistSpec for loss and test_model (#947)

2022-05-13 15:13:52 +08:00 | ver217 | 67c33f57eb | [tensor] design DistSpec and DistSpecManager for ColoTensor (#934)
    * add dist spec
    * update linear op
    * polish code
    * polish code
    * update embedding op
    * polish unit tests
    * polish unit tests
    * polish comments
    * polish code
    * add test_dist_spec_mgr
    * polish code
    * refactor folder structure
    * polish unit tests
    * add get_process_group() for TensorSpec
    * polish code
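Commit 67c33f57eb introduces a DistSpec/DistSpecManager pair. Colossal-AI's real API differs (it moves shards with collectives over process groups); purely as a hedged, stdlib-only sketch of the idea, a dist spec records how a logical tensor is partitioned across ranks, and a manager converts between a full tensor and its per-rank shards — all names below (`Placement`, `DistSpec`, `DistSpecManager`, `shard_rows`, `gather_rows`) are illustrative, not the library's:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Placement(Enum):
    REPLICATE = auto()   # every rank holds the full tensor
    SHARD_ROW = auto()   # rank i holds a contiguous slice of rows (1D-row)

@dataclass(frozen=True)
class DistSpec:
    placement: Placement
    num_ranks: int = 1

class DistSpecManager:
    """Toy converter between placements. A real manager would use
    scatter/all-gather collectives instead of slicing local lists."""

    @staticmethod
    def shard_rows(full, spec, rank):
        # Split the row dimension into num_ranks contiguous chunks.
        rows = len(full)
        chunk = (rows + spec.num_ranks - 1) // spec.num_ranks
        return full[rank * chunk:(rank + 1) * chunk]

    @staticmethod
    def gather_rows(shards):
        # Inverse conversion: reassemble the replicated tensor.
        out = []
        for s in shards:
            out.extend(s)
        return out

full = [[1, 2], [3, 4], [5, 6], [7, 8]]
spec = DistSpec(Placement.SHARD_ROW, num_ranks=2)
shards = [DistSpecManager.shard_rows(full, spec, r) for r in range(2)]
assert shards[0] == [[1, 2], [3, 4]]
assert DistSpecManager.gather_rows(shards) == full
```

The later "derive compute pattern from dist spec" commit (c2fdc6a011) follows naturally from such a design: once the spec says *how* a weight is laid out, an op can pick its parallel compute pattern from the spec instead of being told separately.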
2022-05-10 16:04:08 +08:00 | ver217 | 4ca732349e | [tensor] colo tensor overrides mul (#927)
    * colo tensor overrides mul
    * polish code
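The "overrides mul" and later "hijack addmm" commits rely on PyTorch's `__torch_function__` protocol, which lets a `torch.Tensor` subclass intercept `torch.*` calls made on it. A minimal sketch of the mechanism (this is the standard PyTorch pattern, not ColoTensor's actual implementation, and `WrappedTensor` is a hypothetical name):

```python
import torch

class WrappedTensor(torch.Tensor):
    # Any torch.* function that receives a WrappedTensor argument is
    # routed through this hook before (or instead of) the default op.
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        # Delegate to the default implementation, then make sure the
        # result stays in our subclass so later calls are hijacked too.
        out = super().__torch_function__(func, types, args, kwargs)
        if isinstance(out, torch.Tensor) and not isinstance(out, cls):
            out = out.as_subclass(cls)
        return out

x = torch.ones(2, 2).as_subclass(WrappedTensor)
y = torch.mul(x, 3)           # intercepted by __torch_function__
assert isinstance(y, WrappedTensor)
assert y.sum().item() == 12.0
```

A ColoTensor-style class would branch inside the hook — dispatching `torch.mul`, `torch.addmm`, `F.linear`, etc. to tensor-parallel implementations chosen from the tensor's spec — rather than simply delegating as this sketch does.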
2022-05-09 18:55:49 +08:00 | ver217 | 45b9124df4 | [tensor] hijack addmm for colo tensor (#923)
    * hijack addmm for colo tensor
    * fix bugs
    * polish unit test
    * polish comments

2022-05-09 16:11:47 +08:00 | Ziyue Jiang | c195d2814c | [Tensor] add from_pretrained support and bert pretrained test (#921)
    * add from_pretrained support and test
    * polish
    * polish
    * polish
    * polish

2022-05-07 17:10:37 +08:00 | Jiarui Fang | 845856ea29 | [Graph] building computing graph with ColoTensor, Linear only (#917)

2022-05-07 15:49:14 +08:00 | Ziyue Jiang | 75d221918a | [Tensor] add 1d vocab loss (#918)
    * add 1d vocab loss
    * polish

2022-05-06 12:57:14 +08:00 | Jiarui Fang | ab95ec9aea | [Tensor] init ColoParameter (#914)

2022-04-29 14:10:05 +08:00 | Ziyue Jiang | f593a5637e | [Tensor] add embedding tp1d row (#904)

2022-04-28 17:45:06 +08:00 | Ziyue Jiang | 2c0d19d755 | [Tensor] add ColoTensor TP1Dcol Embedding (#899)

2022-04-28 15:23:40 +08:00 | Jiarui Fang | d16671da75 | [Tensor] initialize the ColoOptimizer (#898)
    * [Tensor] activation is an attr of ColoTensor
    * [Tensor] add optimizer
    * only detach parameters in context
    * polish code

2022-04-28 14:43:22 +08:00 | Jiarui Fang | 676f191532 | [Tensor] activation is an attr of ColoTensor (#897)

2022-04-28 10:55:40 +08:00 | Ziyue Jiang | cb182da7c5 | [tensor] refine linear and add gather for layernorm (#893)
    * refine linear and add function to ColoTensor
    * add gather for layernorm
    * polish
    * polish

2022-04-27 15:28:59 +08:00 | Jiarui Fang | 26c49639d8 | [Tensor] overriding parameters() for Module using ColoTensor (#889)

2022-04-27 14:13:55 +08:00 | Ziyue Jiang | 1d0aba4153 | [tensor] add ColoTensor 1Dcol (#888)

2022-04-27 10:57:49 +08:00 | Jiarui Fang | 72cdc06875 | [Tensor] make ColoTensor more robust for getattr (#886)
    * [Tensor] make ColoTensor more robust for getattr
    * polish
    * polish
2022-04-26 20:13:56 +08:00 | Ziyue Jiang | 9bc5a77c31 | [tensor] wrap functions in torch_tensor to ColoTensor (#881)

2022-04-26 18:11:47 +08:00 | Jiarui Fang | 7f76517a85 | [Tensor] make a simple net work with 1D row TP (#879)

2022-04-26 15:10:47 +08:00 | Jiarui Fang | 909211453b | [Tensor] Add some attributes to ColoTensor (#877)
    * [Tensor] add some function to ColoTensor
    * torch.allclose
    * rm torch.add

2022-04-26 14:08:01 +08:00 | Jiarui Fang | e43f83aa5c | [Tensor] get named parameters for model using ColoTensors (#874)

2022-04-26 13:23:59 +08:00 | Jiarui Fang | 96211c2cc8 | [tensor] customized op returns ColoTensor (#875)
    * [tensor] customized op returns ColoTensor
    * polish
    * polish code

2022-04-26 10:15:26 +08:00 | Ziyue Jiang | 26d4ab8b03 | [Tensor] Add function to spec and update linear 1Drow and unit tests (#869)

2022-04-25 16:01:52 +08:00 | Jiarui Fang | 1190b2c4a4 | [tensor] add cross_entropy_loss (#868)

2022-04-25 14:24:26 +08:00 | Jiarui Fang | d01d3b8cb0 | colo init context add device attr. (#866)

2022-04-25 13:33:52 +08:00 | Jiarui Fang | 8af5f7423d | [tensor] an initial idea of tensor spec (#865)
    * an initial idea of tensor spec
    * polish
    * polish

2022-04-25 11:49:20 +08:00 | Jiarui Fang | 126ba573a8 | [Tensor] add layer norm Op (#852)

2022-04-25 10:06:53 +08:00 | Jiarui Fang | 29159d9b5b | hotfix tensor unittest bugs (#862)

2022-04-24 18:31:22 +08:00 | YuliangLiu0306 | c6930d8ddf | [pipelinable] use ColoTensor to replace dummy tensor. (#853)

2022-04-24 18:30:20 +08:00 | Ziyue Jiang | bcc8655021 | [Tensor] Add 1Drow weight reshard by spec (#854)
2022-04-24 16:43:44 +08:00 | Jiarui Fang | 62f059251b | [Tensor] init a tp network training unittest (#849)

2022-04-24 14:12:45 +08:00 | Ziyue Jiang | 2a0a427e04 | [tensor] add assert for colo_tensor 1Drow (#846)

2022-04-24 13:43:12 +08:00 | Ziyue Jiang | 05023ecfee | [Tensor] TP Linear 1D row (#843)

2022-04-24 12:32:10 +08:00 | Jiarui Fang | ea0a2ed25f | [hotfix] fix the numel() bug in ColoTensor (#845)

2022-04-22 18:03:35 +08:00 | Jiarui Fang | 8789850eea | Init Context supports lazy allocate model memory (#842)

2022-04-22 14:45:57 +08:00 | Jiarui Fang | cb5a4778e1 | Revert "[WIP] Applying ColoTensor on TP-1D-row Linear. (#831)" (#835)
    This reverts commit ac88de6dfc.

2022-04-22 14:03:26 +08:00 | Jiarui Fang | ac88de6dfc | [WIP] Applying ColoTensor on TP-1D-row Linear. (#831)
    * revert zero tensors back
    * [tensor] init row 1d linear

2022-04-22 12:00:48 +08:00 | Jiarui Fang | 294a6060d0 | [tensor] ZeRO uses ColoTensor as the base class. (#828)
    * [refactor] moving InsertPostInitMethodToModuleSubClasses to utils.
    * [tensor] ZeRO uses ColoTensor as the base class.
    * polish

2022-04-21 17:18:56 +08:00 | Ziyue Jiang | 8e6fdb4f29 | [tensor] fix test_linear (#826)

2022-04-21 16:47:35 +08:00 | Ziyue Jiang | 1a9e2c2dff | [tensor] fix kwargs in colo_tensor torch_function (#825)

2022-04-21 15:40:23 +08:00 | Jiarui Fang | 2ecc3d7a55 | [tensor] lazy init (#823)

2022-04-21 14:25:27 +08:00 | Jiarui Fang | 68dcd51d41 | [Tensor] update ColoTensor torch_function (#822)
    * Revert "[zero] add ZeroTensorShardStrategy (#793)" (this reverts commit 88759e289e)
    * [gemini] set cpu memory capacity
    * [log] local throughput collecting
    * polish
    * polish
    * polish
    * polish code
    * polish
    * polish code
    * add a new tensor structure and override linear for it
    * polish
    * polish
    * polish
    * polish
    * polish
    * polish
    * polish
    * polish
    * polish
    * polish
    * polish
    * [tensor] renaming and reorganize directory structure.
    * rm useless dir
    * polish
    * polish
    * [tensor] handle the functions not wrapped
    * polish

2022-04-21 14:15:48 +08:00 | Jiarui Fang | 0ce8924ceb | [tensor] reorganize files (#820)