05023ecfee  [Tensor] TP Linear 1D row (#843)  (Ziyue Jiang, 2022-04-24 13:43:12 +08:00)

ea0a2ed25f  [hotfix] the bug of numel() in ColoTensor (#845)  (Jiarui Fang, 2022-04-24 12:32:10 +08:00)

8789850eea  Init Conext supports lazy allocate model memory (#842)  (Jiarui Fang, 2022-04-22 18:03:35 +08:00)

cb5a4778e1  Revert "[WIP] Applying ColoTensor on TP-1D-row Linear. (#831)" (#835)  (Jiarui Fang, 2022-04-22 14:45:57 +08:00)
            This reverts commit ac88de6dfc.

ac88de6dfc  [WIP] Applying ColoTensor on TP-1D-row Linear. (#831)  (Jiarui Fang, 2022-04-22 14:03:26 +08:00)
            * revert zero tensors back
            * [tensor] init row 1d linear

294a6060d0  [tensor] ZeRO use ColoTensor as the base class. (#828)  (Jiarui Fang, 2022-04-22 12:00:48 +08:00)
            * [refactor] moving InsertPostInitMethodToModuleSubClasses to utils.
            * [tensor] ZeRO use ColoTensor as the base class.
            * polish

8e6fdb4f29  [tensor]fix test_linear (#826)  (Ziyue Jiang, 2022-04-21 17:18:56 +08:00)

1a9e2c2dff  [tensor] fix kwargs in colo_tensor torch_funtion (#825)  (Ziyue Jiang, 2022-04-21 16:47:35 +08:00)

2ecc3d7a55  [tensor] lazy init (#823)  (Jiarui Fang, 2022-04-21 15:40:23 +08:00)

660d2d1f1b  [Tensor] apply ColoTensor on Torch functions (#821)  (Jiarui Fang, 2022-04-21 14:21:10 +08:00)
            * Revert "[zero] add ZeroTensorShardStrategy (#793)"
              This reverts commit 88759e289e.
            * [gemini] set cpu memory capacity
            * [log] local throughput collecting
            * polish
            * polish
            * polish
            * polish code
            * polish
            * polish code
            * add a new tensor structure and override linear for it
            * polish
            * polish
            * polish
            * polish
            * polish
            * polish
            * polish
            * polish
            * polish
            * polish
            * polish
            * [tensor] renaming and reorganize directory structure.
            * rm useless dir
            * polish
            * polish
            * [tensor] hander the function not wrapped

0ce8924ceb  [tensor] reorganize files (#820)  (Jiarui Fang, 2022-04-21 14:15:48 +08:00)