Commit Graph

698 Commits (f99f56dff434a33a166a11358473517a7b2f4151)

Author SHA1 Message Date
ver217 f99f56dff4
fix colo parameter torch function (#1117) 2022-06-15 14:23:27 +08:00
Frank Lee e1620ddac2
[fx] added coloproxy (#1115) 2022-06-15 10:47:57 +08:00
Frank Lee 6f82ac9bcb
[pipeline] supported more flexible dataflow control for pipeline parallel training (#1108)
* [pipeline] supported more flexible dataflow control for pipeline parallel training

* polish code

* polish code

* polish code
2022-06-15 10:41:28 +08:00
Frank Lee 53297330c0
[test] fixed hybrid parallel test case on 8 GPUs (#1106) 2022-06-14 10:30:54 +08:00
github-actions[bot] 85b58093d2
Automated submodule synchronization (#1105)
Co-authored-by: github-actions <github-actions@github.com>
2022-06-14 09:53:30 +08:00
Frank Lee 74948b095c
[release] update version.txt (#1103) 2022-06-13 16:26:22 +08:00
ver217 895c1c5ee7
[tensor] refactor param op hook (#1097)
* refactor param op hook

* add docstr

* fix bug
2022-06-13 16:11:53 +08:00
YuliangLiu0306 1e9f9c227f
[hotfix] change to fit latest p2p (#1100)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* [hotfix] change to fit latest p2p

* polish

* polish
2022-06-13 14:57:25 +08:00
Frank Lee 72bd7c696b
[amp] included dict for type casting of model output (#1102) 2022-06-13 14:18:04 +08:00
Frank Lee 5a9d8ef4d5
[workflow] fixed 8-gpu test workflow (#1101) 2022-06-13 13:50:22 +08:00
Frank Lee 03e52ecba3
[workflow] added regular 8 GPU testing (#1099)
* [workflow] added regular 8 GPU testing

* polish workflow
2022-06-10 17:38:15 +08:00
Frank Lee 7f2d2b2b5b
[engine] fixed empty op hook check (#1096)
* [engine] fixed empty op hook check

* polish code
2022-06-10 17:27:27 +08:00
Frank Lee 14e5b11d7f
[zero] fixed api consistency (#1098) 2022-06-10 16:59:59 +08:00
Frank Lee cb18922c47
[doc] added documentation to chunk and chunk manager (#1094)
* [doc] added documentation to chunk and chunk manager

* polish code

* polish code

* polish code
2022-06-10 15:33:06 +08:00
ver217 1f894e033f
[gemini] zero supports gemini (#1093)
* add placement policy

* add gemini mgr

* update mem stats collector

* update zero

* update zero optim

* fix bugs

* zero optim monitor os

* polish unit test

* polish unit test

* add assert
2022-06-10 14:48:28 +08:00
Frank Lee 2b2dc1c86b
[pipeline] refactor the pipeline module (#1087)
* [pipeline] refactor the pipeline module

* polish code
2022-06-10 11:27:38 +08:00
Frank Lee bad5d4c0a1
[context] support lazy init of module (#1088)
* [context] support lazy init of module

* polish code
2022-06-10 10:09:48 +08:00
ver217 be01db37c8
[tensor] refactor chunk mgr and impl MemStatsCollectorV2 (#1077)
* polish chunk manager

* polish unit test

* impl add_extern_static_tensor for chunk mgr

* add mem stats collector v2

* polish code

* polish unit test

* polish code

* polish get chunks
2022-06-09 20:56:34 +08:00
Ziyue Jiang b3a03e4bfd
[Tensor] fix equal assert (#1091)
* fix equal assert

* polish
2022-06-09 17:36:15 +08:00
Frank Lee 50ec3a7e06
[test] skip tests when not enough GPUs are detected (#1090)
* [test] skip tests when not enough GPUs are detected

* polish code

* polish code
2022-06-09 17:19:13 +08:00
github-actions[bot] 3a7571b1d7
Automated submodule synchronization (#1081)
Co-authored-by: github-actions <github-actions@github.com>
2022-06-09 15:33:29 +08:00
Frank Lee 1bd8a72fc9
[workflow] disable p2p via shared memory on non-nvlink machine (#1086) 2022-06-09 15:24:35 +08:00
Frank Lee 65ee6dcc20
[test] ignore 8 gpu test (#1080)
* [test] ignore 8 gpu test

* polish code

* polish workflow

* polish workflow
2022-06-08 23:14:18 +08:00
Ziyue Jiang 0653c63eaa
[Tensor] 1d row embedding (#1075)
* Add CPU 1d row embedding

* polish
2022-06-08 12:04:59 +08:00
junxu d66ffb4df4
Remove duplicated registry (#1078) 2022-06-08 07:47:24 +08:00
Jiarui Fang bcab249565
fix issue #1080 (#1071) 2022-06-07 17:21:11 +08:00
ver217 1b17859328
[tensor] chunk manager monitor mem usage (#1076) 2022-06-07 15:00:00 +08:00
ver217 98cdbf49c6
[hotfix] fix chunk comm src rank (#1072) 2022-06-07 11:54:56 +08:00
Frank Lee bfdc5ccb7b
[context] maintain the context object in with statement (#1073) 2022-06-07 10:48:45 +08:00
ver217 c5cd3b0f35
[zero] zero optim copy chunk rather than copy tensor (#1070) 2022-06-07 10:30:46 +08:00
Ziyue Jiang 4fc748f69b
[Tensor] fix optimizer for CPU parallel (#1069) 2022-06-06 17:36:11 +08:00
Jiarui Fang 49832b2344
[refactor] add nn.parallel module (#1068) 2022-06-06 15:34:41 +08:00
Ziyue Jiang 6754f1b77f
fix module utils bug (#1066) 2022-06-06 12:11:48 +08:00
Jiarui Fang a00644079e
reorganize colotensor directory (#1062)
* reorganize colotensor directory

* polish code
2022-06-03 18:04:22 +08:00
Frank Lee 3d10be33bd
[cudnn] set False to cudnn benchmark by default (#1063) 2022-06-03 17:58:06 +08:00
Ziyue Jiang df9dcbbff6
[Tensor] add hybrid device demo and fix bugs (#1059) 2022-06-03 12:09:49 +08:00
YuliangLiu0306 b167258b6a
[pipeline] refactor ppschedule to support tensor list (#1050)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* refactor ppschedule to support tensor list

* polish
2022-06-02 13:48:59 +08:00
ver217 e3fde4ee6b
fix import error in sharded model v2 (#1053) 2022-06-02 13:48:22 +08:00
ver217 e1922ea4f6
[zero] add chunk size search for chunk manager (#1052) 2022-06-02 13:20:20 +08:00
アマデウス 2c42b230f3
updated collective ops api (#1054) 2022-06-02 12:52:27 +08:00
ver217 51b9a49655
[zero] add zero optimizer for ColoTensor (#1046)
* add zero optimizer

* torch ok

* unit test ok

* polish code

* fix bugs

* polish unit test

* polish zero optim

* polish colo ddp v2

* refactor folder structure

* add comment

* polish unit test

* polish zero optim

* polish unit test
2022-06-02 12:13:15 +08:00
github-actions[bot] e32470b6de
Automated submodule synchronization (#1049)
Co-authored-by: github-actions <github-actions@github.com>
2022-06-01 11:04:32 +08:00
Frank Lee 0ccb8c6141
[release] update version.txt (#1048) 2022-05-31 22:14:14 +08:00
binmakeswell 626dd187e4
add inference submodule (#1047) 2022-05-31 19:57:39 +08:00
ver217 7faef93326
fix dist spec mgr (#1045) 2022-05-31 12:14:39 +08:00
ver217 9492a561c3
[tensor] ColoTensor supports ZeRo (#1015)
* impl chunk manager

* impl param op hook

* add reduce_chunk

* add zero hook v2

* add zero dp

* fix TensorInfo

* impl load balancing when using zero without chunk

* fix zero hook

* polish chunk

* fix bugs

* ddp ok

* zero ok

* polish code

* fix bugs about load balancing

* polish code

* polish code

* add end-to-end test

* polish code

* polish code

* polish code

* fix typo

* add test_chunk

* fix bugs

* fix bugs

* polish code
2022-05-31 12:00:12 +08:00
Frank Lee cfa6c1b46b
[ci] fixed nightly build workflow (#1040) 2022-05-31 10:43:18 +08:00
YuliangLiu0306 9feff0f760
[titans] remove model zoo (#1042)
* [CLI] add CLI launcher

* Revert "[CLI] add CLI launcher"

This reverts commit df7e6506d4.

* rm model zoo
2022-05-31 10:40:47 +08:00
binmakeswell 0dac86866b
[NFC] add inference (#1044) 2022-05-30 23:06:49 +08:00
Ziyue Jiang 7c530b9de2
[Tensor] add Parameter inheritance for ColoParameter (#1041)
* add Parameter inheritance for ColoParameter

* remove tricks

* remove tricks

* polish

* polish
2022-05-30 17:23:44 +08:00