Jiarui Fang
4f21c9e8d9
[Gemini] polish runtime tracer tests ( #2077 )
2022-12-05 16:22:49 +08:00
YuliangLiu0306
677e1e20d4
[device] update flatten device mesh usage ( #2079 )
2022-12-05 16:16:07 +08:00
Jiarui Fang
a7adad9ccb
[Gemini] rename hooks related to runtime mem tracer ( #2076 )
2022-12-05 15:00:03 +08:00
Jiarui Fang
40b7d55bf3
[Gemini] add albert in test models. ( #2075 )
2022-12-05 14:09:34 +08:00
Jiarui Fang
616ed91ecd
[test] bert test in non-distributed way ( #2074 )
2022-12-05 13:32:16 +08:00
Jiarui Fang
223332ff7e
[Gemini] rename ParamTracerWrapper -> RuntimeMemTracer ( #2073 )
2022-12-05 12:45:11 +08:00
Jiarui Fang
9f828ef36f
[Gemini] remove not used MemtracerWrapper ( #2072 )
2022-12-05 11:57:59 +08:00
Boyuan Yao
616da17fab
[autoparallel] add binary elementwise metainfo for auto parallel ( #2058 )
...
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
* [autoparallel] add pooling metainfo
* [autoparallel] add F.linear metainfo generator
* [autoparallel] add binary elementwise metainfo
* [fx] recover profiler
* [autoparallel] fix forward memory calculation
* [autoparallel] modify constants.py
* [autoparallel] remove redundant print
2022-12-04 15:18:51 +08:00
Boyuan Yao
4b40fbd743
[autoparallel] fix forward memory calculation ( #2062 )
2022-12-04 15:00:16 +08:00
Ziyue Jiang
44ea461890
[Pipeline] Add Topo Class ( #2059 )
...
* use Topo class to rewrite DAG
* polish code
* polish code
* polish code
* add comment
* add else branch to unterminated if
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-12-02 18:13:20 +08:00
YuliangLiu0306
e4293e5077
[hotfix] update test for latest version ( #2060 )
2022-12-02 18:12:30 +08:00
YuliangLiu0306
19438ea0ef
[hotfix] skip gpt tracing test ( #2064 )
2022-12-02 16:48:28 +08:00
Zihao
38ea4ba1bd
[Gemini] fix grad unreleased issue and param recovery issue ( #2052 )
2022-12-02 16:04:19 +08:00
YuliangLiu0306
edf4cd46c5
[examples] update autoparallel demo ( #2061 )
2022-12-01 18:50:58 +08:00
YuliangLiu0306
1c1fe44305
[autoparallel] adapt solver with self attention ( #2037 )
...
* [autoparallel] adapt solver with self attention
* polish code
2022-12-01 17:53:15 +08:00
Frank Lee
d3499c98d4
[release] update to 0.1.11rc5 ( #2053 )
2022-12-01 00:13:00 +08:00
Frank Lee
ea74a3b9cc
[cli] updated installation check with more information ( #2050 )
...
* [cli] updated installation check with more information
* polish code
* polish code
2022-11-30 17:53:55 +08:00
HELSON
f6178728a0
[gemini] fix init bugs for modules ( #2047 )
...
* [gemini] fix init bugs for modules
* fix bugs
2022-11-30 17:06:10 +08:00
Frank Lee
81e0da7fa8
[setup] supported conda-installed torch ( #2048 )
...
* [setup] supported conda-installed torch
* polish code
2022-11-30 16:45:15 +08:00
HELSON
e37f3db40c
[gemini] add arguments ( #2046 )
...
* [zero] fix testing parameters
* [gemini] add arguments
* add docstrings
2022-11-30 16:40:13 +08:00
Zihao
6a9158f1fa
[Gemini] free and allocate cuda memory by tensor.storage, add grad hook ( #2040 )
2022-11-30 15:57:45 +08:00
Jiarui Fang
1e885329f4
[test] align model name with the file name. ( #2045 )
2022-11-30 15:45:26 +08:00
Jiarui Fang
31c644027b
[hotfix] hotfix Gemini for no leaf modules bug ( #2043 )
2022-11-30 14:53:41 +08:00
HELSON
384cd26314
[zero] fix testing parameters ( #2042 )
2022-11-30 12:09:32 +08:00
HELSON
17a3c685b0
[zero] fix unit-tests ( #2039 )
2022-11-30 10:40:31 +08:00
Jiarui Fang
eb7742a4bb
[Gemini] more tests for Gemini ( #2038 )
...
* [Gemini] more tests for Gemini
* polish code
2022-11-29 17:13:10 +08:00
HELSON
537e181705
[testing] fix testing models ( #2036 )
...
* [testing] fix testing models
* roll back
2022-11-29 13:42:06 +08:00
HELSON
a1ce02d740
[zero] test gradient accumulation ( #1964 )
...
* [zero] fix memory leak for zero2
* [zero] test gradient accumulation
* [zero] remove grad clip test
2022-11-29 13:00:30 +08:00
Ziyue Jiang
b0936e4a44
[rpc] split with dag ( #2028 )
...
* add DAG to split_module
* add comment
* add test case for DAG
* remove print
* add DAG middleware in scheduler
* add test case for scheduler
* remove break
* recover old lifecycle
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-11-29 11:36:28 +08:00
Jiarui Fang
96134e7be3
[hotfix] add bert test for gemini fwd bwd ( #2035 )
2022-11-29 11:19:52 +08:00
YuliangLiu0306
0dbcd4a6f5
[autoparallel] add split handler ( #2032 )
...
* [autoparallel] add split handler
* add numerical test and runtime passes
2022-11-29 11:03:51 +08:00
Jiarui Fang
28aa9a4294
[Gemini] more rigorous unit tests for run_fwd_bwd ( #2034 )
2022-11-29 09:26:06 +08:00
YuliangLiu0306
81330b0352
[autoparallel] add experimental permute handler ( #2029 )
2022-11-27 20:26:52 +08:00
Zihao
95c4532fff
[Gemini] paramWrapper paramTracerHook unit test ( #2030 )
2022-11-26 13:30:24 +08:00
Jiarui Fang
8daf1b4db1
[Gemini] patch for supporting torch.add_ function for ColoTensor ( #2003 )
2022-11-25 20:06:35 +08:00
Ziyue Jiang
632753abbc
[fx]Split partition with DAG information ( #2025 )
...
* add DAG to split_module
* add comment
* add test case for DAG
* remove print
Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
2022-11-25 17:42:48 +08:00
YuliangLiu0306
ea0f6b8df9
[autoparallel] add runtime pass and numerical test for view handler ( #2018 )
2022-11-25 15:50:16 +08:00
binmakeswell
bb6245612d
[GitHub] update issue template ( #2023 )
...
* Update bug-report.yml
* Update documentation.yml
* Update bug-report.yml
* Update feature_request.yml
* Update proposal.yml
2022-11-25 09:13:27 +08:00
Zihao
a719b89a41
[gemini] param_trace_hook ( #2020 )
2022-11-24 18:08:36 +08:00
Frank Lee
254ee2c54f
[workflow] removed unused pypi release workflow ( #2022 )
2022-11-24 17:27:55 +08:00
Jiarui Fang
2e9cbfca12
[Gemini] add unit tests to check gemini correctness ( #2015 )
2022-11-24 16:51:45 +08:00
Jiarui Fang
0b0d8f9e17
[hotfix] revert bug PRs ( #2016 )
2022-11-24 15:28:58 +08:00
Zihao
aba3db464d
[Gemini] ParamMemHook ( #2008 )
2022-11-24 15:22:51 +08:00
Zihao
0160a62a3c
[Gemini] param_tracer_wrapper and test case ( #2009 )
2022-11-24 14:40:33 +08:00
YuliangLiu0306
1438993113
[autoparallel] add experimental view handler ( #2011 )
...
* [autoparallel] add experimental view handler
* polish
* polish
* polish code
* rename variables
2022-11-24 11:34:41 +08:00
Genghan Zhang
d655eea515
[autoparallel] mix gather ( #1977 )
...
* Add mix-gather
* Add comments
* Add comments
* Polish comments
* Change the global rank assumption
* Add tests
* Add two-step tests
* Fix 10 and 01
* Skip test because of the number of GPUs
2022-11-23 21:49:17 +08:00
Frank Lee
7242bffc5f
[workflow] fixed the python and cpu arch mismatch ( #2010 )
2022-11-23 17:24:17 +08:00
Frank Lee
2bab6f512c
[release] release v0.1.11rc4 ( #2007 )
2022-11-23 17:14:32 +08:00
Jiarui Fang
3d907faede
[Gemini] add an inline_op_module to common test models and polish unit tests. ( #2004 )
2022-11-23 16:55:54 +08:00
Frank Lee
56a3dcdabd
[workflow] fixed the typo in condarc ( #2006 )
2022-11-23 16:05:30 +08:00