Commit Graph

1990 Commits (648183a96037a0d9e758154f98e1e1b8004eea0b)

Author SHA1 Message Date
ziyuhuang123 cf5028363c 'diffusion-typo-change' 2022-12-22 10:28:59 +08:00
アマデウス 622f863291 [hotfix] Jit type hint #2161 (#2164) 2022-12-22 10:17:03 +08:00
Jiarui Fang 27327a4c90 [example] add palm pytorch version (#2172) 2022-12-22 10:15:34 +08:00
Zihao 12e7bcd720 register meta func for rnn (#2159) 2022-12-21 23:06:18 +08:00
oahzxl ded1005667 format code 2022-12-21 15:03:08 +08:00
oahzxl d361d533e8 refactor flow tracer 2022-12-21 15:01:03 +08:00
oahzxl d734529a39 move flow tracer 2022-12-21 15:00:24 +08:00
Boyuan Yao cfe2a9bd90 [autoparallel] memory estimation for shape consistency (#2144)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler

* [fx] add relu metainfo class

* [fx] restore profiler

* [autoparallel] modify metainfo input

* [autoparallel] add pooling metainfo

* [autoparallel] add F.linear metainfo generator

* [autoparallel] add binary elementwise metainfo

* [fx] recover profiler

* [autoparallel] fix forward memory calculation

* [autoparallel] modify constants.py

* [autoparallel] remove redundant print

* [autoparallel] add F.conv metainfo

* [autoparallel] linear fix

* [autoparallel] memory estimation for communication actions

* [autoparallel] fix docstring

* [autoparallel] fix variables name
2022-12-21 10:39:37 +08:00
Jiarui Fang b87496a66b [hotfix] fix auto policy of test_sharded_optim_v2 (#2157) 2022-12-20 23:03:18 +08:00
YuliangLiu0306 16335cb537 [hotfix] fix aten default bug (#2158) 2022-12-20 22:40:46 +08:00
Jiarui Fang a4b4bb01d6 [example] update vit readme (#2155) 2022-12-20 15:56:54 +08:00
Jiarui Fang 2cfe685b9f [exmaple] add vit missing functions (#2154) 2022-12-20 15:03:26 +08:00
HELSON a7d95b7024 [example] add zero1, zero2 example in GPT examples (#2146)
* [example] add zero1 and zero2 for GPT

* update readme in gpt example

* polish code

* change init value

* update readme
2022-12-20 14:30:27 +08:00
YuliangLiu0306 1cce6e36ca [autoparallel] use metainfo in handler (#2149) 2022-12-20 10:31:22 +08:00
Jiarui Fang 9b39170a5c [version] 0.1.13 (#2152) 2022-12-20 10:28:04 +08:00
Jiarui Fang e0c01d1db1 Revert "[version] version to v0.1.13 (#2139)" (#2153)
This reverts commit 6ad866b684.
2022-12-20 10:26:36 +08:00
Jiarui Fang 2827f41898 [Gemini] GeminiDPP convert to PyTorch Module. (#2151) 2022-12-20 10:19:36 +08:00
Jiarui Fang bdef9dfdbe [NFC] remove useless graph node code (#2150) 2022-12-20 00:33:58 +08:00
BlueRum b3f73ce1c8 [Gemini] Update coloinit_ctx to support meta_tensor (#2147) 2022-12-19 22:37:07 +08:00
Jiarui Fang 6ad866b684 [version] version to v0.1.13 (#2139) 2022-12-19 15:38:58 +08:00
oahzxl 9d516fa68f fix layernorm 2022-12-18 20:37:55 +08:00
Zihao a128eec9d5 register aten._convolution.default (#2137) 2022-12-18 19:27:01 +08:00
oahzxl e66a18a0bf optimise search 2022-12-16 15:06:39 +08:00
Jiarui Fang ee287620f0 [Gemini] revert ZeROInitCtx related tracer (#2138) 2022-12-16 12:37:06 +08:00
oahzxl e83e3c6154 update memory estimate 2022-12-16 11:09:35 +08:00
アマデウス 077a66dd81 updated attention kernel (#2133) 2022-12-16 10:54:03 +08:00
github-actions[bot] 484fe62252 Automated submodule synchronization (#2131)
Co-authored-by: github-actions <github-actions@github.com>
2022-12-15 09:32:01 +08:00
YuliangLiu0306 a3c6924deb [autoparallel] process size nodes in runtime pass (#2130)
* [autoparallel] process size nodes in runtime pass

* polish code
2022-12-14 16:10:50 +08:00
YuliangLiu0306 536560ccc0 [autoparallel] implement softmax handler (#2132) 2022-12-14 16:09:53 +08:00
Jiarui Fang c89c66a858 [Gemini] update API of the chunkmemstatscollector. (#2129) 2022-12-14 00:47:06 +08:00
Jiarui Fang 2938edf446 [Gemini] update the non model data record method in runtime memory tracer (#2128) 2022-12-13 17:11:31 +08:00
Jiarui Fang deee317b0f [Gemini] test step-tensor mapping using repeated_computed_layers.py (#2127) 2022-12-13 16:34:10 +08:00
Jiarui Fang 8fac837679 [Gemini] update non model data calculation method (#2126) 2022-12-13 15:44:07 +08:00
Fazzie-Maqianli 6c4c6a0409 Merge pull request #2120 from Fazziekey/example/stablediffusion-v2
[example] support stable diffusion v2
2022-12-13 14:38:40 +08:00
Fazzie cea4292ae5 support stable diffusion v2 2022-12-13 14:26:49 +08:00
Jiarui Fang 5efda69735 [Gemini] hotfix the unittest bugs (#2125) 2022-12-13 14:14:55 +08:00
Jiarui Fang 05bb28aacf [Gemini] mapping of preop timestep and param (#2124) 2022-12-13 12:50:24 +08:00
oahzxl de65e6c3e8 support output 2022-12-13 11:00:51 +08:00
oahzxl cda3e8572a support index dupilictae and update loop 2022-12-13 10:02:26 +08:00
oahzxl 1e0fd11bc1 support check_index_duplicate 2022-12-13 10:01:30 +08:00
github-actions[bot] 764bc16f3e Automated submodule synchronization (#2123)
Co-authored-by: github-actions <github-actions@github.com>
2022-12-13 09:44:27 +08:00
oahzxl 8754fa2553 change threshold 2022-12-12 18:25:47 +08:00
oahzxl 98f9728e29 code style 2022-12-12 18:15:47 +08:00
YuliangLiu0306 cd0af9f7f6 [autoparallel] gpt2lp runtimee test (#2113) 2022-12-12 18:06:40 +08:00
Jiarui Fang 9214d1fe28 [Gemini] chunk init using runtime visited param order (#2115) 2022-12-12 18:06:16 +08:00
HELSON e7d3afc9cc [optimizer] add div_scale for optimizers (#2117)
* [optimizer] add div_scale for optimizers

* [zero] use div_scale in zero optimizer

* fix testing error
2022-12-12 17:58:57 +08:00
oahzxl 8511d900a8 code style 2022-12-12 17:36:17 +08:00
oahzxl 5cdfcfe1d1 code style 2022-12-12 17:29:07 +08:00
oahzxl b7b67c32ad code style 2022-12-12 17:25:38 +08:00
oahzxl 31a2c5d09f work with outerproductmean and msa 2022-12-12 17:24:06 +08:00