YuliangLiu0306 | f027ef7913 | [hotfix] fix fp16 optimizer bug (#2273) | 2023-01-03 16:53:43 +08:00
YuliangLiu0306 | fb87322773 | [autoparallel] fix spelling error (#2270) | 2023-01-03 16:13:00 +08:00
Jiarui Fang | af32022f74 | [Gemini] fix the convert_to_torch_module bug (#2269) | 2023-01-03 15:55:35 +08:00
Jiarui Fang | 879df8b943 | [example] GPT polish readme (#2274) | 2023-01-03 15:46:52 +08:00
Ziyue Jiang | 9654df0e9a | Add GPT PP Example (#2272) | 2023-01-03 15:17:26 +08:00
    Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
YuliangLiu0306 | 4b29112ab2 | [autoparallel] gpt2 autoparallel examples (#2267) | 2023-01-03 14:23:33 +08:00
    * [autoparallel] gpt2 autoparallel examples
    * polish code
    * polish code
Ziyue Jiang | 8b045b3c1f | [Pipeline Middleware] Reduce comm redundancy by getting accurate output (#2232) | 2023-01-03 13:43:57 +08:00
    * move to cpu to avoid deadlock
    * get output by offsets
    Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
HELSON | 09c0102fe6 | [example] fix gpt example with 0.1.10 (#2265) | 2023-01-03 13:38:14 +08:00
Fazzie-Maqianli | 89f048a88a | [example] clear diffuser image (#2262) | 2023-01-03 10:57:02 +08:00
Boyuan Yao | c8c79102f0 | [autoparallel] patch torch.flatten metainfo for autoparallel (#2247) | 2023-01-02 15:51:03 +08:00
    * [autoparallel] patch torch.flatten
YuliangLiu0306 | 8897b8f753 | [autoparallel] autoparallel initialize (#2238) | 2022-12-31 01:02:14 +08:00
xcnick | 85178a397a | [hotfix] fix error for torch 2.0 (#2243) | 2022-12-30 23:11:55 +08:00
Super Daniel | b7d0990c61 | [autoparallel] fix construct meta info. (#2245) | 2022-12-30 19:56:44 +08:00
Frank Lee | 89542ceb44 | [doc] updated the stable diffusion on docker usage (#2244) | 2022-12-30 18:00:20 +08:00
    * [doc] updated the stable diffusion on docker usage
    * polish doc
Jiarui Fang | 50cdf5430e | [example] diffusion install from docker (#2239) | 2022-12-30 16:25:24 +08:00
    * [builder] builder for scaled_upper_triang_masked_softmax
    * add missing files
    * fix a bug
    * polish code
    * [example] diffusion install from docker
Ziyue Jiang | 57929a6210 | fix type of num_worker_threads (#2237) | 2022-12-30 11:04:01 +08:00
    Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
Jiarui Fang | db4cbdc7fb | [builder] builder for scaled_upper_triang_masked_softmax (#2234) | 2022-12-30 09:58:00 +08:00
HELSON | 31fe84237b | [example] fix benchmark.sh for gpt example (#2229) | 2022-12-29 23:00:14 +08:00
Super Daniel | 78483a9fdd | [logger] hotfix, missing _FORMAT (#2231) | 2022-12-29 22:59:39 +08:00
Jiarui Fang | 2cdecc9f38 | [example] make palm + GeminiDPP work (#2227) | 2022-12-29 14:28:31 +08:00
ZijianYY | 63cc77173b | [example] Palm adding gemini, still has bugs (#2221) | 2022-12-29 14:01:09 +08:00
HELSON | 7010e18134 | [example] update gpt example (#2225) | 2022-12-29 12:01:45 +08:00
Jiarui Fang | 49c601da21 | [example] add benchmark.sh for gpt (#2226) | 2022-12-29 12:00:00 +08:00
HELSON | 3629e611cd | [example] update gpt benchmark (#2219) | 2022-12-29 10:51:42 +08:00
Jiarui Fang | 54de05da5d | [builder] polish builder with better base class (#2216) | 2022-12-28 19:45:49 +08:00
    * [builder] polish builder
    * remove print
YuliangLiu0306 | 3b1b91eaf4 | [autoparallel] record parameter attribute in colotracer (#2217) | 2022-12-28 19:29:08 +08:00
    * [autoparallel] record parameter attribute in colotracer
    * [autoparallel] fix construct_meta_info bug
ZijianYY | 92de90dfb3 | [examples] replace einsum with matmul (#2210) | 2022-12-28 19:03:06 +08:00
Jiarui Fang | 7675792100 | [builder] raise Error when CUDA_HOME is not set (#2213) | 2022-12-28 16:07:08 +08:00
HELSON | 78a89d9b41 | [diffusion] update readme (#2214) | 2022-12-28 16:06:48 +08:00
Jiarui Fang | d96cc37e32 | [example] update GPT example benchmark results (#2212) | 2022-12-28 14:28:12 +08:00
Jiarui Fang | d5e3e3ec01 | [example] update gpt example for larger model scale (#2211) | 2022-12-28 13:54:08 +08:00
Boyuan Yao | 24246f7aa5 | [autoparallel] Attach input, buffer and output tensor to MetaInfo class (#2162) | 2022-12-28 13:37:40 +08:00
    * [fx] metainfo class for auto parallel
    * [fx] add unit test for linear metainfo
    * [fx] fix bwd param for linear
    * [fx] modify unit test
    * [fx] modify unit test
    * [fx] modify import
    * [fx] modify import
    * [fx] modify import
    * [fx] move meta profiler to auto parallel
    * [fx] add conv metainfo class
    * [fx] restore profiler
    * [fx] restore meta profiler
    * [autoparallel] modify unit test
    * [fx] modify unit test
    * [autoparallel] add batchnorm metainfo class
    * [autoparallel] fix batchnorm unit test function declaration
    * [fx] restore profiler
    * [fx] add relu metainfo class
    * [fx] restore profiler
    * [autoparallel] modify metainfo input
    * [autoparallel] add pooling metainfo
    * [autoparallel] add F.linear metainfo generator
    * [autoparallel] add binary elementwise metainfo
    * [fx] recover profiler
    * [autoparallel] fix forward memory calculation
    * [autoparallel] modify constants.py
    * [autoparallel] remove redundant print
    * [autoparallel] add F.conv metainfo
    * [autoparallel] linear fix
    * [autoparallel] memory estimation for communication actions
    * [autoparallel] fix docstring
    * [autoparallel] fix variables name
    * [autoparallel] attach tensor to metainfo class
    * [autoparallel] fix dangerous try except
    * [autoparallel] attach memory cost to shape consistency node
    * [autoparallel] attach shape consistency node's metainfo to the node
    * [autoparallel] remove todo in shape consistency memory estimation
    * [autoparallel] fix the annotation
Boyuan Yao | d0bc5a1b34 | [autoparallel] new metainfoprop based on metainfo class (#2179) | 2022-12-28 13:35:08 +08:00
    * [autoparallel] new metainfoprop to combine SPMD solver and checkpoint solver
    * [autoparallel] new metainfoprop to combine SPMD solver and checkpoint solver
    * [autoparallel] modify placeholder handler
    * [autoparallel] modify metainfoprop
    * [autoparallel] fix function typo
    * [autoparallel] fix placeholder handler
YuliangLiu0306 | 78509124d3 | [autoparallel] update getitem handler (#2207) | 2022-12-27 19:58:32 +08:00
Jiarui Fang | 29868a9ec1 | [example] update gpt readme with performance (#2206) | 2022-12-27 17:39:53 +08:00
Jiarui Fang | 1cb532ffec | [builder] multihead attn runtime building (#2203) | 2022-12-27 16:06:09 +08:00
    * [hotfix] correct cpu_optim runtime compilation
    * [builder] multihead attn
    * fix bug
    * fix a bug
Tongping Liu | 8e22c38b89 | [hotfix] Fixing the bug related to ipv6 support | 2022-12-27 12:42:46 +08:00
    Co-authored-by: ByteDance <tongping.liu@bytedance.com>
ziyuhuang123 | ac85a18043 | [example] polish doc (#2201) | 2022-12-27 10:04:01 +08:00
YuliangLiu0306 | 4851f2d607 | [autoparallel] update_getattr_handler (#2193) | 2022-12-26 21:57:39 +08:00
YuliangLiu0306 | f10ce01e31 | [autoparallel] add gpt2 performance test code (#2194) | 2022-12-26 21:56:58 +08:00
HELSON | a3100bd50d | [testing] add beit model for unit testing (#2196) | 2022-12-26 17:35:36 +08:00
    * [testing] add beit model
    * [beit] fix bugs
    * [beit] fix bugs
    * [testing] fix bugs
Jiarui Fang | 5682e6d346 | [hotfix] correct cpu_optim runtime compilation (#2197) | 2022-12-26 16:45:14 +08:00
BlueRum | 6642cebdbe | [example] Change some training settings for diffusion (#2195) | 2022-12-26 15:22:20 +08:00
HELSON | 2458659919 | [zero] fix error for BEiT models (#2169) | 2022-12-26 15:03:54 +08:00
    * [zero] fix error for BEiT models
    * [ColoParameter] add unpack operation for tuple arguments
    * fix bugs
    * fix chunkv2 unit testing
    * add assertion for gradient state
ziyuhuang123 | 4363ff3e41 | [NFC] fix some typos (#2175) | 2022-12-25 18:41:39 +08:00
binmakeswell | 04a200573c | [NFC] update news link (#2191) | 2022-12-24 11:53:52 +08:00
Jiarui Fang | 355ffb386e | [builder] unified cpu_optim fused_optim interface (#2190) | 2022-12-23 20:57:41 +08:00
Jiarui Fang | 9587b080ba | [builder] use runtime builder for fused_optim (#2189) | 2022-12-23 17:07:03 +08:00
Fazzie-Maqianli | ce3c4eca7b | [example] support DreamBooth (#2188) | 2022-12-23 16:47:30 +08:00
BlueRum | 1cf6d92d7c | [example] diffuser, support quant inference for stable diffusion (#2186) | 2022-12-23 16:06:29 +08:00