Jiarui Fang | 1cb532ffec | [builder] multihead attn runtime building (#2203) | 2 years ago
  * [hotfix] correct cpu_optim runtime compilation
  * [builder] multihead attn
  * fix bug
  * fix a bug
Tongping Liu | 8e22c38b89 | [hotfix] Fixing the bug related to ipv6 support | 2 years ago
  Co-authored-by: ByteDance <tongping.liu@bytedance.com>
ziyuhuang123 | ac85a18043 | [example] polish doc (#2201) | 2 years ago
YuliangLiu0306 | 4851f2d607 | [autoparallel] update_getattr_handler (#2193) | 2 years ago
YuliangLiu0306 | f10ce01e31 | [autoparallel] add gpt2 performance test code (#2194) | 2 years ago
HELSON | a3100bd50d | [testing] add beit model for unit testing (#2196) | 2 years ago
  * [testing] add beit model
  * [beit] fix bugs
  * [beit] fix bugs
  * [testing] fix bugs
Jiarui Fang | 5682e6d346 | [hotfix] correct cpu_optim runtime compilation (#2197) | 2 years ago
BlueRum | 6642cebdbe | [example] Change some training settings for diffusion (#2195) | 2 years ago
HELSON | 2458659919 | [zero] fix error for BEiT models (#2169) | 2 years ago
  * [zero] fix error for BEiT models
  * [ColoParameter] add unpack operation for tuple arguments
  * fix bugs
  * fix chunkv2 unit testing
  * add assertion for gradient state
ziyuhuang123 | 4363ff3e41 | '[NFC] fix some typos' (#2175) | 2 years ago
binmakeswell | 04a200573c | [NFC] update news link (#2191) | 2 years ago
Jiarui Fang | 355ffb386e | [builder] unified cpu_optim fused_optim interface (#2190) | 2 years ago
Jiarui Fang | 9587b080ba | [builder] use runtime builder for fused_optim (#2189) | 2 years ago
Fazzie-Maqianli | ce3c4eca7b | [example] support DreamBooth (#2188) | 2 years ago
BlueRum | 1cf6d92d7c | [example] diffuser, support quant inference for stable diffusion (#2186) | 2 years ago
Jiarui Fang | bc0e271e71 | [builder] use builder() for cpu adam and fused optim in setup.py (#2187) | 2 years ago
Jiarui Fang | d42afd30f8 | [builder] runtime adam and fused_optim builder (#2184) | 2 years ago
YuliangLiu0306 | 550f8f8905 | [autoparallel] integrate_gpt_related_tests (#2134) | 2 years ago
  * [autoparallel] integrate_gpt_related_tests
  * polish code
  * polish code
  * add GPT2Model into runtime test
Ziyue Jiang | 59e343328d | [Pipeline Middleware] Fix deadlock when num_microbatch=num_stage (#2156) | 2 years ago
  * add splitter
  * polish code
  * remove comment
  * fix async nan by moving to cpu first
  Co-authored-by: Ziyue Jiang <ziyue.jiang@gmail.com>
github-actions[bot] | 937f404253 | Automated submodule synchronization (#2136) | 2 years ago
  Co-authored-by: github-actions <github-actions@github.com>
Jiarui Fang | 65f56f49e8 | [example] gpt demo more accurate tflops (#2178) | 2 years ago
Tongping Liu | ab54fed292 | [hotfix] add kwargs for colo_addmm (#2171) | 2 years ago
Arsmart1 | a110933d65 | [NFC] fix a typo 'stable-diffusion-typo-fine-tune' | 2 years ago
  Co-authored-by: ziyuhuang123 <202476410@qq.com>
Fazzie-Maqianli | 9396a18361 | Merge pull request #2174 from ziyuhuang123/main | 2 years ago
  'diffusion-typo-change'
ziyuhuang123 | cf5028363c | 'diffusion-typo-change' | 2 years ago
アマデウス | 622f863291 | [hotfix] Jit type hint #2161 (#2164) | 2 years ago
Jiarui Fang | 27327a4c90 | [example] add palm pytorch version (#2172) | 2 years ago
Zihao | 12e7bcd720 | register meta func for rnn (#2159) | 2 years ago
Boyuan Yao | cfe2a9bd90 | [autoparallel] memory estimation for shape consistency (#2144) | 2 years ago
  * [fx] metainfo class for auto parallel
  * [fx] add unit test for linear metainfo
  * [fx] fix bwd param for linear
  * [fx] modify unit test
  * [fx] modify unit test
  * [fx] modify import
  * [fx] modify import
  * [fx] modify import
  * [fx] move meta profiler to auto parallel
  * [fx] add conv metainfo class
  * [fx] restore profiler
  * [fx] restore meta profiler
  * [autoparallel] modify unit test
  * [fx] modify unit test
  * [autoparallel] add batchnorm metainfo class
  * [autoparallel] fix batchnorm unit test function declaration
  * [fx] restore profiler
  * [fx] add relu metainfo class
  * [fx] restore profiler
  * [autoparallel] modify metainfo input
  * [autoparallel] add pooling metainfo
  * [autoparallel] add F.linear metainfo generator
  * [autoparallel] add binary elementwise metainfo
  * [fx] recover profiler
  * [autoparallel] fix forward memory calculation
  * [autoparallel] modify constants.py
  * [autoparallel] remove redundant print
  * [autoparallel] add F.conv metainfo
  * [autoparallel] linear fix
  * [autoparallel] memory estimation for communication actions
  * [autoparallel] fix docstring
  * [autoparallel] fix variables name
Jiarui Fang | b87496a66b | [hotfix] fix auto policy of test_sharded_optim_v2 (#2157) | 2 years ago
YuliangLiu0306 | 16335cb537 | [hotfix] fix aten default bug (#2158) | 2 years ago
Jiarui Fang | a4b4bb01d6 | [example] update vit readme (#2155) | 2 years ago
Jiarui Fang | 2cfe685b9f | [example] add vit missing functions (#2154) | 2 years ago
HELSON | a7d95b7024 | [example] add zero1, zero2 example in GPT examples (#2146) | 2 years ago
  * [example] add zero1 and zero2 for GPT
  * update readme in gpt example
  * polish code
  * change init value
  * update readme
YuliangLiu0306 | 1cce6e36ca | [autoparallel] use metainfo in handler (#2149) | 2 years ago
Jiarui Fang | 9b39170a5c | [version] 0.1.13 (#2152) | 2 years ago
Jiarui Fang | e0c01d1db1 | Revert "[version] version to v0.1.13 (#2139)" (#2153) | 2 years ago
  This reverts commit 6ad866b684.
Jiarui Fang | 2827f41898 | [Gemini] GeminiDDP convert to PyTorch Module. (#2151) | 2 years ago
Jiarui Fang | bdef9dfdbe | [NFC] remove useless graph node code (#2150) | 2 years ago
BlueRum | b3f73ce1c8 | [Gemini] Update coloinit_ctx to support meta_tensor (#2147) | 2 years ago
Jiarui Fang | 6ad866b684 | [version] version to v0.1.13 (#2139) | 2 years ago
Zihao | a128eec9d5 | register aten._convolution.default (#2137) | 2 years ago
Jiarui Fang | ee287620f0 | [Gemini] revert ZeROInitCtx related tracer (#2138) | 2 years ago
アマデウス | 077a66dd81 | updated attention kernel (#2133) | 2 years ago
github-actions[bot] | 484fe62252 | Automated submodule synchronization (#2131) | 2 years ago
  Co-authored-by: github-actions <github-actions@github.com>
YuliangLiu0306 | a3c6924deb | [autoparallel] process size nodes in runtime pass (#2130) | 2 years ago
  * [autoparallel] process size nodes in runtime pass
  * polish code
YuliangLiu0306 | 536560ccc0 | [autoparallel] implement softmax handler (#2132) | 2 years ago
Jiarui Fang | c89c66a858 | [Gemini] update API of the chunkmemstatscollector. (#2129) | 2 years ago
Jiarui Fang | 2938edf446 | [Gemini] update the non model data record method in runtime memory tracer (#2128) | 2 years ago
Jiarui Fang | deee317b0f | [Gemini] test step-tensor mapping using repeated_computed_layers.py (#2127) | 2 years ago