Commit Graph

1810 Commits (c9ec5190a076b130c72ab8a86c35626ac6e3d5e7)

Author SHA1 Message Date
oahzxl 884a228ea6 reorder nodes (2 years ago)
Fazzie-Maqianli ce3c4eca7b [example] support Dreamblooth (#2188) (2 years ago)
BlueRum 1cf6d92d7c [exmaple] diffuser, support quant inference for stable diffusion (#2186) (2 years ago)
Jiarui Fang bc0e271e71 [buider] use builder() for cpu adam and fused optim in setup.py (#2187) (2 years ago)
oahzxl e0ae68e736 code style (2 years ago)
oahzxl fa5e6fbf96 code style (2 years ago)
oahzxl 4f5e105af3 remove flow tracer (2 years ago)
oahzxl 4d89525fc2 remove abandoned function (2 years ago)
oahzxl 49ba619085 code style (2 years ago)
oahzxl d309e9338b adapt codegen to prepose node (2 years ago)
Jiarui Fang d42afd30f8 [builder] runtime adam and fused_optim builder (#2184) (2 years ago)
oahzxl 522f017418 code style (2 years ago)
oahzxl 774d34f1aa refactor flow search (2 years ago)
YuliangLiu0306 550f8f8905 [autoparallel] integrate_gpt_related_tests (#2134) (2 years ago)
Ziyue Jiang 59e343328d [Pipeline Middleware] Fix deadlock when num_microbatch=num_stage (#2156) (2 years ago)
github-actions[bot] 937f404253 Automated submodule synchronization (#2136) (2 years ago)
Jiarui Fang 65f56f49e8 [example] gpt demo more accuracy tflops (#2178) (2 years ago)
Tongping Liu ab54fed292 [hotfix] add kwargs for colo_addmm (#2171) (2 years ago)
Arsmart1 a110933d65 [NFC] fix a typo 'stable-diffusion-typo-fine-tune' (2 years ago)
Fazzie-Maqianli 9396a18361 Merge pull request #2174 from ziyuhuang123/main (2 years ago)
ziyuhuang123 cf5028363c 'diffusion-typo-change' (2 years ago)
アマデウス 622f863291 [hotfix] Jit type hint #2161 (#2164) (2 years ago)
Jiarui Fang 27327a4c90 [example] add palm pytorch version (#2172) (2 years ago)
Zihao 12e7bcd720 register meta func for rnn (#2159) (2 years ago)
oahzxl ded1005667 format code (2 years ago)
oahzxl d361d533e8 refactor flow tracer (2 years ago)
oahzxl d734529a39 move flow tracer (2 years ago)
Boyuan Yao cfe2a9bd90 [autoparallel] memory estimation for shape consistency (#2144) (2 years ago)
Jiarui Fang b87496a66b [hotfix] fix auto policy of test_sharded_optim_v2 (#2157) (2 years ago)
YuliangLiu0306 16335cb537 [hotfix] fix aten default bug (#2158) (2 years ago)
Jiarui Fang a4b4bb01d6 [example] update vit readme (#2155) (2 years ago)
Jiarui Fang 2cfe685b9f [exmaple] add vit missing functions (#2154) (2 years ago)
HELSON a7d95b7024 [example] add zero1, zero2 example in GPT examples (#2146) (2 years ago)
YuliangLiu0306 1cce6e36ca [autoparallel] use metainfo in handler (#2149) (2 years ago)
Jiarui Fang 9b39170a5c [version] 0.1.13 (#2152) (2 years ago)
Jiarui Fang e0c01d1db1 Revert "[version] version to v0.1.13 (#2139)" (#2153) (2 years ago)
Jiarui Fang 2827f41898 [Gemini] GeminiDPP convert to PyTorch Module. (#2151) (2 years ago)
Jiarui Fang bdef9dfdbe [NFC] remove useless graph node code (#2150) (2 years ago)
BlueRum b3f73ce1c8 [Gemini] Update coloinit_ctx to support meta_tensor (#2147) (2 years ago)
Jiarui Fang 6ad866b684 [version] version to v0.1.13 (#2139) (2 years ago)
oahzxl 9d516fa68f fix layernorm (2 years ago)
Zihao a128eec9d5 register aten._convolution.default (#2137) (2 years ago)
oahzxl e66a18a0bf optimise search (2 years ago)
Jiarui Fang ee287620f0 [Gemini] revert ZeROInitCtx related tracer (#2138) (2 years ago)
oahzxl e83e3c6154 update memory estimate (2 years ago)
アマデウス 077a66dd81 updated attention kernel (#2133) (2 years ago)
github-actions[bot] 484fe62252 Automated submodule synchronization (#2131) (2 years ago)
YuliangLiu0306 a3c6924deb [autoparallel] process size nodes in runtime pass (#2130) (2 years ago)
YuliangLiu0306 536560ccc0 [autoparallel] implement softmax handler (#2132) (2 years ago)
Jiarui Fang c89c66a858 [Gemini] update API of the chunkmemstatscollector. (#2129) (2 years ago)