142 Commits (main)

Author            SHA1        Date           Message
duanjunwen        1b76564e16  6 months ago   [test] Fix/fix testcase (#5770)
Hongxin Liu       7f8b16635b  7 months ago   [misc] refactor launch API and tensor constructor (#5666)
Hongxin Liu       d202cc28c0  11 months ago  [npu] change device to accelerator api (#5239)
Hongxin Liu       079bf3cb26  1 year ago     [misc] update pre-commit and run all files (#4752)
Hongxin Liu       b5f9e37c70  1 year ago     [legacy] clean up legacy code (#4743)
Baizhou Zhang     0bb0b481b4  1 year ago     [gemini] fix argument naming during chunk configuration searching
digger yu         e61ffc77c6  1 year ago     fix typo tests/ (#3936)
Frank Lee         80eba05b0a  2 years ago    [test] refactor tests with spawn (#3452)
YuliangLiu0306    ffcdbf0f65  2 years ago    [autoparallel]integrate auto parallel feature with new tracer (#3408)
ver217            26b7aac0be  2 years ago    [zero] reorganize zero/gemini folder structure (#3424)
Frank Lee         638a07a7f9  2 years ago    [test] fixed gemini plugin test (#3411)
YuliangLiu0306    fee2af8610  2 years ago    [autoparallel] adapt autoparallel with new analyzer (#3261)
Zihao             18dbe76cae  2 years ago    [auto-parallel] add auto-offload feature (#3154)
YuliangLiu0306    4269196c79  2 years ago    [hotfix] skip auto checkpointing tests (#3029)
YuliangLiu0306    197d0bf4ed  2 years ago    [autoparallel] apply repeat block to reduce solving time (#2912)
YuliangLiu0306    819e25d8b1  2 years ago    [hotfix] fix autoparallel compatibility test issues (#2754)
YuliangLiu0306    0f392d7403  2 years ago    [autoparallel] find repeat blocks (#2854)
Boyuan Yao        c7764d3f22  2 years ago    [autoparallel] Patch meta information of `torch.where` (#2822)
Boyuan Yao        fcc4097efa  2 years ago    [autoparallel] Patch meta information of `torch.tanh()` and `torch.nn.Dropout` (#2773)
Boyuan Yao        7ea6bc7f69  2 years ago    [autoparallel] Patch tensor related operations meta information (#2789)
Boyuan Yao        a2b43e393d  2 years ago    [autoparallel] Patch meta information of `torch.nn.Embedding` (#2760)
YuliangLiu0306    1dc003c169  2 years ago    [autoparallel] distinguish different parallel strategies (#2699)
YuliangLiu0306    21d6a48f4d  2 years ago    [autoparallel] add shard option (#2696)
YuliangLiu0306    cb2c6a2415  2 years ago    [autoparallel] refactor runtime pass (#2644)
YuliangLiu0306    0b2a738393  2 years ago    [autoparallel] remove deprecated codes (#2664)
YuliangLiu0306    7fa6be49d2  2 years ago    [autoparallel] test compatibility for gemini and auto parallel (#2700)
Boyuan Yao        40c916b192  2 years ago    [autoparallel] Patch meta information of `torch.nn.functional.softmax` and `torch.nn.Softmax` (#2674)
Boyuan Yao        0385b26ebf  2 years ago    [autoparallel] Patch meta information of `torch.nn.LayerNorm` (#2647)
YuliangLiu0306    37df666f38  2 years ago    [autoparallel] refactor handlers which reshape input tensors (#2615)
YuliangLiu0306    cb3d1bef62  2 years ago    [autoparallel] adapt autoparallel tests with latest api (#2626)
Boyuan Yao        90a9fdd91d  2 years ago    [autoparallel] Patch meta information of `torch.matmul` (#2584)
YuliangLiu0306    67e1912b59  2 years ago    [autoparallel] support origin activation ckpt on autoprallel system (#2468)
YuliangLiu0306    8221fd7485  2 years ago    [autoparallel] update binary elementwise handler (#2451)
YuliangLiu0306    41429b9b28  2 years ago    [autoparallel] add shard option (#2423)
YuliangLiu0306    fb87322773  2 years ago    [autoparallel] fix spelling error (#2270)
YuliangLiu0306    8897b8f753  2 years ago    [autoparallel] autoparallel initialize (#2238)
YuliangLiu0306    3b1b91eaf4  2 years ago    [autoparallel] record parameter attribute in colotracer (#2217)
Boyuan Yao        24246f7aa5  2 years ago    [autoparallel] Attach input, buffer and output tensor to MetaInfo class (#2162)
YuliangLiu0306    78509124d3  2 years ago    [autoparallel] update getitem handler (#2207)
YuliangLiu0306    4851f2d607  2 years ago    [autoparallel] update_getattr_handler (#2193)
YuliangLiu0306    f10ce01e31  2 years ago    [autoparallel] add gpt2 performance test code (#2194)
YuliangLiu0306    550f8f8905  2 years ago    [autoparallel] integrate_gpt_related_tests (#2134)
YuliangLiu0306    16335cb537  2 years ago    [hotfix] fix aten default bug (#2158)
YuliangLiu0306    536560ccc0  2 years ago    [autoparallel] implement softmax handler (#2132)
YuliangLiu0306    cd0af9f7f6  2 years ago    [autoparallel] gpt2lp runtimee test (#2113)
YuliangLiu0306    d87baa85d9  2 years ago    [autoparallel] support linear function bias addition (#2104)
YuliangLiu0306    0fecbb9e20  2 years ago    [autoparallel] support addbmm computation (#2102)
YuliangLiu0306    d3d4630495  2 years ago    [autoparallel] add sum handler (#2101)
YuliangLiu0306    b175e6d58e  2 years ago    [autoparallel] add bias addtion function class (#2098)
YuliangLiu0306    3af7e65dea  2 years ago    [autoparallel] complete gpt related module search (#2097)