Commit Graph

17 commits (branch: pre-commit-ci-update-config)

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Edenzzzz | 43995ee436 | [Feature] Distributed optimizers: Lamb, Galore, CAME and Adafactor (#5694) | 7 months ago |
| Xuanlei Zhao | 3acbf6d496 | [npu] add npu support for hybrid plugin and llama (#5090) | 1 year ago |
| Hongxin Liu | 079bf3cb26 | [misc] update pre-commit and run all files (#4752) | 1 year ago |
| digger yu | 9c2feb2f0b | fix some typo with colossalai/device colossalai/tensor/ etc. (#4171) | 1 year ago |
| github-actions[bot] | c77b3b19be | [format] applied code formatting on changed files in pull request 4152 (#4157) | 1 year ago |
| Frank Lee | 611971248c | [device] support init device mesh from process group (#3990) | 1 year ago |
| Frank Lee | ddcf58cacf | Revert "[sync] sync feature/shardformer with develop" | 1 year ago |
| Frank Lee | eb39154d40 | [dtensor] updated api and doc (#3845) | 1 year ago |
| YuliangLiu0306 | 2059fdd6b0 | [hotfix] add copyright for solver and device mesh (#2803) | 2 years ago |
| YuliangLiu0306 | aa0f6686f9 | [autoparallel] accelerate gpt2 training (#2495) | 2 years ago |
| YuliangLiu0306 | 2731531bc2 | [autoparallel] integrate device mesh initialization into autoparallelize (#2393) | 2 years ago |
| YuliangLiu0306 | 677e1e20d4 | [device] update flatten device mesh usage (#2079) | 2 years ago |
| Genghan Zhang | d655eea515 | [autoparallel] mix gather (#1977) | 2 years ago |
| YuliangLiu0306 | b4cc59b61e | [autoparallel] add numerical test for node strategies (#1760) | 2 years ago |
| YuliangLiu0306 | 4b03c25f85 | [tensor]add 1D device mesh (#1492) | 2 years ago |
| YuliangLiu0306 | b73fb7a077 | [tensor] support runtime ShardingSpec apply (#1453) | 2 years ago |
| YuliangLiu0306 | 0442f940f0 | [device] add DeviceMesh class to support logical device layout (#1394) | 2 years ago |