Commit Graph

2073 Commits (cb01c0d5cee868e81edd33d2ba8fbb3d2b9da932)

Author SHA1 Message Date
hxwang cb01c0d5ce [moe] refactor mesh assignment 2024-08-01 10:06:59 +08:00
haze188 034020bd04 [misc] remove debug/print code 2024-08-01 10:06:59 +08:00
hxwang c3dc9b4dba [deepseek] replace attn (a workaround for bug in transformers) 2024-08-01 10:06:59 +08:00
hxwang 6c39f0b144 [test] add check 2024-08-01 10:06:59 +08:00
haze188 b2952a5982 [moe] deepseek moe sp support 2024-08-01 10:06:59 +08:00
botbw 96d0fbc531 [bug] fix: somehow logger hangs the program 2024-08-01 10:06:59 +08:00
hxwang 067e18f7e9 [test] fix test: test_zero1_2 2024-08-01 10:06:59 +08:00
hxwang 74b03de3f9 [moe] remove ops 2024-08-01 10:06:59 +08:00
hxwang 70c9924d0d [chore] solve moe ckpt test failure and some other arg pass failure 2024-08-01 10:06:59 +08:00
hxwang 46037c2ccd [chore] minor fix after rebase 2024-08-01 10:06:59 +08:00
hxwang 803878b2fd [moe] full test for deepseek and mixtral (pp + sp to fix) 2024-08-01 10:06:59 +08:00
hxwang 7077d38d5a [moe] finalize test (no pp) 2024-08-01 10:06:59 +08:00
haze188 2cddeac717 moe sp + ep bug fix 2024-08-01 10:06:59 +08:00
hxwang 877d94bb8c [moe] init moe plugin comm setting with sp 2024-08-01 10:06:59 +08:00
hxwang 09d6280d3e [chore] minor fix 2024-08-01 10:06:59 +08:00
Haze188 404b16faf3 [Feature] MoE Ulysses Support (#5918)
* moe sp support

* moe sp bug solve

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-08-01 10:06:59 +08:00
hxwang 3e2b6132b7 [moe] clean legacy code 2024-08-01 10:06:59 +08:00
hxwang 74eccac0db [moe] test deepseek 2024-08-01 10:06:59 +08:00
botbw dc583aa576 [moe] implement tp 2024-08-01 10:06:59 +08:00
hxwang 102b784a10 [chore] arg pass & remove drop token 2024-08-01 10:06:59 +08:00
botbw 8dbb86899d [chore] trivial fix 2024-08-01 10:06:59 +08:00
botbw 014faf6c5a [chore] manually revert unintended commit 2024-08-01 10:06:59 +08:00
botbw 9b9b76bdcd [moe] add mixtral dp grad scaling when not all experts are activated 2024-08-01 10:06:59 +08:00
botbw e28e05345b [moe] implement submesh initialization 2024-08-01 10:06:59 +08:00
haze188 5ed5e8cfba solve hang when parallel mode = pp + dp 2024-08-01 10:06:59 +08:00
botbw 13b48ac0aa [zero] solve hang 2024-08-01 10:06:59 +08:00
botbw b5bfeb2efd [moe] implement transit between non moe tp and ep 2024-08-01 10:06:59 +08:00
botbw 37443cc7e4 [test] pass mixtral shardformer test 2024-08-01 10:06:59 +08:00
hxwang 46c069b0db [zero] solve hang 2024-08-01 10:06:59 +08:00
hxwang 0fad23c691 [chore] handle non member group 2024-08-01 10:06:59 +08:00
hxwang a249e71946 [test] mixtral pp shard test 2024-08-01 10:06:59 +08:00
hxwang 8ae8525bdf [moe] fix plugin 2024-08-01 10:06:59 +08:00
hxwang 0b76b57cd6 [test] add mixtral transformer test 2024-08-01 10:06:59 +08:00
hxwang f9b6fcf81f [test] add mixtral for sequence classification 2024-08-01 10:06:59 +08:00
Hongxin Liu 060892162a [zero] hotfix update master params (#5951) 2024-07-30 13:36:00 +08:00
Runyu Lu bcf0181ecd [Feat] Distrifusion Acceleration Support for Diffusion Inference (#5895)
* Distrifusion Support source

* comp comm overlap optimization

* sd3 benchmark

* pixart distrifusion bug fix

* sd3 bug fix and benchmark

* generation bug fix

* naming fix

* add docstring, fix counter and shape error

* add reference

* readme and requirement
2024-07-30 10:43:26 +08:00
Hongxin Liu 7b38964e3a [shardformer] hotfix attn mask (#5947) 2024-07-29 19:10:06 +08:00
Hongxin Liu 9664b1bc19 [shardformer] hotfix attn mask (#5945) 2024-07-29 13:58:27 +08:00
Edenzzzz 2069472e96 [Hotfix] Fix ZeRO typo #5936
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
2024-07-25 09:59:58 +08:00
Gao, Ruiyuan 5fb958cc83 [FIX BUG] convert env param to int in (#5934) 2024-07-24 10:30:40 +08:00
Insu Jang a521ffc9f8 Add n_fused as an input from native_module (#5894) 2024-07-23 23:15:39 +08:00
Hongxin Liu e86127925a [plugin] support all-gather overlap for hybrid parallel (#5919)
* [plugin] fixed all-gather overlap support for hybrid parallel
2024-07-18 15:33:03 +08:00
アマデウス 530283dba0 fix object_to_tensor usage when torch>=2.3.0 (#5820) 2024-07-16 13:59:25 +08:00
Guangyao Zhang 2e28c793ce [compatibility] support torch 2.2 (#5875)
* Support PyTorch 2.2.2

* keep build_on_pr file and update .compatibility
2024-07-16 13:59:25 +08:00
Guangyao Zhang 1c961b20f3 [ShardFormer] fix qwen2 sp (#5903) 2024-07-15 13:58:06 +08:00
Stephan Kö 45c49dde96 [Auto Parallel]: Speed up intra-op plan generation by 44% (#5446)
* Remove unnecessary calls to deepcopy

* Build DimSpec's difference dict only once

This change considerably speeds up the construction of DimSpec objects: the difference_dict is identical for every DimSpec object, so a single shared copy is enough (a minimal sketch of this pattern follows this entry).

* Fix documentation of DimSpec's difference method
2024-07-15 12:05:06 +08:00
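The note in the entry above describes a simple memoization: the difference table is the same for every DimSpec, so building it once and sharing it avoids repeated work per object. Below is a minimal, hypothetical sketch of that pattern; `DimSpecSketch`, its shard notation, and the table contents are illustrative assumptions, not ColossalAI's actual DimSpec API.

```python
# Hypothetical sketch of the optimization described above: instead of every
# instance rebuilding the same difference table in __init__, the table is
# built once and shared as a class-level attribute.
class DimSpecSketch:
    _difference_dict = None  # shared across all instances

    def __init__(self, shard_list):
        self.shard_list = shard_list
        # Build the shared table only on first construction.
        if DimSpecSketch._difference_dict is None:
            DimSpecSketch._difference_dict = self._build_difference_dict()

    @staticmethod
    def _build_difference_dict():
        # Placeholder for the (expensive) one-time construction of a
        # transformation-cost table keyed by (source, target) dim patterns.
        return {("R", "S0"): 1, ("S0", "R"): 1, ("R", "R"): 0}

    def difference(self, other):
        # Look up the precomputed cost instead of recomputing it per call.
        return DimSpecSketch._difference_dict.get((str(self), str(other)), float("inf"))

    def __str__(self):
        return "R" if not self.shard_list else "S" + "".join(map(str, self.shard_list))
```

Per-instance state stays in `__init__`, while the expensive table lives once at class level, which is what yields the construction speed-up the commit measures.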
Hongxin Liu c068ef0fa0 [zero] support all-gather overlap (#5898)
* [zero] support all-gather overlap

* [zero] add overlap all-gather flag

* [misc] fix typo

* [zero] update api
2024-07-11 18:59:59 +08:00
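The ZeRO entries above refer to overlapping parameter all-gather with computation. The sketch below shows the general overlap pattern using PyTorch's asynchronous collectives; the function, tensor names, and the toy computation are assumptions for illustration only and do not reflect ColossalAI's actual ZeRO implementation or its overlap flag.

```python
# Generic sketch of communication/computation overlap with async collectives,
# assuming torch.distributed has already been initialized (e.g. via torchrun).
import torch
import torch.distributed as dist


def overlapped_step(local_shard: torch.Tensor, compute_input: torch.Tensor) -> torch.Tensor:
    world_size = dist.get_world_size()
    gathered = [torch.empty_like(local_shard) for _ in range(world_size)]

    # Launch the all-gather asynchronously instead of blocking on it.
    work = dist.all_gather(gathered, local_shard, async_op=True)

    # Do independent computation while the collective runs in the background.
    activation = torch.relu(compute_input)

    # Block only at the point where the gathered parameters are actually needed.
    work.wait()
    full_param = torch.cat(gathered)
    return activation.sum() + full_param.sum()
```

The key point is that the work handle is waited on only where the gathered parameters are consumed, so the collective proceeds concurrently with the independent computation.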
Guangyao Zhang 669849d74b [ShardFormer] Add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM (#5897) 2024-07-10 11:34:25 +08:00
Edenzzzz fbf33ecd01 [Feature] Enable PP + SP for llama (#5868)
* fix cross-PP-stage position id length diff bug

* fix typo

* fix typo

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* use a one cross entropy func for all shardformer models

---------

Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-07-09 18:05:20 +08:00
Runyu Lu 66abf1c6e8 [HotFix] CI,import,requirements-test for #5838 (#5892)
* [Hot Fix] CI,import,requirements-test

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-07-08 22:32:06 +08:00