Commit Graph

10 Commits (8acf8455fe6f50565d6017536c8b52a555d64c18)

Author SHA1 Message Date
Wenwen Qu 8acf8455fe use register to get moe impl (illustrated in the sketch after this log) 2024-01-12 13:20:17 +08:00
Wenwen Qu 13f3eeb994 add moe_type assert 2024-01-10 17:13:21 +08:00
Wenwen Qu 7cec7e985f refactor moe layer 2024-01-10 15:39:16 +08:00
Wenwen Qu c423f1159b add moe_type to model config 2024-01-09 15:56:59 +08:00
Wenwen Qu fe0c342f9d get moe setting from gpc 2024-01-09 15:26:13 +08:00
Wenwen Qu 41f8283a3e refactor code 2024-01-08 16:03:55 +08:00
Wenwen Qu c3854f924a refactor code 2024-01-08 14:33:19 +08:00
Wenwen Qu 196514d87f refactor code 2024-01-03 17:39:37 +08:00
Wenwen Qu 2c5395fdfd
Doc(moe): add documentation for moe training (#411)
* add doc for moe

* fix moe and zero1 check in args_sanity_check

* restore moe config file
2023-10-19 10:01:12 +08:00
Wenwen Qu 136d55ec30
feat(moe): add moe module (#182)
* feat(XXX): add moe

* reformat code

* modified:   .pre-commit-config.yaml
	modified:   internlm/model/moe.py
	modified:   internlm/model/modeling_internlm.py

* modified:   internlm/model/modeling_internlm.py

* modified:   internlm/core/context/process_group_initializer.py
	modified:   internlm/core/scheduler/no_pipeline_scheduler.py
	modified:   internlm/solver/optimizer/hybrid_zero_optim.py

* modified:   internlm/model/moe.py
	modified:   internlm/moe/sharded_moe.py
	modified:   internlm/utils/parallel.py

* rollback .pre-commit-config.yaml

* add residual and other moe features

* modify grad clipping due to moe

* add param arguments

* reformat code

* add expert data support and fix bugs

* Update .pre-commit-config.yaml

* modified:   internlm/model/modeling_internlm.py

* add no-interleaved & no-overlapped moe pp support

* support zero_overlap_communication

* avoid moe parameter partition in zero optimizer

* fix the moe_loss_coeff bug

* support interleaved pp

* fix moe bugs in zero optimizer

* fix more moe bugs in zero optimizer

* fix moe bugs in zero optimizer

* add logger for moe_loss

* fix bugs with merge

* fix the pp moe bugs

* fix bug on logger

* update moe training cfg on real-dataset

* refactor code

* refactor code

* fix bugs in moe norm computation

* optimize the moe norm computation code

* fix the bug that the latent moe loss was not scaled

* refactor code

* fix moe loss logger for the interleaved pp

* change the scale position for latent moe_loss

* Update 7B_sft.py

* add support for moe checkpoint

* add comments for moe

* reformat code

* fix bugs

* fix bugs

* Update .pre-commit-config.yaml

* remove moe_loss_coeff parameter passing

* fix group_norms computing in hybrid_zero_optim

* use dummy mode to generate random numbers in model construction

* replace flash-attention experts with feedforward experts

* fix bugs with _compute_norm_with_moe_group

* merge upstream/develop into feature_add_moe

* merge upstream/develop into feature_add_moe

* change float16 to bfloat16

* fix interface for dense pipeline

* refactor split_moe_group code

* fix precision inconsistency

* refactor code

* Update 7B_sft.py

* refactor code

* refactor code

* refactor code

* refactor code

* refactor code for split group

* refactor code for log

* fix logger for moe

* refactor code for split param group

* fix the moe_loss for ci and val

* refactor

* fix bugs with split group

* fix bugs in save/load moe checkpoint

* add moe module to `__init__.py`

* add compatible code for old version

* update moe config file

* modify moe config file

* fix merge bugs

* update moe config file

* change condition for compatibility

---------

Co-authored-by: zhanglei <ryancheung98@163.com>
Co-authored-by: Ryan (张磊) <leizhang.real@gmail.com>
2023-09-27 15:54:53 +08:00
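
The newest commit above (8acf8455fe, "use register to get moe impl") refers to retrieving the MoE implementation through a registry rather than hard-coded branches. Below is a minimal, hypothetical sketch of that pattern; the names MOE_IMPLS, register_moe, get_moe_impl, and GShardMoE are illustrative placeholders, not the actual InternLM API.

```python
# Hypothetical sketch of a registry-based lookup for MoE implementations.
# All names here are illustrative; they do not mirror internlm/model/moe.py.

MOE_IMPLS = {}


def register_moe(name):
    """Decorator that records an MoE implementation class under a string key."""
    def decorator(cls):
        MOE_IMPLS[name] = cls
        return cls
    return decorator


def get_moe_impl(moe_type):
    """Return the MoE class registered for `moe_type` (e.g. a value read from the model config)."""
    try:
        return MOE_IMPLS[moe_type]
    except KeyError:
        raise ValueError(f"unknown moe_type {moe_type!r}; registered: {sorted(MOE_IMPLS)}")


@register_moe("GShard")
class GShardMoE:
    """Placeholder expert layer; a real implementation would hold routers and experts."""
    def __init__(self, hidden_size, num_experts):
        self.hidden_size = hidden_size
        self.num_experts = num_experts


# Usage: the moe_type string from the config selects the implementation.
moe_cls = get_moe_impl("GShard")
layer = moe_cls(hidden_size=4096, num_experts=8)
```

This keeps model-construction code independent of any particular MoE variant: adding a new variant only requires registering it under a new moe_type key, which matches the intent of the "add moe_type to model config" and "add moe_type assert" commits above.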