yingtongxiong
6408b944c2
support fine-grained
2023-10-17 15:14:39 +08:00
huangting4201
d1af0d6aee
feat(model/linear.py): block-grained backward
2023-10-17 10:13:56 +08:00
huangting4201
0d1fa037dd
feat(model/linear.py): set block 0 full weight
2023-10-16 20:13:59 +08:00
yingtongxiong
82204eea59
support hybrid overlap
2023-10-16 16:35:14 +08:00
huangting4201
d0f0c22cac
feat(model/linear.py): change pre backward from wqkv to block
2023-10-13 11:10:23 +08:00
huangting4201
d0b1346993
feat(model/linear.py): support block allgather overlap
2023-10-12 19:42:08 +08:00
yingtongxiong
5fd5a8a32b
support fine-grained overlap
2023-10-11 17:36:41 +08:00
yingtongxiong
792b066f15
communication overlap
2023-10-11 10:57:12 +08:00
yingtongxiong
c94be64fd2
merge origin
2023-10-10 17:13:46 +08:00
yingtongxiong
0fac845c36
overlap grad_input computation and grad_weight reduce_scatter
2023-10-10 17:06:13 +08:00
huangting4201
5fb6d99c11
feat(configs/7B_sft.py): update parallel config comment
2023-10-10 11:45:11 +08:00
yingtongxiong
db637542a6
fix lint
2023-10-09 22:19:21 +08:00
yingtongxiong
dd67ab948d
merge develop
2023-10-09 21:40:02 +08:00
yingtongxiong
1b7935dd98
merge upstream develop
2023-10-09 21:35:52 +08:00
yingtongxiong
a8dea6313f
fix the CI incompatibility in the config
2023-10-09 21:33:26 +08:00
Pryest
b3645b0244
fix(model): fix errant inference_forward ( #396 )
* Fix errant inference_forward.
* Recover use_dynamic_ntk_rope.
* Fix bugs.
* Fit to flash attention 1.0
* Fit to flash attention 1.0
* Fit to flash attention 1.0.5.
* Fit to flash attention 1.0.5.
2023-10-09 08:29:11 -05:00
yingtongxiong
007e58a4af
merge upstream develop
2023-10-09 20:54:26 +08:00
yingtongxiong
f191853bf4
fix lint
2023-10-09 20:39:57 +08:00
yingtongxiong
29df765f65
refactor code
2023-10-09 20:23:32 +08:00
yingtongxiong
5d39c332fe
restore train.py
2023-10-09 20:08:49 +08:00
yingtongxiong
ef9e7cc622
modify the config
2023-10-09 20:05:39 +08:00
yingtongxiong
144731c35c
fix evaluation bug in pp
2023-10-09 20:04:27 +08:00
zaglc
a075153adf
feat(train): add fsdp training option ( #293 )
* feat(fsdp): add training option for fsdp
* fix(fsdp): add mixed-precision training
* fix failure in lint-check
* fix format problem
* restore 7B_sft
* fix load ckpt bug
* fix load ckpt bug2
* feat(solver/optimizer): add new file fsdp_optimizer.py
* fix(train.py): fix ci lint error
* fix(fsdp_optimizer.py): wait grad async
* fix bug for loading ckpts when zero1 < dp_size
* fix(context/parallel_context.py): only log warning for fsdp
* change ckpt name
* fix(model/modeling_internlm.py): fix checkpoint=False runtime error
* more wrap
* add support for FSDP with tp
* modify args_sanity_check for fsdp with pipeline and fsdp with moe
* fix(internlm/utils/parallel.py): fix circular import
* fix(internlm/train/training_internlm.py): remove set IS_TENSOR_PARALLEL attr
* fix(internlm/train/training_internlm.py): update wrap class and fix lint error
* fix(internlm/model): reset dropout_selective_checkpoint=True
* feat(configs/7B_sft.py): move fsdp config to parallel zero1
* feat(configs/7B_sft.py): adapt to old version config
---------
Co-authored-by: huangting4201 <1538303371@qq.com>
2023-10-09 18:59:31 +08:00
yingtongxiong
54e561665e
remove useless code for no-pp
2023-10-09 18:08:15 +08:00
yingtongxiong
0fa1083780
Merge remote-tracking branch 'upstream/develop' into feat/fstp
merge upstream develop
2023-10-09 18:06:57 +08:00
yingtongxiong
949431f228
modify the config
2023-10-09 18:06:22 +08:00
yingtongxiong
21c1a7fa47
support evaluation with fstp
2023-10-09 18:01:06 +08:00
Wenwen Qu
582ee000bd
feat(moe): support zero for expert local dp ( #404 )
* support zero for expert local dp
* fix the above code:
  * treat optim.zero_world_size and optim.zero_local_rank as lists in model_checkpoint.py and test_model_checkpoint.py
  * add overlap and zero check for moe in args_sanity_check()
2023-10-09 17:45:26 +08:00
yingtongxiong
189a313da6
support fstp and refactor code
2023-10-09 17:26:20 +08:00
Wenwen Qu
916647c0a1
fix(pipeline): fix bugs for pipeline when enable mixed precision ( #382 )
* fix bugs for pipeline
* restore logic for empty fp32 group
* move optim.dtype to each param group
2023-10-09 14:01:15 +08:00
ytxiong
9aef11e89c
make seeds in different tensor ranks different ( #405 )
2023-10-09 13:53:52 +08:00
yingtongxiong
bd4af3a31f
modify the all2all
2023-10-08 17:21:17 +08:00
yingtongxiong
bf475b6940
debug
2023-10-08 13:20:29 +08:00
Guoteng
8b65e2e3c4
fix(doc): fix huggingface url ( #392 )
2023-10-07 02:03:44 -05:00
yingtongxiong
e5a2909af0
Merge remote-tracking branch 'upstream/develop' into feat/deepspeed_sp
merge upstream/develop
2023-10-07 14:04:00 +08:00
yingtongxiong
10aa63f0e1
support optimized sp
2023-10-07 14:03:47 +08:00
Guoteng
4f9e8cd70d
Doc(config): add auto_resume annotation into example config ( #380 )
* doc(config): add auto_resume related comments
* update auto_resume 7B_sft.py
* Update 7B_sft.py
* Update 7B_sft.py
2023-09-28 13:39:02 +08:00
Wenwen Qu
375240e039
feat(moe): add local data parallel support for experts ( #376 )
* add local data parallel support for experts
* fix model checkpoint for local dp mode of expert
* do not set ep size from config
2023-09-28 13:38:02 +08:00
Ryan (张磊)
c8242572f2
fix the moe loss being none for panel_metrics ( #379 )
2023-09-27 20:29:50 +08:00
ytxiong
e34e7307c9
docs(doc): add tf32 docs ( #374 )
* add zh docs for tf32
* add english docs
* add docs for tf32 in mix precision
* add english doc
* modify the gitignore
2023-09-27 15:55:44 +08:00
Wenwen Qu
136d55ec30
feat(moe): add moe module ( #182 )
* feat(XXX): add moe
* reformat code
* modified: .pre-commit-config.yaml
  modified: internlm/model/moe.py
  modified: internlm/model/modeling_internlm.py
* modified: internlm/model/modeling_internlm.py
* modified: internlm/core/context/process_group_initializer.py
  modified: internlm/core/scheduler/no_pipeline_scheduler.py
  modified: internlm/solver/optimizer/hybrid_zero_optim.py
* modified: internlm/model/moe.py
  modified: internlm/moe/sharded_moe.py
  modified: internlm/utils/parallel.py
* rollback .pre-commit-config.yaml
* add residual and other moe features
* modify grad clipping due to moe
* add param arguments
* reformat code
* add expert data support and fix bugs
* Update .pre-commit-config.yaml
* modified: internlm/model/modeling_internlm.py
* add no-interleaved & no-overlapped moe pp support
* support zero_overlap_communication
* avoid moe parameter partition in zero optimizer
* fix the moe_loss_coeff bug
* support interleaved pp
* fix moe bugs in zero optimizer
* fix more moe bugs in zero optimizer
* fix moe bugs in zero optimizer
* add logger for moe_loss
* fix bugs with merge
* fix the pp moe bugs
* fix bug on logger
* update moe training cfg on real-dataset
* refactor code
* refactor code
* fix bugs with computing the moe norm
* optimize code with moe norm computing
* fix the bug where the latent moe loss was not scaled
* refactor code
* fix moe loss logger for the interleaved pp
* change the scale position for latent moe_loss
* Update 7B_sft.py
* add support for moe checkpoint
* add comments for moe
* reformat code
* fix bugs
* fix bugs
* Update .pre-commit-config.yaml
* remove moe_loss_coeff parameter passing
* fix group_norms computing in hybrid_zero_optim
* use dummy mode to generate random numbers in model construction
* replace flashatten experts by feedforward experts
* fix bugs with _compute_norm_with_moe_group
* merge upstream/develop into feature_add_moe
* merge upstream/develop into feature_add_moe
* change float16 to bfloat16
* fix interface for dense pipeline
* refactor split_moe_group code
* fix precision inconsistency
* refactor code
* Update 7B_sft.py
* refactor code
* refactor code
* refactor code
* refactor code
* refactor code for split group
* refactor code for log
* fix logger for moe
* refactor code for split param group
* fix the moe_loss for ci and val
* refactor
* fix bugs with split group
* fix bugs in save/load moe checkpoint
* add moe module to `__init__.py`
* add compatible code for old version
* update moe config file
* modify moe config file
* fix merge bugs
* update moe config file
* change condition for compatibility
---------
Co-authored-by: zhanglei <ryancheung98@163.com>
Co-authored-by: Ryan (张磊) <leizhang.real@gmail.com>
2023-09-27 15:54:53 +08:00
Season
07038d1224
docs(doc/code-docs): update document image for InternLM parallel architecture ( #373 )
* docs(doc/imgs): update image for internlm parallel architecture
* docs(doc/code-docs): remove fuzzy translation in sphinx files
* update english translation in readthedocs
2023-09-27 11:50:22 +08:00
Wenwen Qu
655e9dae40
Feat(norm)/support fused precision ( #319 )
* add fused precision support for norm
* refactor code
* refactor code
* change the granularity of hook
* fix bugs if self.model is ModuleList
* add dtype condition for post hook
* refactor code for split group
* refactor code for pre/post hook
* refactor code for split group
* remove fp32 hook for norm
* unit tests for fused precision
* add doc for fused precision
* add doc for En. version
* reformat docs
* Update mixed_precision.rst
* Update mixed_precision.po
* update mixed_precision.po
2023-09-26 20:39:55 +08:00
YWMditto
96b20cd43f
doc(usage): add dynamic ntk into doc ( #367 )
* add long text generation in doc/usage.md
* add long text generation in doc/usage.md
* add long text generation in doc/usage.md
---------
Co-authored-by: YWMditto <862779238@qq.com>
2023-09-26 16:58:46 +08:00
jiaxingli
c1e30cff2c
feat(numa): bind numa if possible ( #320 )
* feat:add numa
* feat:add bind numa
* feat:add bind numa
* feat:add bind numa
* feat: bind numa
* feat: bind numa
* feat: add numa
* feat:add numa
* feat:add numa
* try_bind_numa should not raise exception
---------
Co-authored-by: 877825076@qq.com <877825076@qq.com>
2023-09-25 19:34:52 +08:00
jiaopenglong
9284303a6d
doc(monitor): add light monitoring doc ( #352 )
* add light monitoring doc
* update light monitoring doc
* update light monitoring doc
* update light monitoring doc
* update light monitoring doc continue
* update light monitoring doc continue
* update monitor config doc
* update monitor config doc continue
* update monitor config doc continue
2023-09-25 19:28:09 +08:00
jiaopenglong
847cc819dd
fix(monitor): add volc and aliyun jobid ( #338 )
* add volc and aliyun jobid
* rm workspaceid
2023-09-25 17:58:32 +08:00
jiaopenglong
064965527b
fix(config): monitor config key error when args_check is False ( #362 )
* add monitor switch
* add switch to light monitor
* fix alert_address being empty
* fix light monitor heartbeat
* init light_monitor on rank_log only
* add comments to the monitoring config
* optimize config
* fix monitor config key error when args_check is False
2023-09-25 17:30:36 +08:00
Guoteng
26a7397752
fix(storage): fix try_get_storage_backend ( #359 )
* fix(storage): fix try_get_storage_backend
* fix typo and print infos only in log rank
* fix typo and print infos only in log rank
---------
Co-authored-by: gaoyang07 <Gary1546308416AL@gmail.com>
2023-09-25 15:16:25 +08:00
huangting4201
a86c4bbbfd
Merge branch 'main' into develop
2023-09-22 19:24:03 +08:00