huangting4201
610e011133
Merge branch 'feature_add_fsdp3' of https://github.com/zaglc/InternLM into feature_add_fsdp3
2023-10-08 17:16:06 +08:00
zaglc
132a841d42
modify args_sanity_check for fsdp with pipeline and fsdp with moe
2023-10-08 16:27:14 +08:00
huangting4201
36b687c882
Merge branch 'feature_add_fsdp3' of https://github.com/zaglc/InternLM into feature_add_fsdp3
2023-10-08 16:11:03 +08:00
zaglc
eb14dae005
fix conflicts
2023-10-08 15:49:47 +08:00
zaglc
7d52276c13
add support for FSDP with tp
2023-10-08 15:33:31 +08:00
Wenwen Qu
375240e039
feat(moe): add local data parallel support for experts ( #376 )
* add local data parallel support for experts
* fix model checkpoint for local dp mode of expert
* do not set ep size from config
2023-09-28 13:38:02 +08:00
Ryan (张磊)
c8242572f2
fix the moe loss being none for panel_metrics ( #379 )
2023-09-27 20:29:50 +08:00
zaglc
80f1eb9a36
more wrap
2023-09-27 17:35:28 +08:00
Wenwen Qu
136d55ec30
feat(moe): add moe module ( #182 )
* feat(XXX): add moe
* reformat code
* modified: .pre-commit-config.yaml
modified: internlm/model/moe.py
modified: internlm/model/modeling_internlm.py
* modified: internlm/model/modeling_internlm.py
* modified: internlm/core/context/process_group_initializer.py
modified: internlm/core/scheduler/no_pipeline_scheduler.py
modified: internlm/solver/optimizer/hybrid_zero_optim.py
* modified: internlm/model/moe.py
modified: internlm/moe/sharded_moe.py
modified: internlm/utils/parallel.py
* rollback .pre-commit-config.yaml
* add residual and other moe features
* modify grad clipping due to moe
* add param arguments
* reformat code
* add expert data support and fix bugs
* Update .pre-commit-config.yaml
* modified: internlm/model/modeling_internlm.py
* add no-interleaved & no-overlapped moe pp support
* support zero_overlap_communication
* avoid moe parameter partition in zero optimizer
* fix the moe_loss_coeff bug
* support interleaved pp
* fix moe bugs in zero optimizer
* fix more moe bugs in zero optimizer
* fix moe bugs in zero optimizer
* add logger for moe_loss
* fix bugs with merge
* fix the pp moe bugs
* fix bug on logger
* update moe training cfg on real-dataset
* refactor code
* refactor code
* fix bugs with compute moe norm
* optimize code with moe norm computing
* fix the bug that missing scale the latent moe loss
* refactor code
* fix moe loss logger for the interleaved pp
* change the scale position for latent moe_loss
* Update 7B_sft.py
* add support for moe checkpoint
* add comments for moe
* reformat code
* fix bugs
* fix bugs
* Update .pre-commit-config.yaml
* remove moe_loss_coeff parameter passing
* fix group_norms computing in hybrid_zero_optim
* use dummy mode to generate random numbers in model construction
* replace flashatten experts by feedforward experts
* fix bugs with _compute_norm_with_moe_group
* merge upstream/develop into feature_add_moe
* merge upstream/develop into feature_add_moe
* change float16 to bfloat16
* fix interface for dense pipeline
* refactor split_moe_group code
* fix precision inconsistency
* refactor code
* Update 7B_sft.py
* refactor code
* refactor code
* refactor code
* refactor code
* refactor code for split group
* refactor code for log
* fix logger for moe
* refactor code for split param group
* fix the moe_loss for ci and val
* refactor
* fix bugs with split group
* fix bugs in save/load moe checkpoint
* add moe module to `__init__.py`
* add compatible code for old version
* update moe config file
* modify moe config file
* fix merge bugs
* update moe config file
* change condition for compatibility
---------
Co-authored-by: zhanglei <ryancheung98@163.com>
Co-authored-by: Ryan (张磊) <leizhang.real@gmail.com>
2023-09-27 15:54:53 +08:00
huangting4201
59b7530129
fix(model/modeling_internlm.py): fix checkpoint=False runtime error
2023-09-27 11:18:04 +08:00
Wenwen Qu
655e9dae40
Feat(norm)/support fused precision ( #319 )
* add fused precision support for norm
* refactor code
* refactor code
* change the granularity of hook
* fix bugs if self.model is ModuleList
* add dtype condition for post hook
* refactor code for split group
* refactor code for pre/post hook
* refactor code for split group
* remove fp32 hook for norm
* unit tests for fused precision
* add doc for fused precision
* add doc for En. version
* reformat docs
* Update mixed_precision.rst
* Update mixed_precision.po
* update mixed_precision.po
2023-09-26 20:39:55 +08:00
zaglc
c703938fb3
change ckpt name
2023-09-26 19:16:16 +08:00
huangting4201
83bd11f2b2
fix(context/parallel_context.py): only log warning for fsdp
2023-09-26 18:59:54 +08:00
zaglc
96171d5f28
fix bug for loading ckpts when zero1 < dp_size
2023-09-26 17:36:59 +08:00
huangting4201
056996f8b3
fix(fsdp_optimizer.py): wait grad async
2023-09-26 16:54:29 +08:00
huangting4201
f3f2511e74
feat(solver/optimizer): add new file fsdp_optimizer.py
2023-09-26 15:46:47 +08:00
jiaxingli
c1e30cff2c
feat(numa): bind numa if possible ( #320 )
* feat:add numa
* feat:add bind numa
* feat:add bind numa
* feat:add bind numa
* feat: bind numa
* feat: bind numa
* feat: add numa
* feat:add numa
* feat:add numa
* try_bind_numa should not raise exception
---------
Co-authored-by: 877825076@qq.com <877825076@qq.com>
2023-09-25 19:34:52 +08:00
jiaopenglong
9284303a6d
doc(monitor): add light monitoring doc ( #352 )
* add light monitoring doc
* update light monitoring doc
* update light monitoring doc
* update light monitoring doc
* update light monitoring doc continue
* update light monitoring doc continue
* update monitor config doc
* update monitor config doc continue
* update monitor config doc continue
2023-09-25 19:28:09 +08:00
jiaopenglong
847cc819dd
fix(monitor): add volc and aliyun jobid ( #338 )
* add volc and aliyun jobid
* rm workspaceid
2023-09-25 17:58:32 +08:00
jiaopenglong
064965527b
fix(config): monitor config key error when args_check is False ( #362 )
* add monitor switch
* add switch to light monitor
* fix alert_address is empty
* fix light monitor heartbeat
* init light_monitor on rank_log only
* add comments to the monitoring config
* optimize config
* fix monitor config key error when args_check is False
2023-09-25 17:30:36 +08:00
zaglc
6b7ca1c6b3
fix load ckpt bug2
2023-09-25 16:11:50 +08:00
zaglc
5b62a3957a
fix load ckpt bug
2023-09-25 16:08:40 +08:00
Guoteng
26a7397752
fix(storage): fix try_get_storage_backend ( #359 )
* fix(storage): fix try_get_storage_backend
* fix typo and print infos only in log rank
* fix typo and print infos only in log rank
---------
Co-authored-by: gaoyang07 <Gary1546308416AL@gmail.com>
2023-09-25 15:16:25 +08:00
jiaxingli
f5337f6e02
Feat(PythonGC): Do garbage collection manually ( #326 )
* feat:add gc control
* feat:add gc control
* feat:add gc control
* feat:add gc
* re-lint
2023-09-22 13:52:25 +08:00
huangting4201
3b0eff0c8a
fix(model/embedding.py): ci lint check error ( #345 )
* fix(ci): fix ci lint error
* fix(ci): fix ci lint error
2023-09-21 14:46:22 +08:00
YWMditto
8464425a7b
feat(model): add DynamicNTKScalingRotaryEmbedding ( #339 )
* add dynamic ntk rope
* update dynamic ntk rope
* fix lint check
* fix lint check
* add more desc
---------
Co-authored-by: YWMditto <862779238@qq.com>
2023-09-20 23:31:47 +08:00
ytxiong
6a5915bf0d
feat(linear): optimize mlp by using jit ( #321 )
* fuse silu op
* refactor code
* fix lint
* fix lint
2023-09-19 14:57:43 +08:00
huangting4201
025ca55dfe
test(tests/test_training): add training e2e tests for loss spike and loss accuracy ( #304 )
* tests(test_training): add test case for loss accuracy
* tests(test_training): update test cases
* ci(.github/workflows/e2e_test.yaml): remove pull submodule
* ci(.github/workflows/e2e_test.yaml): update ci env and remove useless env var
* test(tests/test_training): add 16 GPUs test cases
* test(tests/test_training): fix training_16GPU_8DP2PP test case error
* test(tests/test_training): add new case for interleaved pp
* test(tests/test_training): remove redundant code
* test(tests/test_training): update ci job timeout minutes to 30m
* feat(initialize/launch.py): check num_chunks and interleaved_overlap
---------
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-19 14:55:40 +08:00
jiaxingli
794a484666
feat: more tgs ( #310 )
* feat:more tgs
* feat:add more tgs
* feat:more tgs
2023-09-15 18:56:11 +08:00
huangting4201
607f691e16
Merge main to develop ( #312 )
* fix(chat): fix stream_chat to return generator (#123 )
* fix(configs/7B_sft.py): model dtype float16 to bfloat16 (#302 )
* fix(convert2hf.py): fix the rotary_emb.inv_freq KeyError (#299 )
* docs(doc/code-docs): update quickstart usage (#301 )
* docs(usage.md): update usage.md
* docs(doc/code-docs): update en usage
---------
Co-authored-by: huangting4201 <huangting3@sensetime.com>
* docs(doc/code-docs): update en usage
---------
Co-authored-by: yingtongxiong <974106207@qq.com>
Co-authored-by: zhjunqin <zhjunqin@users.noreply.github.com>
Co-authored-by: jiangtann <39088437+jiangtann@users.noreply.github.com>
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-15 16:19:26 +08:00
zaglc
9b1b0c5c20
fix format problem
2023-09-14 17:03:36 +08:00
Guoteng
85e39aae67
fix(ckpt): fix snapshot none load error and remove file lock ( #298 )
2023-09-08 20:41:53 +08:00
Sun Peng
1ee31ff9b1
feat: add runtime diag ( #297 )
* feat: add runtime diag
* add diag_outlier_ratio
---------
Co-authored-by: yingtongxiong <974106207@qq.com>
2023-09-08 17:56:46 +08:00
zaglc
aedd88e5a7
fix(fsdp): fix conflicts
2023-09-08 16:31:38 +08:00
Sun Peng
0423426c4c
fix: fix the bug to do bcast in a stream ( #294 )
* fix: fix the bug to do bcast in a stream
* fix: fix the bug to do bcast in a stream
---------
Co-authored-by: yingtongxiong <974106207@qq.com>
2023-09-08 13:53:40 +08:00
zaglc
1d60f90ed9
fix failure in lint-check
2023-09-08 13:19:42 +08:00
yingtongxiong
0c276d8de2
Merge remote-tracking branch 'origin/main' into develop
2023-09-08 10:19:54 +08:00
zaglc
31d2a2916d
feat(fsdp): add mix-precision 2
2023-09-08 10:16:58 +08:00
Sun Peng
b7a8af8133
Feat/sync grad use async op ( #277 )
* fix/broadcast should not be in comm stream
* fix/broadcast should not be in comm stream
* feat: support allreduce grad using async op
* fix bug of async op
* use reduceop.avg
* use torch flat
* delete unused stream
* delete unused stream
* feat: overlap allreduce with memcpy
---------
Co-authored-by: yingtongxiong <974106207@qq.com>
2023-09-07 21:51:30 +08:00
jiaopenglong
7c99e01ca7
fix(monitor): add alert switch and refactor monitor config ( #285 )
* add monitor switch
* add switch to light monitor
* fix alert_address is empty
* fix light monitor heartbeat
* init light_monitor on rank_log only
* add comments to the monitoring config
* optimize config
2023-09-07 21:49:05 +08:00
Guoteng
37b8c6684e
feat(utils): add timeout wrapper for key functions ( #286 )
2023-09-07 17:26:17 +08:00
zaglc
420e883d76
fix(fsdp): add mix-precision training
2023-09-07 17:11:01 +08:00
Season
b6d909d43e
docs(*): add documentation and reST files for readthedocs ( #272 )
* add initial reST files for readthedocs
* fix typos
* docs refine and minor fix
* add references for parallel training section
* fix reST format
* fix reST format
* fix reST format
* add comments for trainer API
* add link to step-by-step quickstart guide
* docs(code-docs/source/parallel.rst): add paper link url
* docs(code-docs/source/parallel.rst): add paper link url
* use MyST to render markdown
* docs(code-docs/source/initialize.rst): update model init
* add requirements for myst-parser
* reuse install and usage markdown
* docs(code-docs/source/index.rst): add example and q&a
* docs(doc/code-docs/*): docs refine
* docs(code-docs/source/parallel.rst): update docs for zero config
* docs(code-docs/source/example.rst): fix typos for example.rst
* docs(code-docs/source/example.rst): refine docs
* docs(code-docs/source/example): update example
* docs(code-docs/source/example): delete useless example
* docs(code-docs/source/*): fix image display issue
* docs(code-docs/source/parallel.rst): add docs for communication overlap
* docs(code-docs/source/conf.py): update conf.py
* docs(code-docs/source/example): update example 30B demo
* docs(code-docs/source/parallel.rst): update pipeline parallel
* docs(code-docs/source/parallel.rst): update pipeline parallel
* docs(code-docs/source/parallel.rst): update pipeline parallel
* docs(code-docs/source/parallel.rst): update pipeline parallel
* docs(code-docs/source/parallel.rst): update ZeRO1.5
* docs(code-docs/source/parallel.rst): update ZeRO1.5
* docs(code-docs/source): fix word spelling error
---------
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-06 15:36:03 +08:00
Wenwen Qu
7f687bf4b3
fix(core/context): use dummy mode to generate random numbers in model construction ( #266 )
* change mode to dummy in model construction and restore to data when done
* add comments
* move set_mode(.DATA) to initialize_model(.)
2023-09-06 14:34:11 +08:00
Guoteng
ff181bc5f8
fix(ckpt): fix checkpoint reload bug ( #282 )
1. fix only_load tuple convert bug.
2. fix reload_zero_fp32_buff copy bug
2023-09-06 04:05:04 +08:00
Guoteng
8acf823a04
fix(storage): fix and refactor storage api ( #281 )
2023-09-06 01:15:09 +08:00
jiaopenglong
8d8d811e10
feat(monitor): add light monitor ( #275 )
* add light monitor
* filter key of metrics dict
* test no light_monitor case
* mv init_light_monitor to initialize_distributed_env
2023-09-05 19:24:01 +08:00
ytxiong
9445faf5be
fix(model): set tensor parallel attribute for mlp ( #271 )
* set is_tensor_parallel attribute for mlp
* fix lint
2023-09-05 19:03:02 +08:00
yingtongxiong
0fb8d4141f
Merge remote-tracking branch 'origin/main' into develop
2023-09-05 17:50:35 +08:00
Sun Peng
7f61505fa0
fix/broadcast should not be in comm stream ( #276 )
* fix/broadcast should not be in comm stream
* fix/broadcast should not be in comm stream
---------
Co-authored-by: yingtongxiong <974106207@qq.com>
2023-09-05 17:47:50 +08:00