Commit Graph

246 Commits (eba2b859fceeb225b241790094aaba1f7ed8211d)

Author SHA1 Message Date
Pryest b3645b0244
fix(model): fix errant inference_forward (#396)
* Fix errant inference_forward.

* Recover use_dynamic_ntk_rope.

* Fix bugs.

* Fit to flash attention 1.0

* Fit to flash attention 1.0

* Fit to flash attention 1.0.5.

* Fit to flash attention 1.0.5.
2023-10-09 08:29:11 -05:00
zaglc a075153adf
feat(train): add fsdp training option (#293)
* feat(fsdp): add training option for fsdp

* fix(fsdp): add mix-precision training

* fix failure in lint-check

* fix format problem

* restore 7B_sft

* fix load ckpt bug

* fix load ckpt bug2

* feat(solver/optimizer): add new file fsdp_optimizer.py

* fix(train.py): fix ci lint error

* fix(fsdp_optimizer.py): wait grad async

* fix bug for loading ckpts when zero1 < dp_size

* fix(context/parallel_context.py): only log warning for fsdp

* change ckpt name

* fix(model/modeling_internlm.py): fix checkpoint=False runtime error

* more wrap

* add support for FSDP with tp

* modify args_sanity_check for fsdp with pipeline and fsdp with moe

* fix(internlm/utils/parallel.py): fix circular import

* fix(internlm/train/training_internlm.py): remove set IS_TENSOR_PARALLEL attr

* fix(internlm/train/training_internlm.py): update wrap class and fix lint error

* fix(internlm/model): reset dropout_selective_checkpoint=True

* feat(configs/7B_sft.py): move fsdp config to parallel zero1

* feat(configs/7B_sft.py): adapt to old version config

---------

Co-authored-by: huangting4201 <1538303371@qq.com>
2023-10-09 18:59:31 +08:00
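
A minimal sketch of what an FSDP training option with bf16 mixed precision (in the spirit of #293) could look like. The wrapping policy and the transformer block class passed in are illustrative assumptions, not the exact InternLM API:

```python
# Hypothetical sketch: enable PyTorch FSDP with bf16 mixed precision.
import functools

import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy


def wrap_with_fsdp(model, transformer_block_cls):
    """Shard parameters, gradients and optimizer state across data-parallel ranks."""
    mp_policy = MixedPrecision(
        param_dtype=torch.bfloat16,   # compute / communication dtype
        reduce_dtype=torch.bfloat16,  # gradient reduction dtype
        buffer_dtype=torch.bfloat16,
    )
    wrap_policy = functools.partial(
        transformer_auto_wrap_policy,
        transformer_layer_cls={transformer_block_cls},  # wrap each transformer block
    )
    return FSDP(model, auto_wrap_policy=wrap_policy, mixed_precision=mp_policy)
```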
Wenwen Qu 582ee000bd
feat(moe):support zero for expert local dp (#404)
* support zero for expert local dp

* fix the above code:
    * treat optim.zero_world_size and optim.zero_local_rank as lists in model_checkpoint.py and test_model_checkpoint.py
    * add overlap and zero checks for moe in args_sanity_check()
2023-10-09 17:45:26 +08:00
Wenwen Qu 916647c0a1
fix(pipeline): fix bugs for pipeline when enable mixed precision (#382)
* fix bugs for pipeline

* restore logic for empty fp32 group

* move optim.dtype to each param group
2023-10-09 14:01:15 +08:00
ytxiong 9aef11e89c
make seeds in different tensor parallel ranks different (#405) 2023-10-09 13:53:52 +08:00
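
The change above makes random seeds differ across tensor parallel ranks (so, for example, dropout masks are not identical on every rank). A minimal sketch of the idea, assuming the base seed and tensor-parallel rank are already known; the helper name is hypothetical:

```python
import torch


def set_seed_per_tp_rank(base_seed: int, tp_rank: int) -> None:
    """Offset the seed by the tensor-parallel rank so each rank draws different randomness."""
    seed = base_seed + tp_rank
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
```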
Guoteng 8b65e2e3c4
fix(doc): fix huggingface url (#392) 2023-10-07 02:03:44 -05:00
Guoteng 4f9e8cd70d
Doc(config): add auto_resume annotation into example config (#380)
* doc(config): add auto_resume related comments

* update auto_resume 7B_sft.py

* Update 7B_sft.py

* Update 7B_sft.py
2023-09-28 13:39:02 +08:00
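
For reference, an illustrative (not verbatim) fragment of how an auto_resume switch might be annotated in a config such as 7B_sft.py; keys other than auto_resume are assumptions:

```python
# Illustrative config fragment, not the actual 7B_sft.py contents.
ckpt = dict(
    save_ckpt_folder="local:llm_ckpts/",  # where periodic checkpoints are written
    # When True, training automatically resumes from the latest checkpoint found in
    # save_ckpt_folder after a restart; set False to always load from a fixed path.
    auto_resume=True,
)
```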
Wenwen Qu 375240e039
feat(moe): add local data parallel support for experts (#376)
* add local data parallel support for experts

* fix model checkpoint for local dp mode of expert

* do not set ep size from config
2023-09-28 13:38:02 +08:00
Ryan (张磊) c8242572f2
fix the moe loss being None for panel_metrics (#379) 2023-09-27 20:29:50 +08:00
ytxiong e34e7307c9
docs(doc): add tf32 docs (#374)
* add zh docs for tf32

* add english docs

* add docs for tf32 in mix precision

* add english doc

* modify the gitignore
2023-09-27 15:55:44 +08:00
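
The TF32 behaviour documented in #374 comes down to PyTorch's standard switches; a minimal example:

```python
import torch

# Allow TensorFloat-32 for matmuls and cuDNN convolutions on Ampere+ GPUs.
# TF32 keeps fp32 dynamic range but rounds the mantissa to 10 bits, trading a
# little precision for substantially higher throughput.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```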
Wenwen Qu 136d55ec30
feat(moe): add moe module (#182)
* feat(XXX): add moe

* reformat code

* modified:   .pre-commit-config.yaml
	modified:   internlm/model/moe.py
	modified:   internlm/model/modeling_internlm.py

* modified:   internlm/model/modeling_internlm.py

* modified:   internlm/core/context/process_group_initializer.py
	modified:   internlm/core/scheduler/no_pipeline_scheduler.py
	modified:   internlm/solver/optimizer/hybrid_zero_optim.py

* modified:   internlm/model/moe.py
	modified:   internlm/moe/sharded_moe.py
	modified:   internlm/utils/parallel.py

* rollback .pre-commit-config.yaml

* add residual and other moe features

* modify grad clipping due to moe

* add param arguments

* reformat code

* add expert data support and fix bugs

* Update .pre-commit-config.yaml

* modified:   internlm/model/modeling_internlm.py

* add no-interleaved & no-overlapped moe pp support

* support zero_overlap_communication

* avoid moe parameter partition in zero optimizer

* fix the moe_loss_coeff bug

* support interleaved pp

* fix moe bugs in zero optimizer

* fix more moe bugs in zero optimizer

* fix moe bugs in zero optimizer

* add logger for moe_loss

* fix bugs with merge

* fix the pp moe bugs

* fix bug on logger

* update moe training cfg on real-dataset

* refactor code

* refactor code

* fix bugs with compute moe norm

* optimize code with moe norm computing

* fix the bug that missing scale the latent moe loss

* refactor code

* fix moe loss logger for the interleaved pp

* change the scale position for latent moe_loss

* Update 7B_sft.py

* add support for moe checkpoint

* add comments for moe

* reformat code

* fix bugs

* fix bugs

* Update .pre-commit-config.yaml

* remove moe_loss_coeff parameter passing

* fix group_norms computing in hybrid_zero_optim

* use dummy mode to generate random numbers in model construction

* replace flashatten experts by feedforward experts

* fix bugs with _compute_norm_with_moe_group

* merge upstream/develop into feature_add_moe

* merge upstream/develop into feature_add_moe

* change float16 to bfloat16

* fix interface for dense pipeline

* refactor split_moe_group code

* fix precision inconsistency

* refactor code

* Update 7B_sft.py

* refactor code

* refactor code

* refactor code

* refactor code

* refactor code for split group

* refactor code for log

* fix logger for moe

* refactor code for split param group

* fix the moe_loss for ci and val

* refactor

* fix bugs with split group

* fix bugs in save/load moe checkpoint

* add moe module to `__init__.py`

* add compatible code for old version

* update moe config file

* modify moe config file

* fix merge bugs

* update moe config file

* change condition for compatibility

---------

Co-authored-by: zhanglei <ryancheung98@163.com>
Co-authored-by: Ryan (张磊) <leizhang.real@gmail.com>
2023-09-27 15:54:53 +08:00
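
A condensed sketch of the kind of top-k gated MoE layer introduced by #182: a learned gate routes each token to an expert and an auxiliary load-balancing loss keeps expert usage even. Class and argument names are illustrative, not InternLM's actual moe.py interface:

```python
# Minimal top-1 gated mixture-of-experts layer, for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleMoE(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.SiLU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, hidden)
        probs = F.softmax(self.gate(x), dim=-1)
        top_prob, top_idx = probs.max(dim=-1)  # top-1 routing
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = expert(x[mask]) * top_prob[mask].unsqueeze(-1)
        # Auxiliary load-balancing ("moe") loss: encourages uniform expert usage.
        density = probs.mean(dim=0)
        usage = F.one_hot(top_idx, len(self.experts)).float().mean(dim=0)
        aux_loss = (density * usage).sum() * len(self.experts)
        return out, aux_loss
```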
Season 07038d1224
docs(doc/code-docs): update document image for InternLM parallel architecture (#373)
* docs(doc/imgs): update image for internlm parallel architecture

* docs(doc/code-docs): remove fuzzy translation in sphinx files

* update english translation in readthedocs
2023-09-27 11:50:22 +08:00
Wenwen Qu 655e9dae40
Feat(norm)/support fused precision (#319)
* add fused precision support for norm

* refactor code

* refactor code

* change the granularity of hook

* fix bugs if self.model is ModuleList

* add dtype condition for post hook

* refactor code for split group

* refactor code for pre/post hook

* refactor code for split group

* remove fp32 hook for norm

* unit tests for fused precision

* add doc for fused precision

* add doc for En. version

* reformat docs

* Update mixed_precision.rst

* Update mixed_precision.po

* update mixed_precision.po
2023-09-26 20:39:55 +08:00
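
The fused-precision support in #319 is built around forward pre/post hooks so that norm layers compute in fp32 while the rest of the model stays in bf16. A hedged approximation of that hook pattern; the hook granularity and dtype handling are assumptions, not the exact code:

```python
import torch
import torch.nn as nn


def _inputs_to_fp32(module, inputs):
    # Cast floating-point activations entering the norm to fp32.
    return tuple(t.float() if torch.is_tensor(t) and t.is_floating_point() else t for t in inputs)


def _output_to_bf16(module, inputs, output):
    # Cast the norm output back so downstream layers keep computing in bf16.
    return output.to(torch.bfloat16) if torch.is_tensor(output) else output


def enable_fp32_norm(model: nn.Module) -> None:
    norm_types = (nn.LayerNorm,)  # the real code would also cover RMSNorm variants
    for sub in model.modules():
        if isinstance(sub, norm_types):
            sub.float()  # keep the norm's own parameters in fp32
            sub.register_forward_pre_hook(_inputs_to_fp32)
            sub.register_forward_hook(_output_to_bf16)
```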
YWMditto 96b20cd43f
doc(usage): add dynamic ntk into doc (#367)
* add long text generation in doc/usage.md

* add long text generation in doc/usage.md

* add long text generation in doc/usage.md

---------

Co-authored-by: YWMditto <862779238@qq.com>
2023-09-26 16:58:46 +08:00
jiaxingli c1e30cff2c
feat(numa): bind numa if possible (#320)
* feat:add numa

* feat:add bind numa

* feat:add bind numa

* feat:add bind numa

* feat: bind numa

* feat: bind numa

* feat: add numa

* feat:add numa

* feat:add numa

* try_bind_numa should not raise exception

---------

Co-authored-by: 877825076@qq.com <877825076@qq.com>
2023-09-25 19:34:52 +08:00
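
The NUMA binding in #320 is best-effort ("try_bind_numa should not raise exception"). A rough sketch of that idea using CPU affinity, where the node-to-CPU mapping and local-rank lookup are assumptions; the real code would discover the topology, e.g. via libnuma:

```python
import os


def try_bind_numa(local_rank: int, numa_nodes: list[list[int]]) -> None:
    """Best-effort: pin this rank's process to the CPUs of one NUMA node.

    numa_nodes is a hypothetical per-node list of CPU ids. Never raises.
    """
    try:
        node = local_rank % len(numa_nodes)
        os.sched_setaffinity(0, numa_nodes[node])  # 0 = current process
    except Exception as exc:  # binding is an optimization, not a requirement
        print(f"NUMA binding skipped: {exc}")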
jiaopenglong 9284303a6d
doc(monitor): add light monitoring doc (#352)
* add light monitoring doc

* update light monitoring doc

* update light monitoring doc

* update light monitoring doc

* update light monitoring doc continue

* update light monitoring doc continue

* update monitor config doc

* update monitor config doc continue

* update monitor config doc continue
2023-09-25 19:28:09 +08:00
jiaopenglong 847cc819dd
fix(monitor): add volc and aliyun jobid (#338)
* add volc and aliyun jobid

* rm workspaceid
2023-09-25 17:58:32 +08:00
jiaopenglong 064965527b
fix(config): monitor config key error when args_check is False (#362)
* add monitor switch

* add switch to light monitor

* fix alert_address is empty

* fix light monitor heartbeat

* init light_monitor on rank_log only

* add comments to the monitoring config

* optimize config

* fix monitor config key error when args_check is False
2023-09-25 17:30:36 +08:00
Guoteng 26a7397752
fix(storage): fix try_get_storage_backend (#359)
* fix(storage): fix try_get_storage_backend

* fix typo and print infos only in log rank

* fix typo and print infos only in log rank

---------

Co-authored-by: gaoyang07 <Gary1546308416AL@gmail.com>
2023-09-25 15:16:25 +08:00
huangting4201 a86c4bbbfd Merge branch 'main' into develop 2023-09-22 19:24:03 +08:00
Guoteng d1e52f0c03
feat(doc/code-docs): add checkpoint save/load usage doc (#311)
* feat(doc): add checkpoint doc

* fix checkpoint doc

* fix comment

* fix(doc/code-docs): remove fuzzy

* fix(doc/code-docs): fix some errors

* fix(doc/code-docs): minor fix

---------

Co-authored-by: li126com <li126com2@126.com>
Co-authored-by: huangting4201 <1538303371@qq.com>
2023-09-22 18:45:33 +08:00
huangting4201 1ed36754df
feat(.github/workflows): update ci e2e tests and add ci unit tests (#324)
* feat(.github/workflows/e2e_test.yaml): update e2e yaml

* feat(.github/workflows/e2e_test.yaml): update e2e yaml

* test e2e

* test e2e

* test e2e

* test e2e

* test e2e

* fix(ci): test ci

* fix(ci): test ci

* fix(ci): test ci

* fix(ci): test ci

* fix(ci): test ci

* fix(ci): add weekly tests

---------

Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-22 14:07:14 +08:00
jiaxingli f5337f6e02
Feat(PythonGC): Do garbage collection manually (#326)
* feat:add gc control

* feat:add gc control

* feat:add gc control

* feat:add gc

* re-lint
2023-09-22 13:52:25 +08:00
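
Manual garbage collection (#326) typically means disabling Python's automatic collector during the training loop and collecting at controlled points, so GC pauses do not land mid-step on some ranks. A minimal sketch, with the interval as an assumed value:

```python
import gc

GC_COLLECT_INTERVAL = 100  # assumed value; steps between explicit collections


def train(steps: int) -> None:
    gc.disable()  # stop automatic, unpredictable collections during training
    try:
        for step in range(steps):
            # ... forward / backward / optimizer.step() ...
            if step % GC_COLLECT_INTERVAL == 0:
                gc.collect()  # collect at a synchronized, predictable point
    finally:
        gc.enable()
```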
huangting4201 3b0eff0c8a
fix(model/embedding.py): ci lint check error (#345)
* fix(ci): fix ci lint error

* fix(ci): fix ci lint error
2023-09-21 14:46:22 +08:00
YWMditto 8464425a7b
feat(model): add DynamicNTKScalingRotaryEmbedding (#339)
* add dynamic ntk rope

* update dynamic ntk rope

* fix lint check

* fix lint check

* add more desc

---------

Co-authored-by: YWMditto <862779238@qq.com>
2023-09-20 23:31:47 +08:00
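
Dynamic NTK scaling (#339) rescales the RoPE base when the sequence grows beyond the training length, so long prompts can be handled without fine-tuning. A hedged sketch of the frequency computation; the class added in #339 may differ in details:

```python
import torch


def dynamic_ntk_inv_freq(dim: int, seq_len: int, max_pos: int = 2048,
                         base: float = 10000.0, scaling_factor: float = 1.0) -> torch.Tensor:
    """Recompute RoPE inverse frequencies with an NTK-aware base when seq_len > max_pos."""
    if seq_len > max_pos:
        # Stretch the base so the longest wavelength covers the new sequence length.
        base = base * (scaling_factor * seq_len / max_pos - (scaling_factor - 1)) ** (dim / (dim - 2))
    return 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
```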
Shuo Zhang e611817442
fix(doc): add 20B release info to readme (#330)
* fix(eval): StreamingDataset does not have an __len__ method.

* doc(readme): update readme

* update readme

* update readme

* update readme

* update readme

* update readme

* update readme
2023-09-20 16:46:45 +08:00
Shuo Zhang 5e5d160685
fix(readme): fix readme about the 20B release (#329)
* fix(eval): StreamingDataset does not have an __len__ method.

* doc(readme): update readme

* update readme

* update readme

* update readme

* update readme

* update readme
2023-09-20 16:26:43 +08:00
Shuo Zhang 2a09ebd5c1
doc(readme): update readme, add 20B release info (#328)
* fix(eval): StreamingDataset does not have an __len__ method.

* doc(readme): update readme

* update readme
2023-09-20 16:04:43 +08:00
huangting4201 67eda4cbe1 fix(.github/workflows/e2e_test.yaml): update ci runner name 2023-09-19 18:13:20 +08:00
yingtongxiong 30b21075e8 merge main 2023-09-19 18:04:47 +08:00
ytxiong 6a5915bf0d
feat(linear): optimize mlp by using jit (#321)
* fuse silu op

* refactor code

* fix lint

* fix lint
2023-09-19 14:57:43 +08:00
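
The MLP optimization in #321 fuses the SiLU gating elementwise ops with TorchScript. A minimal version of that pattern for a SwiGLU-style MLP (function name is illustrative):

```python
import torch
import torch.nn.functional as F


@torch.jit.script
def fused_silu_mul(w1_out: torch.Tensor, w3_out: torch.Tensor) -> torch.Tensor:
    # SwiGLU gate: silu(x @ W1) * (x @ W3), with the elementwise ops fused by the JIT.
    return F.silu(w1_out) * w3_out
```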
huangting4201 025ca55dfe
test(tests/test_training): add training e2e tests for loss spike and loss accuracy (#304)
* tests(test_training): add test case for loss accuracy

* tests(test_training): update test cases

* ci(.github/workflows/e2e_test.yaml): remove pull submodule

* ci(.github/workflows/e2e_test.yaml): update ci env and remove useless env var

* test(tests/test_training): add 16 GPUs test cases

* test(tests/test_training): fix training_16GPU_8DP2PP test case error

* test(tests/test_training): add new case for interleaved pp

* test(tests/test_training): remove redundant code

* test(tests/test_training): update ci job timeout minutes to 30m

* feat(initialize/launch.py): check num_chunks and interleaved_overlap

---------

Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-19 14:55:40 +08:00
kkscilife bfefc4ea3c
test(ci_scripts): move ci env (#317)
* change partition and runner label

* change rm action to mv

* use spot

* use rsync to move test files

* remove *

* remove *

* change into llm_s partition

---------

Co-authored-by: wangmengke <wangmengke@pjlab.org.cn>
2023-09-19 14:52:32 +08:00
x54-729 b9824fab89
fix(tools): fix yield bug in stream_chat (#315) 2023-09-19 14:18:02 +08:00
x54-729 cd6426a249
feat(tools): support openai api (#313)
* fix(chat): fix stream_chat to return generator (#123)

* fix(configs/7B_sft.py): model dtype float16 to bfloat16 (#302)

* fix(convert2hf.py): fix the rotary_emb.inv_freq KeyError (#299)

* support openai api to deploy internlm

* update README with information on openai_api.py

* change example in README_EN.md to English

* delete unnecessary print; fix model card typo; fix chat epoch

---------

Co-authored-by: yingtongxiong <974106207@qq.com>
Co-authored-by: zhjunqin <zhjunqin@users.noreply.github.com>
Co-authored-by: huangting4201 <1538303371@qq.com>
Co-authored-by: jiangtann <39088437+jiangtann@users.noreply.github.com>
2023-09-19 13:49:48 +08:00
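
The OpenAI-compatible deployment added in #313 lets standard OpenAI clients talk to a locally served InternLM. A client-side usage sketch against such a server; the URL, model name, and the pre-1.0 openai client style are assumptions:

```python
# Point the (openai<1.0) client at a locally running openai_api.py-style server.
import openai

openai.api_base = "http://localhost:8000/v1"  # assumed local server address
openai.api_key = "none"  # local deployment; no real key is checked in this sketch

resp = openai.ChatCompletion.create(
    model="internlm-chat-7b",
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
)
print(resp.choices[0].message.content)
```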
huangting4201 2710fa7343
Merge develop to main (#314)
* feat: add unittest for model (#300)

* feat: add unittest for model

* feat:add model test

* Merge main to develop (#309)

* fix(chat): fix stream_chat to return generator (#123)

* fix(configs/7B_sft.py): model dtype float16 to bfloat16 (#302)

* fix(convert2hf.py): fix the rotary_emb.inv_freq KeyError (#299)

---------

Co-authored-by: yingtongxiong <974106207@qq.com>
Co-authored-by: zhjunqin <zhjunqin@users.noreply.github.com>
Co-authored-by: jiangtann <39088437+jiangtann@users.noreply.github.com>

* docs(doc/code-docs): add figure for training docs (#307)

* add training image for docs

* docs(doc/code-docs): add training img for en doc

* docs(doc/code-docs): fix en docs for initialize

* docs(doc/code-docs): update conf file for readthedocs

* docs(doc/code-docs): fix typos

* docs(doc/code-docs): fix typos for readthedocs

* docs(doc/code-docs): minor typo fix for readthedocs

* docs(doc/code-docs): fix readthedocs conf file

* docs(doc/code-docs): update training image

* docs(doc/code-docs): fix typos

* docs(doc/code-docs): update training image

* docs(doc/code-docs): move training image to section initialize

* docs(doc/code-docs): fix lint

* add badge about readthedocs status

* Merge main to develop (#312)

* fix(chat): fix stream_chat to return generator (#123)

* fix(configs/7B_sft.py): model dtype float16 to bfloat16 (#302)

* fix(convert2hf.py): fix the rotary_emb.inv_freq KeyError (#299)

* docs(doc/code-docs): update quickstart usage (#301)

* docs(usage.md): update usage.md

* docs(doc/code-docs): update en usage

---------

Co-authored-by: huangting4201 <huangting3@sensetime.com>

* docs(doc/code-docs): update en usage

---------

Co-authored-by: yingtongxiong <974106207@qq.com>
Co-authored-by: zhjunqin <zhjunqin@users.noreply.github.com>
Co-authored-by: jiangtann <39088437+jiangtann@users.noreply.github.com>
Co-authored-by: huangting4201 <huangting3@sensetime.com>

* feat: more tgs (#310)

* feat:more tgs

* feat:add more tgs

* feat:more tgs

* feat: add optimizer_unitest (#303)

* feat: add optimizer_unitest

* feat: add optimizer test

* feat: add optimizer test

* feat:add optimizer test

* final change

* feat:add optimizer test

* feat:add optimizer test

* feat:add optimizer test

---------

Co-authored-by: jiaxingli <43110891+li126com@users.noreply.github.com>
Co-authored-by: yingtongxiong <974106207@qq.com>
Co-authored-by: zhjunqin <zhjunqin@users.noreply.github.com>
Co-authored-by: jiangtann <39088437+jiangtann@users.noreply.github.com>
Co-authored-by: Season <caizheng@pjlab.org.cn>
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-15 19:12:38 +08:00
jiaxingli ab513e1ddd
feat: add optimizer_unitest (#303)
* feat: add optimizer_unitest

* feat: add optimizer test

* feat: add optimizer test

* feat:add optimizer test

* final change

* feat:add optimizer test

* feat:add optimizer test

* feat:add optimizer test
2023-09-15 18:56:56 +08:00
jiaxingli 794a484666
feat: more tgs (#310)
* feat:more tgs

* feat:add more tgs

* feat:more tgs
2023-09-15 18:56:11 +08:00
huangting4201 607f691e16
Merge main to develop (#312)
* fix(chat): fix stream_chat to return generator (#123)

* fix(configs/7B_sft.py): model dtype float16 to bfloat16 (#302)

* fix(convert2hf.py): fix the rotary_emb.inv_freq KeyError (#299)

* docs(doc/code-docs): update quickstart usage (#301)

* docs(usage.md): update usage.md

* docs(doc/code-docs): update en usage

---------

Co-authored-by: huangting4201 <huangting3@sensetime.com>

* docs(doc/code-docs): update en usage

---------

Co-authored-by: yingtongxiong <974106207@qq.com>
Co-authored-by: zhjunqin <zhjunqin@users.noreply.github.com>
Co-authored-by: jiangtann <39088437+jiangtann@users.noreply.github.com>
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-15 16:19:26 +08:00
huangting4201 42802a2b31
docs(doc/code-docs): update quickstart usage (#301)
* docs(usage.md): update usage.md

* docs(doc/code-docs): update en usage

---------

Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-15 15:29:58 +08:00
Season de68cc5007
docs(doc/code-docs): add figure for training docs (#307)
* add training image for docs

* docs(doc/code-docs): add training img for en doc

* docs(doc/code-docs): fix en docs for initialize

* docs(doc/code-docs): update conf file for readthedocs

* docs(doc/code-docs): fix typos

* docs(doc/code-docs): fix typos for readthedocs

* docs(doc/code-docs): minor typo fix for readthedocs

* docs(doc/code-docs): fix readthedocs conf file

* docs(doc/code-docs): update training image

* docs(doc/code-docs): fix typos

* docs(doc/code-docs): update training image

* docs(doc/code-docs): move training image to section initialize

* docs(doc/code-docs): fix lint

* add badge about readthedocs status
2023-09-15 15:22:22 +08:00
huangting4201 07fc5f674a
Merge main to develop (#309)
* fix(chat): fix stream_chat to return generator (#123)

* fix(configs/7B_sft.py): model dtype float16 to bfloat16 (#302)

* fix(convert2hf.py): fix the rotary_emb.inv_freq KeyError (#299)

---------

Co-authored-by: yingtongxiong <974106207@qq.com>
Co-authored-by: zhjunqin <zhjunqin@users.noreply.github.com>
Co-authored-by: jiangtann <39088437+jiangtann@users.noreply.github.com>
2023-09-14 16:32:15 +08:00
jiaxingli 882a07011c
feat: add unittest for model (#300)
* feat: add unittest for model

* feat:add model test
2023-09-14 13:18:34 +08:00
jiangtann 09e71cebf3
fix(convert2hf.py): fix the rotary_emb.inv_freq KeyError (#299) 2023-09-11 20:17:11 +08:00
huangting4201 e354410bd2
fix(configs/7B_sft.py): model dtype float16 to bfloat16 (#302) 2023-09-11 20:06:22 +08:00
zhjunqin 8420115b5e
fix(chat): fix stream_chat to return generator (#123) 2023-09-10 23:46:45 +08:00
yingtongxiong 2ec20707d0 Merge remote-tracking branch 'origin/develop' 2023-09-08 20:42:55 +08:00
Guoteng 85e39aae67
fix(ckpt): fix snapshot none load error and remove file lock (#298) 2023-09-08 20:41:53 +08:00
yingtongxiong 9481df976f Merge remote-tracking branch 'origin/develop' 2023-09-08 17:58:04 +08:00
Sun Peng 1ee31ff9b1
feat: add runtime diag (#297)
* feat: add runtime diag

* add diag_outlier_ratio

---------

Co-authored-by: yingtongxiong <974106207@qq.com>
2023-09-08 17:56:46 +08:00
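
The runtime diagnosis added in #297 (with its diag_outlier_ratio knob) is about flagging ranks whose step time drifts far from the group average. A hedged sketch of such a check; the function name and exact criterion are assumptions:

```python
import torch
import torch.distributed as dist


def diag_slow_rank(step_time: float, diag_outlier_ratio: float = 1.1) -> bool:
    """Return True if this rank's step time exceeds ratio * mean step time across ranks."""
    t = torch.tensor([step_time], dtype=torch.float64)
    if dist.is_initialized():
        dist.all_reduce(t, op=dist.ReduceOp.SUM)
        mean = t.item() / dist.get_world_size()
    else:
        mean = step_time
    is_outlier = step_time > diag_outlier_ratio * mean
    if is_outlier:
        rank = dist.get_rank() if dist.is_initialized() else 0
        print(f"rank {rank}: step time {step_time:.3f}s exceeds "
              f"{diag_outlier_ratio}x mean {mean:.3f}s")
    return is_outlier
```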