Wenwen Qu
3443ab1f5b
merge operand if noisy_gate_policy is not used
2023-12-01 15:05:12 +08:00
jiaxingli
eba2b859fc
feat(seed): set global seed for every model initialization (#496)
* bind seed
* bind seed
2023-11-17 14:42:50 +08:00
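Binding a global seed before each model initialization amounts to seeding every RNG source in one place. Below is a minimal sketch of the idea, assuming a rank-offset scheme; the function name `bind_global_seed` is illustrative, not the repository's actual API.

```python
import random

import numpy as np
import torch


def bind_global_seed(seed: int, rank: int = 0) -> None:
    """Seed every RNG that model initialization may touch.

    Offsetting by rank keeps per-rank randomness (e.g. dropout)
    decorrelated while weight initialization stays reproducible.
    """
    effective = seed + rank
    random.seed(effective)
    np.random.seed(effective)
    torch.manual_seed(effective)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(effective)
```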
Guoteng
0bfc86205e
feat(train): support rampup_batch_size and fix bugs (#493)
2023-11-16 19:51:01 +08:00
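Ramping up the batch size typically means growing the global batch linearly from a small starting value to the configured target over the first training steps. A sketch under that assumption; the names and the scheduling policy are illustrative.

```python
def rampup_batch_size(step: int, start: int, target: int,
                      rampup_steps: int, increment: int) -> int:
    """Global batch size at `step`, growing linearly from `start` to `target`."""
    assert increment > 0 and target >= start
    if step >= rampup_steps:
        return target
    # Spread the (target - start) / increment growth steps evenly over the ramp.
    num_increments = max((target - start) // increment, 1)
    steps_per_increment = max(rampup_steps // num_increments, 1)
    return min(start + (step // steps_per_increment) * increment, target)
```

For example, `rampup_batch_size(0, 32, 512, 1000, 32)` returns 32, then grows by 32 roughly every 66 steps until the batch size reaches 512.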
jiaopenglong
3418898cbe
fix(alert): send exception of all ranks (#491)
* catch exception of all ranks
* monitor task only if DO_ALERT is True
2023-11-10 19:04:31 +08:00
jiaopenglong
0763bf3972
init light monitoring on all ranks (#462)
2023-11-09 20:04:21 +08:00
Wenwen Qu
21624f6f81
fix(moe): remove norm&gate force sync (#448)
* add zero broadcast_sync
* delete old sync logic
* fix merged error
* refactor code
* remove some unused function (is norm/gate group)
2023-11-01 11:29:55 +08:00
Wenwen Qu
2c5395fdfd
Doc(moe): add documentation for moe training (#411)
* add doc for moe
* fix moe and zero1 check in args_sanity_check
* restore moe config file
2023-10-19 10:01:12 +08:00
Wenwen Qu
eeef07934a
fix(moe): fix moe compatibility for fsdp and memory profiling (#417)
* fix moe compatibility for fsdp and memory profiling
* update moe config
2023-10-17 14:13:48 +08:00
zaglc
a075153adf
feat(train): add fsdp training option (#293)
* feat(fsdp): add training option for fsdp
* fix(fsdp): add mix-precision training
* fix failure in lint-check
* fix format problem
* restore 7B_sft
* fix load ckpt bug
* fix load ckpt bug2
* feat(solver/optimizer): add new file fsdp_optimizer.py
* fix(train.py): fix ci lint error
* fix(fsdp_optimizer.py): wait grad async
* fix bug for loading ckpts when zero1 < dp_size
* fix(context/parallel_context.py): only log warning for fsdp
* change ckpt name
* fix(model/modeling_internlm.py): fix checkpoint=False runtime error
* more wrap
* add support for FSDP with tp
* modify args_sanity_check for fsdp with pipeline and fsdp with moe
* fix(internlm/utils/parallel.py): fix circular import
* fix(internlm/train/training_internlm.py): remove set IS_TENSOR_PARALLEL attr
* fix(internlm/train/training_internlm.py): update wrap class and fix lint error
* fix(internlm/model): reset dropout_selective_checkpoint=True
* feat(configs/7B_sft.py): move fsdp config to parallel zero1
* feat(configs/7B_sft.py): adapt to old version config
---------
Co-authored-by: huangting4201 <1538303371@qq.com>
2023-10-09 18:59:31 +08:00
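At its core, the FSDP training option wraps the model in PyTorch's FullyShardedDataParallel, with mixed precision as one of the commits adds. A minimal sketch, assuming `torch.distributed` is already initialized; the bf16 policy values are illustrative, not the repository's defaults.

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision


def wrap_with_fsdp(model: torch.nn.Module) -> FSDP:
    """Shard parameters, gradients and optimizer state across data-parallel ranks."""
    mp_policy = MixedPrecision(
        param_dtype=torch.bfloat16,   # compute in bf16
        reduce_dtype=torch.bfloat16,  # all-reduce gradients in bf16
        buffer_dtype=torch.bfloat16,
    )
    return FSDP(model, mixed_precision=mp_policy)
```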
Wenwen Qu
582ee000bd
feat(moe): support zero for expert local dp (#404)
* support zero for expert local dp
* fix the above code:
* treat optim.zero_world_size and optim.zero_local_rank as lists in model_checkpoint.py and test_model_checkpoint.py
* add overlap and zero check for moe in args_sanity_check()
2023-10-09 17:45:26 +08:00
Wenwen Qu
136d55ec30
feat(moe): add moe module (#182)
* feat(XXX): add moe
* reformat code
* modified: .pre-commit-config.yaml
modified: internlm/model/moe.py
modified: internlm/model/modeling_internlm.py
* modified: internlm/model/modeling_internlm.py
* modified: internlm/core/context/process_group_initializer.py
modified: internlm/core/scheduler/no_pipeline_scheduler.py
modified: internlm/solver/optimizer/hybrid_zero_optim.py
* modified: internlm/model/moe.py
modified: internlm/moe/sharded_moe.py
modified: internlm/utils/parallel.py
* rollback .pre-commit-config.yaml
* add residual and other moe features
* modify grad clipping due to moe
* add param arguments
* reformat code
* add expert data support and fix bugs
* Update .pre-commit-config.yaml
* modified: internlm/model/modeling_internlm.py
* add no-interleaved & no-overlapped moe pp support
* support zero_overlap_communication
* avoid moe parameter partition in zero optimizer
* fix the moe_loss_coeff bug
* support interleaved pp
* fix moe bugs in zero optimizer
* fix more moe bugs in zero optimizer
* fix moe bugs in zero optimizer
* add logger for moe_loss
* fix bugs with merge
* fix the pp moe bugs
* fix bug on logger
* update moe training cfg on real-dataset
* refactor code
* refactor code
* fix bugs with compute moe norm
* optimize code with moe norm computing
* fix the bug that missing scale the latent moe loss
* refactor code
* fix moe loss logger for the interleaved pp
* change the scale position for latent moe_loss
* Update 7B_sft.py
* add support for moe checkpoint
* add comments for moe
* reformat code
* fix bugs
* fix bugs
* Update .pre-commit-config.yaml
* remove moe_loss_coeff parameter passing
* fix group_norms computing in hybrid_zero_optim
* use dummy mode to generate random numbers in model construction
* replace flashatten experts by feedforward experts
* fix bugs with _compute_norm_with_moe_group
* merge upstream/develop into feature_add_moe
* merge upstream/develop into feature_add_moe
* change float16 to bfloat16
* fix interface for dense pipeline
* refactor split_moe_group code
* fix precision inconsistency
* refactor code
* Update 7B_sft.py
* refactor code
* refactor code
* refactor code
* refactor code
* refactor code for split group
* refactor code for log
* fix logger for moe
* refactor code for split param group
* fix the moe_loss for ci and val
* refactor
* fix bugs with split group
* fix bugs in save/load moe checkpoint
* add moe module to `__init__.py`
* add compatible code for old version
* update moe config file
* modify moe config file
* fix merge bugs
* update moe config file
* change condition for compatibility
---------
Co-authored-by: zhanglei <ryancheung98@163.com>
Co-authored-by: Ryan (张磊) <leizhang.real@gmail.com>
2023-09-27 15:54:53 +08:00
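The heart of a MoE module like this is a learned top-k gate that routes each token to a few experts and mixes their outputs by the gate weights; the commits above then layer expert parallelism, the auxiliary moe_loss, and ZeRO integration on top. A deliberately simplified single-process sketch, not the repository's sharded implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleTopKMoE(nn.Module):
    """Route each token to its top-k experts and mix their outputs."""

    def __init__(self, dim: int, num_experts: int, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim) -> gate scores: (tokens, num_experts)
        weights, idx = torch.topk(F.softmax(self.gate(x), dim=-1), self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e  # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```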
jiaxingli
c1e30cff2c
feat(numa): bind numa if possible (#320)
* feat:add numa
* feat:add bind numa
* feat:add bind numa
* feat:add bind numa
* feat: bind numa
* feat: bind numa
* feat: add numa
* feat:add numa
* feat:add numa
* try_bind_numa should not raise exception
---------
Co-authored-by: 877825076@qq.com <877825076@qq.com>
2023-09-25 19:34:52 +08:00
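As the last bullet notes, try_bind_numa must be best-effort: a failed binding should be silently skipped, never crash training. A sketch of that pattern using standard-library CPU affinity as a stand-in for real libnuma binding; the helper and its parameters are illustrative.

```python
import os


def try_bind_numa(local_rank: int, cpus_per_rank: int = 8) -> None:
    """Best-effort: pin this process to a contiguous CPU range per local rank."""
    try:
        first = local_rank * cpus_per_rank
        os.sched_setaffinity(0, set(range(first, first + cpus_per_rank)))
    except (AttributeError, OSError, ValueError):
        pass  # non-Linux platform or invalid CPU set: skip without raising
```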
jiaopenglong
064965527b
fix(config): monitor config key error when args_check is False (#362)
* add monitor switch
* add switch to light monitor
* fix alert_address is empty
* fix light monitor heartbeat
* init light_monitor on rank_log only
* add comments to the monitoring config
* optimize config
* fix monitor config key error when args_check is False
2023-09-25 17:30:36 +08:00
jiaxingli
f5337f6e02
Feat(PythonGC): Do garbage collection manually (#326)
* feat:add gc control
* feat:add gc control
* feat:add gc control
* feat:add gc
* re-lint
2023-09-22 13:52:25 +08:00
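Manual garbage collection here means disabling CPython's automatic collector and collecting only at chosen step boundaries, so GC pauses land at predictable points instead of mid-step. A minimal sketch; the interval is illustrative.

```python
import gc


def setup_manual_gc() -> None:
    gc.collect()  # start from a clean slate
    gc.disable()  # stop unpredictable automatic collections


def maybe_collect(step: int, interval: int = 100) -> None:
    # Collect at a fixed cadence so pauses land at known step boundaries.
    if step > 0 and step % interval == 0:
        gc.collect()
```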
huangting4201
025ca55dfe
test(tests/test_training): add training e2e tests for loss spike and loss accuracy (#304)
* tests(test_training): add test case for loss accuracy
* tests(test_training): update test cases
* ci(.github/workflows/e2e_test.yaml): remove pull submodule
* ci(.github/workflows/e2e_test.yaml): update ci env and remove useless env var
* test(tests/test_training): add 16 GPUs test cases
* test(tests/test_training): fix training_16GPU_8DP2PP test case error
* test(tests/test_training): add new case for interleaved pp
* test(tests/test_training): remove redundant code
* test(tests/test_training): update ci job timeout minutes to 30m
* feat(initialize/launch.py): check num_chunks and interleaved_overlap
---------
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-19 14:55:40 +08:00
Sun Peng
1ee31ff9b1
feat: add runtime diag (#297)
* feat: add runtime diag
* add diag_outlier_ratio
---------
Co-authored-by: yingtongxiong <974106207@qq.com>
2023-09-08 17:56:46 +08:00
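One plausible reading of diag_outlier_ratio is a threshold on per-rank step time relative to the mean, flagging stragglers; the sketch below illustrates that interpretation only and is not the PR's actual logic.

```python
def find_slow_ranks(step_times, outlier_ratio=1.1):
    """Indices of ranks whose step time exceeds the mean by `outlier_ratio`."""
    mean = sum(step_times) / len(step_times)
    return [rank for rank, t in enumerate(step_times) if t > mean * outlier_ratio]
```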
yingtongxiong
0c276d8de2
Merge remote-tracking branch 'origin/main' into develop
2023-09-08 10:19:54 +08:00
jiaopenglong
7c99e01ca7
fix(monitor): add alert switch and refactor monitor config (#285)
* add monitor switch
* add switch to light monitor
* fix alert_address is empty
* fix light monitor heartbeat
* init light_monitor on rank_log only
* add comments to the monitoring config
* optimize config
2023-09-07 21:49:05 +08:00
Guoteng
37b8c6684e
feat(utils): add timeout wrapper for key functions (#286)
2023-09-07 17:26:17 +08:00
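A timeout wrapper for key functions (checkpoint I/O, collective setup, and the like) can be built on a POSIX alarm. A sketch that assumes Unix and the main thread; the decorator name is illustrative.

```python
import functools
import signal


def timeout(seconds: int):
    """Raise TimeoutError if the wrapped call runs longer than `seconds`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            def _handle(signum, frame):
                raise TimeoutError(f"{func.__name__} timed out after {seconds}s")
            old_handler = signal.signal(signal.SIGALRM, _handle)
            signal.alarm(seconds)
            try:
                return func(*args, **kwargs)
            finally:
                signal.alarm(0)  # cancel the pending alarm
                signal.signal(signal.SIGALRM, old_handler)
        return wrapper
    return decorator
```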
Season
b6d909d43e
docs(*): add documentation and reST files for readthedocs (#272)
* add initial reST files for readthedocs
* fix typos
* docs refine and minor fix
* add references for parallel training section
* fix reST format
* fix reST format
* fix reST format
* add comments for trainer API
* add link to step-by-step quickstart guide
* docs(code-docs/source/parallel.rst): add paper link url
* docs(code-docs/source/parallel.rst): add paper link url
* use MyST to render markdown
* docs(code-docs/source/initialize.rst): update model init
* add requirements for myst-parser
* reuse install and usage markdown
* docs(code-docs/source/index.rst): add example and q&a
* docs(doc/code-docs/*): docs refine
* docs(code-docs/source/parallel.rst): update docs for zero config
* docs(code-docs/source/example.rst): fix typos for example.rst
* docs(code-docs/source/example.rst): refine docs
* docs(code-docs/source/example): update example
* docs(code-docs/source/example): delete useless example
* docs(code-docs/source/*): fix image display issue
* docs(code-docs/source/parallel.rst): add docs for communication overlap
* docs(code-docs/source/conf.py): update conf.py
* docs(code-docs/source/example): update example 30B demo
* docs(code-docs/source/parallel.rst): update pipeline parallel
* docs(code-docs/source/parallel.rst): update pipeline parallel
* docs(code-docs/source/parallel.rst): update pipeline parallel
* docs(code-docs/source/parallel.rst): update pipeline parallel
* docs(code-docs/source/parallel.rst): update ZeRO1.5
* docs(code-docs/source/parallel.rst): update ZeRO1.5
* docs(code-docs/source): fix word spelling error
---------
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-09-06 15:36:03 +08:00
jiaopenglong
8d8d811e10
feat(monitor): add light monitor (#275)
* add light monitor
* filter key of metrics dict
* test no light_monitor case
* mv init_light_monitor to initialize_distributed_env
2023-09-05 19:24:01 +08:00
Guoteng
f6e007f95b
feat(ckpt): fix checkpoint bugs and add feature enhancements (#259)
* fix(ckpt): ckpt bug fix and api refactor
1. fix latest ckpt query bug
2. add ckpt unit test
3. fix storage manager boto3/local client get_fns bug
4. fix only model load case zero fp32 buffer overwrite model weights bug.
5. add ckpt_type and add zero reload ci-test
* fix(ckpt): fix ckpt and trainer bug
* fix and refactor
* fix base on comment
* feat: add legacy api
2023-09-05 17:40:48 +08:00
huangting4201
54f85a6e9a
Merge develop to main (#233)
* feat(utils/writer.py): support tensorboard writer (#63 )
* feat(utils/writer.py): support tensorboard writer
* feat(utils/writer.py): add class comment
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
* [Develop] Pull Main Branch (#121 )
* fix/fix_submodule_err (#61 )
* fix/fix_submodule_err
---------
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
* fix issue templates (#65 )
* fix(tokenizer): refactor tokenizer and update usage in readme (#51 )
* update tokenizer example
* fix(readme, requirements): fix typo at Chinese readme and select a lower version of transformers (#73 )
* fix a typo in readme
* in order to find InternLMTokenizer, select a lower version of Transformers
---------
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
* [Doc] Add wechat and discord link in readme (#78 )
* Doc:add wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* [Docs]: add Japanese README (#43 )
* Add Japanese README
* Update README-ja-JP.md
replace message
* Update README-ja-JP.md
* add repetition_penalty in GenerationConfig in web_demo.py (#48 )
Co-authored-by: YWMditto <862779238@qq.com>
* use fp16 in instruction (#80 )
* [Enhancement] add more options for issue template (#77)
* [Enhancement] add more options for issue template
* update question icon
* fix link
* Use tempfile for convert2hf.py (#23 )
Fix https://github.com/InternLM/InternLM/issues/50
* delete torch_dtype of README's example code (#100 )
* set the value of repetition_penalty to 1.0 to avoid random outputs (#99 )
* Update web_demo.py (#97 )
Remove meaningless log.
* [Fix]Fix wrong string cutoff in the script for sft text tokenizing (#106 )
---------
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: Changjiang GOU <gouchangjiang@gmail.com>
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
Co-authored-by: vansin <msnode@163.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang <zhangshuolove@live.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
* feat(core/scheduler): support pipeline parallel (#98 )
* feat(utils/writer.py): support tensorboard writer
* feat(utils/writer.py): add class comment
* feat(core): support pipeline parallel
* fix(core): fix demo running error
* feat(solver/optimizer): add pp zero optimizer
* fix(solver/optimizer): fix word spelling error
* feat(core/scheduler): add new dir scheduler in core/
* fix(core): fix ci lint error
* feat(solver/optimizer): merge pp and nopp optimizer
* doc(usage.md): update usage doc
* feat(core/scheduler): support post func
* feat(core/scheduler): add dtype para in pp sche and update func get_tensor_shape
* feat(core/scheduler): add _load_micro_batch in base scheduler
* feat(core/scheduler): support optimizer overlap communication in pp scheduler
* feat(core/scheduler): delete data process func code
* feat(core/trainer): schedule pre processing for all schedule
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
* refactor(rotaryEmbedding): refactor forward (#120 )
* use fp16 in instruction (#80 )
* delete torch_dtype of README's example code (#100 )
* refactor the forward for rotary embedding
---------
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
* feat(model/metrics.py): support calculating accuracy and perplexity metrics (#91)
* feat(model/metrics.py): support calculating accuracy and perplexity metrics
* fix(model/metrics.py): fix import error
* feat(train.py): minor update
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
* fix(optimizer/util.py): change inf definition
* [Dev] Pull Main (#139 )
* fix/fix_submodule_err (#61 )
* fix/fix_submodule_err
---------
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
* fix issue templates (#65 )
* fix(tokenizer): refactor tokenizer and update usage in readme (#51 )
* update tokenizer example
* fix(readme, requirements): fix typo at Chinese readme and select a lower version of transformers (#73 )
* fix a typo in readme
* in order to find InternLMTokenizer, select a lower version of Transformers
---------
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
* [Doc] Add wechat and discord link in readme (#78 )
* Doc:add wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* [Docs]: add Japanese README (#43 )
* Add Japanese README
* Update README-ja-JP.md
replace message
* Update README-ja-JP.md
* add repetition_penalty in GenerationConfig in web_demo.py (#48 )
Co-authored-by: YWMditto <862779238@qq.com>
* use fp16 in instruction (#80 )
* [Enhancement] add more options for issue template (#77)
* [Enhancement] add more options for issue template
* update question icon
* fix link
* Use tempfile for convert2hf.py (#23 )
Fix https://github.com/InternLM/InternLM/issues/50
* delete torch_dtype of README's example code (#100 )
* set the value of repetition_penalty to 1.0 to avoid random outputs (#99 )
* Update web_demo.py (#97 )
Remove meaningless log.
* [Fix]Fix wrong string cutoff in the script for sft text tokenizing (#106 )
* docs(install.md): update dependency package transformers version to >= 4.28.0 (#124 )
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
* docs(LICENSE): add license (#125 )
* add license of colossalai and flash-attn
* fix lint
* modify the name
* fix AutoModel map in convert2hf.py (#116 )
* variables are not printed as expected (#114)
* feat(solver): fix code to adapt to torch2.0 and provide docker images (#128 )
* feat(solver): fix code to adapt to torch2.0
* docs(install.md): publish internlm environment image
* docs(install.md): update dependency packages version
* docs(install.md): update default image
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
* add demo test (#132 )
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
* fix web_demo cache accelerate (#133 )
* fix(hybrid_zero_optim.py): delete math import
* Update embedding.py
---------
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: Changjiang GOU <gouchangjiang@gmail.com>
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
Co-authored-by: vansin <msnode@163.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang <zhangshuolove@live.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: huangting4201 <1538303371@qq.com>
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
* style(solver/optimizer/utils.py): fix lint error (#147 )
Co-authored-by: huangting.p <huangting@sensetime.com>
* feat(*): support not-flash-attn for pp and no-pp (#145 )
* support not flash attention for no-pp
* support pipeline
* modify the config
* refactor the code
* refactor the code
* remove some unnecessary code
* fix(initialize/launch.py): set default value for use_flash_attn (#158 )
* add default for use_flash_attn
* fix lint
* feat(utils/logger.py): support uniscale logger (#152 )
* style(internlm): fix lint error
* feat(utils/logger.py): support uniscale logger
* fix(utils/logger.py): fix import circular error
* feat(train.py): support dashboard metric panel and fix ci train config
* fix(ci_scripts/train/slurm_train.sh): fix ci train error
* fix(ci_scripts/train/torchrun.sh): fix ci train error
* fix(ci_scripts/train): restore ci update
* fix(config.json): delete alert webhook
* feat(train.py): optimize func init logger
* feat(config.json): delete config.json
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
* feat(utils/evaluation.py): support evaluate (#154 )
* style(internlm): fix lint error
* feat(utils/logger.py): support uniscale logger
* fix(utils/logger.py): fix import circular error
* feat(train.py): support dashboard metric panel and fix ci train config
* fix(ci_scripts/train/slurm_train.sh): fix ci train error
* fix(ci_scripts/train/torchrun.sh): fix ci train error
* feat(utils/evaluation.py): support evaluate on validation dataset
* fix(utils/evaluation.py): fix demo error
* fix(ci_scripts/train/ci_7B_sft.py): fix ci train error
* feat(initialize/launch.py): set default value for valid_bsz and valid_every
* fix(ci_scripts/train): restore ci update
* docs(configs/7B_sft.py): update comment for config
* fix(config.json): delete config.json
* fix evaluation bug in scheduler when use_flash_attn=False
* feat(scheduler/no_pipeline_scheduler.py): support micro_bsz>1 in no pp
* modify the judgement in pp and no-pp scheduler
* modify the data_process_func in evaluation
* fix bugs when use_flash_attn=False
* rename symbol
* feat(configs/7B_sft.py): change para valid_bsz to valid_micro_num
* feat(scheduler/no_pipeline_scheduler.py): update para set _grad_accum_batch_size
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
Co-authored-by: yingtongxiong <974106207@qq.com>
* feat(*): support no apex (#166 )
* support no-apex
* add default for use_apex
* fix lint
* modify the RMSNormTorch
* remove some comments
* remove use_apex parameter
* remove some unnecessary code
* refactor(*): refactor the code with no-apex (#170 )
* support no-apex
* add default for use_apex
* fix lint
* modify the RMSNormTorch
* remove some comments
* remove use_apex parameter
* remove some unnecessary code
* optimize the code including import
* remove the import RMSNorm
* remove warnings
* refactor(scheduler): rewrite pipeline scheduler (#138 )
* refactor(scheduler): rewrite pipeline scheduler
* fix(*): fix pipeline scheduler bugs
* fix(*): fix merge bug
* feat(*): update codes with todo tag
* feat(*): add comments
* feat(internlm/core/scheduler): update recv_prev/next logic
* feat(utils/evaluation.py): update sche metric hook for valid
---------
Co-authored-by: huangting.p <huangting@sensetime.com>
* feat(*): support fp32 training (#155 )
* support float32 training
* fix lint
* add adaptation in model/utils.py
* remove some unnecessary code
* fix lint
* feat(optim): add support for fp32 zero
* Revert "Merge pull request #2 from SolenoidWGT/fp32_zero"
This reverts commit 53fc50b0e5
, reversing
changes made to 40f24d0a73
.
revert commit
* merge develop
* Update utils.py
* support fp32 in zero optimizer
* modify the dtype
---------
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
* feat(*): support sequence_parallel (#180 )
* support sequence_parallel for no pipeline
* sequence_parallel does not support no-flash-attn
* support sequence parallel for pipeline
* add memory profiler
* Update 13B.py
* add memory profiler
* fix evaluation bug
* remove some unnecessary code
* remove some unnecessary code
* Update parallel_context.py
* modify the config
* remove memory profiler
* modify the config
* support selective dropout
* feat(monitor): support monitor and alert (#175 )
* feat(monitor): support monitor and alert
* feat(monitor.py): fix demo error
* feat(monitor.py): move cmd monitor args to config file
* feat(hybrid_zero_optim.py): if overflow occurs send alert msg
* feat(monitor.py): remove alert msg filter
* feat(monitor.py): optimize class MonitorTracker
* feat(monitor.py): optimize code
* feat(monitor.py): optimize code
* feat(monitor.py): optimize code
* feat(monitor.py): optimize code
* feat(train.py): update print to log
* style(ci): fix lint error
* fix(utils/evaluation.py): remove useless code
* fix(model/modeling_internlm.py): fix lint error
---------
Co-authored-by: huangting4201 <huangting3@sensetime.com>
* feat(ckpt): add async upload and ckpt snapshot (#161 )
* use fp16 in instruction (#80 )
* delete torch_dtype of README's example code (#100 )
* feat(ckpt): support async ckpt upload and ckpt snapshot
---------
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
* feat(ckpt): add auto ckpt load and signal quit (#189)
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
* Revert "feat(ckpt): add auto ckpt load and signal quit (#189)" (#192)
This reverts commit a45a91bb84.
* refactor(solver/optimizer): improve optimizer memory (#193 )
* refactor(solver/optimizer): improve optimizer memory
* feat(data): remove useless dataset type ids map
* Feat/optimizer (#194 )
* feat(optimizer.py): reduce memory footprint and avoid _check_overflow call
* feat(optimizer.py): reduce memory footprint and avoid _check_overflow call
* feat(optimizer.py): overlap compute norm with allreduce
* update var and function name
* update function compute norm (#197 )
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
* feat(optimizer/hybrid_zero_optim.py): overlap gradients last bucket allreduce and compute norm (#196 )
* support gradients allreduce and compute norm overlap
* fix para set error
* remove timer cal_norm for testing
* feat(optimizer/hybrid_zero_optim.py): support group global norm
* format(lint): fix lint error
* feat(optimizer/store.py): update code based on comment
---------
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: huangting4201 <1538303371@qq.com>
* fix(ci): fix ci train error (#199 )
* fix/ci train error (#200 )
* fix(ci): fix ci train error
* fix(ci): fix ci train error
* fix(ci): fix ci train error
* fix(train.py): fix scheduler metric hook skip error (#204 )
* Merge main to develop (#203 )
* fix/fix_submodule_err (#61 )
* fix/fix_submodule_err
---------
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
* fix issue templates (#65 )
* fix(tokenizer): refactor tokenizer and update usage in readme (#51 )
* update tokenizer example
* fix(readme, requirements): fix typo at Chinese readme and select a lower version of transformers (#73 )
* fix a typo in readme
* in order to find InternLMTokenizer, select a lower version of Transformers
---------
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
* [Doc] Add wechat and discord link in readme (#78 )
* Doc:add wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* Doc:update wechat and discord link
* [Docs]: add Japanese README (#43 )
* Add Japanese README
* Update README-ja-JP.md
replace message
* Update README-ja-JP.md
* add repetition_penalty in GenerationConfig in web_demo.py (#48 )
Co-authored-by: YWMditto <862779238@qq.com>
* use fp16 in instruction (#80 )
* [Enhancement] add more options for issue template (#77)
* [Enhancement] add more options for issue template
* update question icon
* fix link
* Use tempfile for convert2hf.py (#23 )
Fix https://github.com/InternLM/InternLM/issues/50
* delete torch_dtype of README's example code (#100 )
* set the value of repetition_penalty to 1.0 to avoid random outputs (#99 )
* Update web_demo.py (#97 )
Remove meaningless log.
* [Fix]Fix wrong string cutoff in the script for sft text tokenizing (#106 )
* docs(install.md): update dependency package transformers version to >= 4.28.0 (#124 )
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
* docs(LICENSE): add license (#125 )
* add license of colossalai and flash-attn
* fix lint
* modify the name
* fix AutoModel map in convert2hf.py (#116 )
* variables are not printed as expected (#114)
* feat(solver): fix code to adapt to torch2.0 and provide docker images (#128 )
* feat(solver): fix code to adapt to torch2.0
* docs(install.md): publish internlm environment image
* docs(install.md): update dependency packages version
* docs(install.md): update default image
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
* add demo test (#132 )
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
* fix web_demo cache accelerate (#133 )
* Doc: add twitter link (#141 )
* Feat add checkpoint fraction (#151 )
* feat(config): add checkpoint_fraction into config
* feat: remove checkpoint_fraction from configs/7B_sft.py
---------
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
* [Doc] update deployment guide to keep consistency with lmdeploy (#136 )
* update deployment guide
* fix error
* use llm partition (#159 )
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
* test(ci_scripts): clean test data after test, remove unnecessary global variables, and other optimizations (#165 )
* test: optimization of ci scripts (variables, test data cleaning, etc.).
* chore(workflows): disable ci job on push.
* fix: update partition
* test(ci_scripts): add install requirements automatically, trigger event about lint check and other optimizations (#174)
* add pull_request in lint check
* use default variables in ci_scripts
* fix format
* check and install requirements automatically
* fix format
---------
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
* feat(profiling): add a simple memory profiler (#89 )
* feat(profiling): add simple memory profiler
* feat(profiling): add profiling argument
* feat(CI_workflow): Add PR & Issue auto remove workflow (#184 )
* feat(ci_workflow): Add PR & Issue auto remove workflow
Add a workflow for stale PR & Issue auto remove
- pr & issue well be labeled as stale for inactive in 7 days
- staled PR & Issue well be remove in 7 days
- run this workflow every day on 1:30 a.m.
* Update stale.yml
* feat(bot): Create .owners.yml for Auto Assign (#176 )
* Create .owners.yml: for issue/pr assign automatically
* Update .owners.yml
* Update .owners.yml
fix typo
* [feat]: add pal reasoning script (#163 )
* [Feat] Add PAL inference script
* Update README.md
* Update tools/README.md
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update tools/pal_inference.py
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update pal script
* Update README.md
* restore .pre-commit-config.yaml
* Update tools/README.md
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update tools/README.md
Co-authored-by: BigDong <yudongwang1226@gmail.com>
* Update pal inference script
* Update README.md
* Update internlm/utils/interface.py
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
* Update pal script
* Update pal script
* Update script
* Add docstring
* Update format
* Update script
* Update script
* Update script
---------
Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
* test(ci_scripts): add timeout settings and clean work after the slurm job (#185 )
* restore pr test on develop branch
* add mask
* add post action to cancel slurm job
* remove readonly attribute on job log
* add debug info
* debug job log
* try stdin
* use stdin
* set default value avoid error
* try setting readonly on job log
* performance echo
* remove debug info
* use squeue to check slurm job status
* restore the lost param
* limit retry times
* use exclusive to avoid port already in use
* optimize loop body
* remove partition
* add {} for variables
* set env variable for slurm partition
---------
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
* refactor(tools): move interface.py and import it to web_demo (#195 )
* move interface.py and import it to web_demo
* typo
* fix(ci): fix lint error
* fix(ci): fix lint error
---------
Co-authored-by: Sun Peng <sunpengsdu@gmail.com>
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: Changjiang GOU <gouchangjiang@gmail.com>
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
Co-authored-by: vansin <msnode@163.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang <zhangshuolove@live.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
Co-authored-by: Guoteng <32697156+SolenoidWGT@users.noreply.github.com>
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: zachtzy <141206206+zachtzy@users.noreply.github.com>
Co-authored-by: cx <759046501@qq.com>
Co-authored-by: Jaylin Lee <61487970+APX103@users.noreply.github.com>
Co-authored-by: del-zhenwu <dele.zhenwu@gmail.com>
Co-authored-by: Shaoyuan Xie <66255889+Daniel-xsy@users.noreply.github.com>
Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
Co-authored-by: huangting4201 <huangting3@sensetime.com>
* fix(pipeline_scheduler.py): fix tensor shape err and comm block (#210 )
* feat(train.py): support torch profiler (#201 )
* feat(train.py): support torch profiling
* feat(train.py): optimize initialize_llm_profile
* feat(train.py): profiling with tp0 and dp0
* move sequence parallel context manager to evaluation func
* fix lint
* move the process for type_ids to load_new_batch
* fix lint
---------
Co-authored-by: yingtongxiong <974106207@qq.com>
* feat(ckpt): add auto ckpt load and signal quit (#216)
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
* feat(memory_profiler): improve memory profiler (#217 )
* Feat/overlap_bcast_forward (#218 )
* feat/support bcast forward overlap
* feat/optimize the bcast call
* feat/optimize the bcast call
* feat/optimize the bcast call
* fix lint
* fix lint
* fix lint
* fix lint
* add torch.cuda.synchronize in save_checkpoint
---------
Co-authored-by: sunpeng <sunpengsdu@gmail.com>
* fix(*): move sequence_parallel to parallel config (#224 )
* move sequence_parallel to parallel config
* set the sequence_parallel default value to False
* fix lint
* fix lint
* fix lint
* Feat/example training internlm (#212 )
* feat(train/training_internlm.py): move common init funcs to internlm/train
* feat(train/training_internlm.py): update some public funcs
* feat(train/training_internlm.py): update some public funcs
* feat(evaluation.py): adapt evaluate to streaming dataset
* feat(train/training_internlm.py): minor update based on comments
* fix(training_internlm.py): set train dataloader persistent_workers true only when num_worker>0
* fix(training_internlm.py): fix demo error
* feat(data/utils.py): add new dataset type code for streaming dataset (#225 )
* test(model): support fp32 with flash_attn (#223 )
* support tf32 with flash
* move autocast to attention
* fix lint
* fix lint
* fix lint
* fix lint
* fix some bugs in model
* modify the convert dtype
* fix(pipeline): modify the sequence_parallel in pipeline (#227 )
* move sequence_parallel to parallel config
* set the sequence_parallel default value to False
* fix lint
* fix lint
* fix lint
* modify the sequence_parallel in pp
* feat(init): add skip args check flag and add zero overlap flag (#222 )
* feat(init): add skip args check flag
* fix(optim): add param overlap enable flag
* fix(ci): fix train error (#228 )
Co-authored-by: huangting4201 <huangting3@sensetime.com>
* fix(writer): fix tensorboard resume bug (#229 )
* fix(train.py): fix overflow grad norm error (#230 )
* feat(ckpt): add train config into ckpt (#231 )
---------
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: Sun Peng <sunpengsdu@gmail.com>
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: Changjiang GOU <gouchangjiang@gmail.com>
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
Co-authored-by: vansin <msnode@163.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang <zhangshuolove@live.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: huangting.p <huangting@sensetime.com>
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
Co-authored-by: yingtongxiong <974106207@qq.com>
Co-authored-by: cx <759046501@qq.com>
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
Co-authored-by: huangting4201 <huangting3@sensetime.com>
Co-authored-by: Guoteng <32697156+SolenoidWGT@users.noreply.github.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: zachtzy <141206206+zachtzy@users.noreply.github.com>
Co-authored-by: Jaylin Lee <61487970+APX103@users.noreply.github.com>
Co-authored-by: del-zhenwu <dele.zhenwu@gmail.com>
Co-authored-by: Shaoyuan Xie <66255889+Daniel-xsy@users.noreply.github.com>
Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
2023-08-24 22:03:04 +08:00
cx
f1a7949185
feat(profiling): add a simple memory profiler (#89)
* feat(profiling): add simple memory profiler
* feat(profiling): add profiling argument
2023-08-08 13:10:01 +08:00
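A simple memory profiler for CUDA training usually just samples PyTorch's allocator counters at points of interest; a sketch of that approach (the function name is illustrative):

```python
import torch


def log_cuda_memory(tag: str) -> None:
    """Print current and peak allocator usage for the current device."""
    if not torch.cuda.is_available():
        return
    allocated = torch.cuda.memory_allocated() / 2**30
    peak = torch.cuda.max_memory_allocated() / 2**30
    print(f"[mem] {tag}: allocated={allocated:.2f} GiB, peak={peak:.2f} GiB")
```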
Guoteng
6b6295aea3
Feat add checkpoint fraction (#151)
* feat(config): add checkpoint_fraction into config
* feat: remove checkpoint_fraction from configs/7B_sft.py
---------
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-07-31 13:57:01 +08:00
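Reading checkpoint_fraction as the share of transformer layers that get activation checkpointing (an assumption, not confirmed by this log), the selection reduces to:

```python
def layers_to_checkpoint(num_layers: int, checkpoint_fraction: float) -> int:
    """Number of leading layers to recompute activations for, under the
    assumption that `checkpoint_fraction` means a fraction of layers."""
    clamped = min(max(checkpoint_fraction, 0.0), 1.0)
    return int(num_layers * clamped)
```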
Sun Peng
912fc8f8aa
doc: update the training examples (#27)
* doc: update the training examples
* update README
* change all "++++" log
* Update pylint
* solve lint err
2023-07-07 15:54:09 +08:00
Sun Peng
fa7337b37b
initial commit
2023-07-06 12:55:23 +08:00