Commit Graph

31 Commits (03cc7f9b80bc94c4b3234da8d32674189c66aa5f)

Author SHA1 Message Date
huangting4201 03cc7f9b80 feat(model/overlap_handler.py): fix lint error 2023-10-23 15:28:34 +08:00
huangting4201 0d693cf3a1 feat(model/overlap_handler.py): fix lint error 2023-10-23 15:22:03 +08:00
yingtongxiong f6a5086fe4 support bias 2023-10-23 14:51:27 +08:00
huangting4201 e7f9f1d208 feat(model/overlap_handler.py): optimize reduce scatter mem pool 2023-10-23 13:31:23 +08:00
huangting4201 b20f47a1fe feat(model/overlap_handler.py): move handler to gpc 2023-10-23 12:02:32 +08:00
huangting4201 85ad917ae4 feat(model/overlap_handler.py): refactor overlap hook handle 2023-10-20 21:50:32 +08:00
yingtongxiong 1804d01bb3 merge reduce-scatter 2023-10-20 18:11:00 +08:00
yingtongxiong dcd89ed304 refactor linear 2023-10-20 17:50:56 +08:00
huangting4201 eac382ad0a feat(optimizer/hybrid_zero_optim.py): fix lint error 2023-10-20 16:22:29 +08:00
huangting4201 815a584930 feat(model/linear.py): remove useless code 2023-10-20 11:27:59 +08:00
yingtongxiong ed7232777a support reduce scatter memory pool 2023-10-20 10:35:45 +08:00
yingtongxiong 4742271154 add memory pool 2023-10-19 13:21:33 +08:00
chenxun.p 6682f5d92a fix reduce scatter async bug 2023-10-17 15:10:07 +08:00
chenxun.p 229cc5c68c impl reduce scatter async 2023-10-17 11:15:54 +08:00
huangting4201 d0b1346993 feat(model/linear.py): support block allgather overlap 2023-10-12 19:42:08 +08:00
yingtongxiong 5fd5a8a32b support fine-grained overlap 2023-10-11 17:36:41 +08:00
yingtongxiong 792b066f15 communication overlap 2023-10-11 10:57:12 +08:00
yingtongxiong 0fac845c36 overlap grad_input computation and grad_weight reduce_scatter 2023-10-10 17:06:13 +08:00
yingtongxiong f191853bf4 fix lint 2023-10-09 20:39:57 +08:00
yingtongxiong 29df765f65 refactor code 2023-10-09 20:23:32 +08:00
yingtongxiong 21c1a7fa47 support evaluation with fstp 2023-10-09 18:01:06 +08:00
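
The run of commits above (async reduce-scatter, reduce-scatter memory pool, overlapping grad_input computation with the grad_weight reduce-scatter) all serve one pattern: launch the gradient communication asynchronously, do independent computation while it is in flight, and only block when the reduced result is needed. A stdlib-only sketch of that pattern, with a background thread standing in for an async NCCL reduce-scatter (all names here are illustrative, not the repository's actual API):

```python
import threading
import time

def async_reduce_scatter(shards):
    """Stand-in for an async reduce-scatter: returns a handle whose
    .wait() yields the element-wise sum of the per-rank shards."""
    result = {}

    def _run():
        time.sleep(0.01)  # pretend network latency
        result["out"] = [sum(col) for col in zip(*shards)]

    t = threading.Thread(target=_run)
    t.start()

    class Handle:
        def wait(self):
            t.join()
            return result["out"]

    return Handle()

def backward_step(grad_weight_shards, grad_output, weight):
    # 1. kick off the grad_weight reduce-scatter asynchronously
    handle = async_reduce_scatter(grad_weight_shards)
    # 2. compute grad_input while the communication is in flight
    grad_input = [g * w for g, w in zip(grad_output, weight)]
    # 3. block only when the reduced grad_weight is actually needed
    grad_weight = handle.wait()
    return grad_input, grad_weight

grad_input, grad_weight = backward_step(
    [[1.0, 2.0], [3.0, 4.0]],  # per-rank grad_weight shards
    [0.5, 0.5],                # toy grad_output
    [2.0, 4.0],                # toy weight
)
print(grad_input)   # [1.0, 2.0]
print(grad_weight)  # [4.0, 6.0]
```

The memory-pool commits address a side effect of this scheme: the communication buffers must stay alive until `.wait()`, so they are pooled and reused rather than reallocated every step.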
Wenwen Qu 136d55ec30
feat(moe): add moe module (#182)
* feat(XXX): add moe

* reformat code

* modified:   .pre-commit-config.yaml
	modified:   internlm/model/moe.py
	modified:   internlm/model/modeling_internlm.py

* modified:   internlm/model/modeling_internlm.py

* modified:   internlm/core/context/process_group_initializer.py
	modified:   internlm/core/scheduler/no_pipeline_scheduler.py
	modified:   internlm/solver/optimizer/hybrid_zero_optim.py

* modified:   internlm/model/moe.py
	modified:   internlm/moe/sharded_moe.py
	modified:   internlm/utils/parallel.py

* rollback .pre-commit-config.yaml

* add residual and other moe features

* modify grad clipping due to moe

* add param arguments

* reformat code

* add expert data support and fix bugs

* Update .pre-commit-config.yaml

* modified:   internlm/model/modeling_internlm.py

* add no-interleaved & no-overlapped moe pp support

* support zero_overlap_communication

* avoid moe parameter partition in zero optimizer

* fix the moe_loss_coeff bug

* support interleaved pp

* fix moe bugs in zero optimizer

* fix more moe bugs in zero optimizer

* fix moe bugs in zero optimizer

* add logger for moe_loss

* fix bugs with merge

* fix the pp moe bugs

* fix bug on logger

* update moe training cfg on real-dataset

* refactor code

* refactor code

* fix bugs with compute moe norm

* optimize code with moe norm computing

* fix the bug that missing scale the latent moe loss

* refactor code

* fix moe loss logger for the interleaved pp

* change the scale position for latent moe_loss

* Update 7B_sft.py

* add support for moe checkpoint

* add comments for moe

* reformat code

* fix bugs

* fix bugs

* Update .pre-commit-config.yaml

* remove moe_loss_coeff parameter passing

* fix group_norms computing in hybrid_zero_optim

* use dummy mode to generate random numbers in model construction

* replace flashatten experts by feedforward experts

* fix bugs with _compute_norm_with_moe_group

* merge upstream/develop into feature_add_moe

* merge upstream/develop into feature_add_moe

* change float16 to bfloat16

* fix interface for dense pipeline

* refactor split_moe_group code

* fix precision inconsistency

* refactor code

* Update 7B_sft.py

* refactor code

* refactor code

* refactor code

* refactor code

* refactor code for split group

* refactor code for log

* fix logger for moe

* refactor code for split param group

* fix the moe_loss for ci and val

* refactor

* fix bugs with split group

* fix bugs in save/load moe checkpoint

* add moe module to `__init__.py`

* add compatible code for old version

* update moe config file

* modify moe config file

* fix merge bugs

* update moe config file

* change condition for compatibility

---------

Co-authored-by: zhanglei <ryancheung98@163.com>
Co-authored-by: Ryan (张磊) <leizhang.real@gmail.com>
2023-09-27 15:54:53 +08:00
ytxiong 6a5915bf0d
feat(linear): optimize mlp by using jit (#321)
* fuse silu op

* refactor code

* fix lint

* fix lint
2023-09-19 14:57:43 +08:00
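
The "fuse silu op" commit refers to fusing the SiLU activation with the element-wise gate multiply in the MLP, so the JIT emits one kernel instead of materializing the activation first. Numerically the fused op is just `silu(w1_out) * w3_out`; a stdlib sketch of that math (the actual code JIT-compiles the tensor version):

```python
import math

def silu(x):
    # SiLU / swish: x * sigmoid(x)
    return x * (1.0 / (1.0 + math.exp(-x)))

def fused_gate(w1_out, w3_out):
    # the expression a jit fuses into one elementwise pass:
    # silu(w1_out) * w3_out, with no intermediate silu buffer
    return [silu(a) * b for a, b in zip(w1_out, w3_out)]

out = fused_gate([0.0, 1.0], [2.0, 3.0])
print(out[0])  # 0.0, since silu(0) = 0
```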
huangting4201 f5f5446560
Merge main to develop (#203)
* fix/fix_submodule_err (#61)

* fix/fix_submodule_err

---------

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>

* fix issue templates (#65)

* fix(tokenizer): refactor tokenizer and update usage in readme (#51)

* update tokenizer example

* fix(readme, requirements): fix typo at Chinese readme and select a lower version of transformers (#73)

* fix a typo in readme

* in order to find InternLMTokenizer, select a lower version of Transformers

---------

Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>

* [Doc] Add wechat and discord link in readme (#78)

* Doc:add wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* [Docs]: add Japanese README (#43)

* Add Japanese README

* Update README-ja-JP.md

replace message

* Update README-ja-JP.md

* add repetition_penalty in GenerationConfig in web_demo.py (#48)

Co-authored-by: YWMditto <862779238@qq.com>

* use fp16 in instruction (#80)

* [Enhancement] add more options for issue template (#77)

* [Enhancement] add more options for issue template

* update question icon

* fix link

* Use tempfile for convert2hf.py (#23)

Fix https://github.com/InternLM/InternLM/issues/50

* delete torch_dtype of README's example code (#100)

* set the value of repetition_penalty to 1.0 to avoid random outputs (#99)

* Update web_demo.py (#97)

Remove meaningless log.

* [Fix]Fix wrong string cutoff in the script for sft text tokenizing (#106)

* docs(install.md): update dependency package transformers version to >= 4.28.0 (#124)

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>

* docs(LICENSE): add license (#125)

* add license of colossalai and flash-attn

* fix lint

* modify the name

* fix AutoModel map in convert2hf.py (#116)

* variables are not printed as expected (#114)

* feat(solver): fix code to adapt to torch2.0 and provide docker images (#128)

* feat(solver): fix code to adapt to torch2.0

* docs(install.md): publish internlm environment image

* docs(install.md): update dependency packages version

* docs(install.md): update default image

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>

* add demo test (#132)

Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>

* fix web_demo cache accelerate (#133)

* Doc: add twitter link (#141)

* Feat add checkpoint fraction (#151)

* feat(config): add checkpoint_fraction into config

* feat: remove checkpoint_fraction from configs/7B_sft.py

---------

Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>

* [Doc] update deployment guide to keep consistency with lmdeploy (#136)

* update deployment guide

* fix error

* use llm partition (#159)

Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>

* test(ci_scripts): clean test data after test, remove unnecessary global variables, and other optimizations (#165)

* test: optimization of ci scripts (variables, test data cleaning, etc.).

* chore(workflows): disable ci job on push.

* fix: update partition

* test(ci_scripts): install requirements automatically, trigger lint check events, and other optimizations (#174)

* add pull_request in lint check

* use default variables in ci_scripts

* fix format

* check and install requirements automatically

* fix format

---------

Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>

* feat(profiling): add a simple memory profiler (#89)

* feat(profiling): add simple memory profiler

* feat(profiling): add profiling argument

* feat(CI_workflow): Add PR & Issue auto remove workflow (#184)

* feat(ci_workflow): Add PR & Issue auto remove workflow

Add a workflow to auto-remove stale PRs & Issues
- a PR or Issue inactive for 7 days is labeled as stale
- stale PRs & Issues are removed after 7 more days
- the workflow runs every day at 1:30 a.m.

* Update stale.yml

* feat(bot): Create .owners.yml for Auto Assign (#176)

* Create .owners.yml: for issue/pr assign automatically

* Update .owners.yml

* Update .owners.yml

fix typo

* [feat]: add pal reasoning script (#163)

* [Feat] Add PAL inference script

* Update README.md

* Update tools/README.md

Co-authored-by: BigDong <yudongwang1226@gmail.com>

* Update tools/pal_inference.py

Co-authored-by: BigDong <yudongwang1226@gmail.com>

* Update pal script

* Update README.md

* restore .pre-commit-config.yaml

* Update tools/README.md

Co-authored-by: BigDong <yudongwang1226@gmail.com>

* Update tools/README.md

Co-authored-by: BigDong <yudongwang1226@gmail.com>

* Update pal inference script

* Update README.md

* Update internlm/utils/interface.py

Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>

* Update pal script

* Update pal script

* Update script

* Add docstring

* Update format

* Update script

* Update script

* Update script

---------

Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>

* test(ci_scripts): add timeout settings and clean work after the slurm job (#185)

* restore pr test on develop branch

* add mask

* add post action to cancel slurm job

* remove readonly attribute on job log

* add debug info

* debug job log

* try stdin

* use stdin

* set default value avoid error

* try setting readonly on job log

* performance echo

* remove debug info

* use squeue to check slurm job status

* restore the lost param

* limit retry times

* use exclusive to avoid port already in use

* optimize loop body

* remove partition

* add {} for variables

* set env variable for slurm partition

---------

Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>

* refactor(tools): move interface.py and import it to web_demo (#195)

* move interface.py and import it to web_demo

* typo

* fix(ci): fix lint error

* fix(ci): fix lint error

---------

Co-authored-by: Sun Peng <sunpengsdu@gmail.com>
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: Changjiang GOU <gouchangjiang@gmail.com>
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
Co-authored-by: vansin <msnode@163.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang <zhangshuolove@live.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
Co-authored-by: Guoteng <32697156+SolenoidWGT@users.noreply.github.com>
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: zachtzy <141206206+zachtzy@users.noreply.github.com>
Co-authored-by: cx <759046501@qq.com>
Co-authored-by: Jaylin Lee <61487970+APX103@users.noreply.github.com>
Co-authored-by: del-zhenwu <dele.zhenwu@gmail.com>
Co-authored-by: Shaoyuan Xie <66255889+Daniel-xsy@users.noreply.github.com>
Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-08-16 15:57:26 +08:00
Sun Peng ef851d16c6
Feat/optimizer (#194)
* feat(optimizer.py): reduce memory footprint and avoid _check_overflow call

* feat(optimizer.py): reduce memory footprint and avoid _check_overflow call

* feat(optimizer.py): overlap compute norm with allreduce

* update var and function name

* update function compute norm (#197)

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>

* feat(optimizer/hybrid_zero_optim.py): overlap gradients last bucket allreduce and compute norm (#196)

* support gradients allreduce and compute norm overlap

* fix para set error

* remove timer cal_norm for testing

* feat(optimizer/hybrid_zero_optim.py): support group global norm

* format(lint): fix lint error

* feat(optimizer/store.py): update code based on comment

---------

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: huangting4201 <1538303371@qq.com>
2023-08-15 18:55:10 +08:00
huangting4201 ff0fa7659f
feat(monitor): support monitor and alert (#175)
* feat(monitor): support monitor and alert

* feat(monitor.py): fix demo error

* feat(monitor.py): move cmd monitor args to config file

* feat(hybrid_zero_optim.py): if overflow occurs send alert msg

* feat(monitor.py): remove alert msg filter

* feat(monitor.py): optimize class MonitorTracker

* feat(monitor.py): optimize code

* feat(monitor.py): optimize code

* feat(monitor.py): optimize code

* feat(monitor.py): optimize code

* feat(train.py): update print to log

* style(ci): fix lint error

* fix(utils/evaluation.py): remove useless code

* fix(model/modeling_internlm.py): fix lint error

---------

Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-08-08 11:18:15 +08:00
ytxiong c219065348
feat(*): support sequence_parallel (#180)
* support sequence_parallel for no pipeline

* sequence_parallel does not support no-flash-attn

* support sequence parallel for pipeline

* add memory profiler

* Update 13B.py

* add memory profiler

* fix evaluation bug

* remove some unnecessary code

* remove some unnecessary code

* Update parallel_context.py

* modify the config

* remove memory profiler

* modify the config

* support selective dropout
2023-08-07 16:42:52 +08:00
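
The sequence_parallel commit shards the sequence dimension across tensor-parallel ranks for the ops that are pointwise over tokens (LayerNorm, dropout), gathering the full sequence only where attention needs it. A shape-level sketch with plain lists (illustrative names; the real code operates on tensors and communication groups):

```python
def scatter_sequence(tokens, world_size):
    # each rank keeps a contiguous 1/world_size slice of the sequence
    n = len(tokens) // world_size
    return [tokens[i * n:(i + 1) * n] for i in range(world_size)]

def gather_sequence(shards):
    # all-gather before attention, which mixes information across tokens
    return [t for shard in shards for t in shard]

def token_pointwise(shard):
    # stand-in for LayerNorm/dropout: needs only the local tokens
    return [t * 2 for t in shard]

tokens = [1, 2, 3, 4]
shards = scatter_sequence(tokens, world_size=2)
local = [token_pointwise(s) for s in shards]  # rank-local, no communication
full = gather_sequence(local)                 # all-gather for attention
print(full)  # [2, 4, 6, 8]
```

The payoff is that activations for the pointwise ops are 1/world_size the size on each rank, which is why the same PR could add (and later remove) a memory profiler to measure it.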
ytxiong 853becfb6e
feat(*): support fp32 training (#155)
* support float32 training

* fix lint

* add adaptation in model/utils.py

* remove some unnecessary code

* fix lint

* feat(optim): add support for fp32 zero

* Revert "Merge pull request #2 from SolenoidWGT/fp32_zero"

This reverts commit 53fc50b0e5, reversing
changes made to 40f24d0a73.

revert commit

* merge develop

* Update utils.py

* support fp32 in zero optimizer

* modify the dtype

---------

Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-04 16:05:30 +08:00
ytxiong d67be17f96
refactor(*): refactor the code with no-apex (#170)
* support no-apex

* add default for use_apex

* fix lint

* modify the RMSNormTorch

* remove some comments

* remove use_apex parameter

* remove some unnecessary code

* optimize the code including import

* remove the import RMSNorm

* remove warnings
2023-08-03 11:24:12 +08:00
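
The two no-apex commits above and below replace apex's fused RMSNorm kernel with a plain fallback (the `RMSNormTorch` the bullets mention). The math is small; a stdlib sketch of what such a torch-free RMSNorm computes (the actual class works on tensors along the hidden dimension):

```python
import math

def rms_norm(x, weight, eps=1e-5):
    # RMSNorm: scale by the reciprocal root-mean-square;
    # unlike LayerNorm there is no mean subtraction and no bias
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

out = rms_norm([1.0, -1.0, 2.0], [1.0, 1.0, 1.0])
print(out)
```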
ytxiong 1c397f523f
feat(*): support no apex (#166)
* support no-apex

* add default for use_apex

* fix lint

* modify the RMSNormTorch

* remove some comments

* remove use_apex parameter

* remove some unnecessary code
2023-08-02 20:32:38 +08:00
Sun Peng fa7337b37b initial commit 2023-07-06 12:55:23 +08:00