Commit Graph

95 Commits (b10e5132fe2bf54883b250c91c806f2a76a32588)

Author SHA1 Message Date
Wenwen Qu b10e5132fe fix bugs with _compute_norm_with_moe_group 2023-09-08 18:09:13 +08:00
Wenwen Qu 6cf0fec314 replace flash-attn experts with feedforward experts 2023-09-08 18:04:57 +08:00
Wenwen Qu cd6b28b073 use dummy mode to generate random numbers in model construction 2023-09-08 17:56:42 +08:00
Wenwen Qu 7ca5da27e8 fix group_norms computing in hybrid_zero_optim 2023-08-31 18:46:13 +08:00
Wenwen Qu 2ad5f512b5 remove moe_loss_coeff parameter passing 2023-08-31 18:44:58 +08:00
Wenwen Qu e498f9262e fix bugs 2023-08-30 16:22:35 +08:00
Wenwen Qu b021995199 fix bugs 2023-08-30 16:14:33 +08:00
Wenwen Qu f3da80a7ca reformat code 2023-08-28 14:46:03 +08:00
Wenwen Qu 629e6a5ad1 add comments for moe 2023-08-25 19:03:31 +08:00
Wenwen Qu aa2612edc4 Merge branch 'develop' into feature_add_moe 2023-08-25 13:35:56 +08:00
Guoteng 42851be36b feat(ckpt): add train config into ckpt (#231) 2023-08-24 19:57:32 +08:00
huangting4201 29dd401071 fix(train.py): fix overflow grad norm error (#230) 2023-08-24 17:46:27 +08:00
Guoteng 2acb278e1f fix(writer): fix tensorboard resume bug (#229) 2023-08-24 17:38:39 +08:00
Wenwen Qu 0e6b1f856c add support for moe checkpoint 2023-08-24 17:01:14 +08:00
Guoteng 7c820cfa40
feat(init): add skip args check flag and add zero overlap flag (#222)
* feat(init): add skip args check flag

* fix(optim): add param overlap enable flag
2023-08-24 16:44:18 +08:00
Wenwen Qu 409f139ba5 merge 2023-08-24 16:38:36 +08:00
ytxiong 9cd1e0314e
fix(pipeline): modify the sequence_parallel in pipeline (#227)
* move sequence_parallel to parallel config

* set the sequence_parallel default value to False

* fix lint

* fix lint

* fix lint

* modify the sequence_parallel in pp
2023-08-24 14:45:40 +08:00
ytxiong eee93b5a68
test(model): support fp32 with flash_attn (#223)
* support tf32 with flash

* move autocast to attention

* fix lint

* fix lint

* fix lint

* fix lint

* fix some bugs in model

* modify the convert dtype
2023-08-24 13:54:44 +08:00
huangting4201 fd28bcab58 feat(data/utils.py): add new dataset type code for streaming dataset (#225) 2023-08-24 13:46:18 +08:00
huangting4201 94b2aa28fc
Feat/example training internlm (#212)
* feat(train/training_internlm.py): move common init funcs to internlm/train

* feat(train/training_internlm.py): update some public funcs

* feat(train/training_internlm.py): update some public funcs

* feat(evaluation.py): adapt evaluate to streaming dataset

* feat(train/training_internlm.py): minor update based on comments

* fix(training_internlm.py): set train dataloader persistent_workers true only when num_worker>0

* fix(training_internlm.py): fix demo error
2023-08-24 10:00:15 +08:00
ytxiong a017cab4b3
fix(*): move sequence_parallel to parallel config (#224)
* move sequence_parallel to parallel config

* set the sequence_parallel default value to False

* fix lint

* fix lint

* fix lint
2023-08-24 09:49:04 +08:00
Sun Peng 32664328e7
Feat/overlap_bcast_forward (#218)
* feat/support bcast forward overlap

* feat/optimize the bcast call

* feat/optimize the bcast call

* feat/optimize the bcast call

* fix lint

* fix lint

* fix lint

* fix lint

* add torch.cuda.synchronize in save_checkpoint

---------

Co-authored-by: sunpeng <sunpengsdu@gmail.com>
2023-08-23 16:59:59 +08:00
cx a48210f1f3 feat(memory_profiler): improve memory profiler (#217) 2023-08-23 14:18:33 +08:00
Guoteng 29779c75f0
feat(ckpt): add auto ckpt load and signal quit (#216)
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-23 14:17:45 +08:00
Wenwen Qu a1f99b64bc Merge branch 'feature_add_moe' of https://github.com/blankde/InternLM into feature_add_moe 2023-08-23 13:52:29 +08:00
zhanglei 72e3b1afd5 change the scale position for latent moe_loss 2023-08-23 13:25:20 +08:00
zhanglei 3a3ca71459 fix moe loss logger for the interleaved pp 2023-08-23 13:03:21 +08:00
zhanglei d1d21546d9 refactor code 2023-08-23 11:03:08 +08:00
zhanglei 3f32ee31bb fix the bug of missing scaling for the latent moe loss 2023-08-23 10:53:36 +08:00
zhanglei 12b739e83b Merge branch 'feature_add_moe' of github.com:blankde/InternLM into feature_add_moe_pp_zl 2023-08-22 18:56:29 +08:00
Wenwen Qu 94b8b18a49 optimize code for moe norm computing 2023-08-22 14:30:13 +08:00
Wenwen Qu 0ab3de8994 fix bugs in moe norm computation 2023-08-22 14:00:07 +08:00
zhanglei 8407c203a3 refactor code 2023-08-22 10:53:21 +08:00
zhanglei ac243e5b33 refactor code 2023-08-22 10:42:39 +08:00
zhanglei a8dd77ce76 fix bug on logger 2023-08-22 10:35:17 +08:00
huangting4201 53648dc0e9
feat(train.py): support torch profiler (#201)
* feat(train.py): support torch profiling

* feat(train.py): optimize initialize_llm_profile

* feat(train.py): profiling with tp0 and dp0

* move sequence parallel context manager to evaluation func

* fix lint

* move the process for type_ids to load_new_batch

* fix lint

---------

Co-authored-by: yingtongxiong <974106207@qq.com>
2023-08-21 15:23:38 +08:00
huangting4201 4832671abe fix(pipeline_scheduler.py): fix tensor shape err and comm block (#210) 2023-08-21 12:09:27 +08:00
zhanglei 05a3b2a3be Merge branch 'feature_add_moe' of github.com:blankde/InternLM into feature_add_moe_pp_zl 2023-08-21 10:00:43 +08:00
zhanglei db685e8a31 fix the pp moe bugs 2023-08-21 09:59:58 +08:00
Wenwen Qu 08532dc20b fix bugs with merge 2023-08-18 18:03:44 +08:00
zhanglei 7b1709a7ff Merge branch 'feature_add_moe' of github.com:blankde/InternLM into feature_add_moe_pp_zl
Conflicts:
	train.py
2023-08-17 17:00:04 +08:00
zhanglei 2983076d89 add logger for moe_loss 2023-08-17 16:52:11 +08:00
Wenwen Qu c76182b2d6 Merge branch 'develop' into feature_add_moe 2023-08-17 16:37:06 +08:00
Wenwen Qu 754f1d961a fix moe bugs in zero optimizer 2023-08-17 16:11:34 +08:00
huangting4201 f5f5446560
Merge main to develop (#203)
* fix/fix_submodule_err (#61)

* fix/fix_submodule_err

---------

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>

* fix issue templates (#65)

* fix(tokenizer): refactor tokenizer and update usage in readme (#51)

* update tokenizer example

* fix(readme, requirements): fix typo at Chinese readme and select a lower version of transformers (#73)

* fix a typo in readme

* in order to find InternLMTokenizer, select a lower version of Transformers

---------

Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>

* [Doc] Add wechat and discord link in readme (#78)

* Doc:add wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* [Docs]: add Japanese README (#43)

* Add Japanese README

* Update README-ja-JP.md

replace message

* Update README-ja-JP.md

* add repetition_penalty in GenerationConfig in web_demo.py (#48)

Co-authored-by: YWMditto <862779238@qq.com>

* use fp16 in instruction (#80)

* [Enhancement] add more options for issue template (#77)

* [Enhancement] add more options for issue template

* update question icon

* fix link

* Use tempfile for convert2hf.py (#23)

Fix https://github.com/InternLM/InternLM/issues/50

* delete torch_dtype of README's example code (#100)

* set the value of repetition_penalty to 1.0 to avoid random outputs (#99)

* Update web_demo.py (#97)

Remove meaningless log.

* [Fix] Fix wrong string cutoff in the script for sft text tokenizing (#106)

* docs(install.md): update dependency package transformers version to >= 4.28.0 (#124)

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>

* docs(LICENSE): add license (#125)

* add license of colossalai and flash-attn

* fix lint

* modify the name

* fix AutoModel map in convert2hf.py (#116)

* variables are not printed as expected (#114)

* feat(solver): fix code to adapt to torch2.0 and provide docker images (#128)

* feat(solver): fix code to adapt to torch2.0

* docs(install.md): publish internlm environment image

* docs(install.md): update dependency packages version

* docs(install.md): update default image

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>

* add demo test (#132)

Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>

* fix web_demo cache accelerate (#133)

* Doc: add twitter link (#141)

* Feat add checkpoint fraction (#151)

* feat(config): add checkpoint_fraction into config

* feat: remove checkpoint_fraction from configs/7B_sft.py

---------

Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>

* [Doc] update deployment guide to keep consistency with lmdeploy (#136)

* update deployment guide

* fix error

* use llm partition (#159)

Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>

* test(ci_scripts): clean test data after test, remove unnecessary global variables, and other optimizations (#165)

* test: optimization of ci scripts (variables, test data cleaning, etc.).

* chore(workflows): disable ci job on push.

* fix: update partition

* test(ci_scripts): add install requirements automatically, trigger event about lint check and other optimizations (#174)

* add pull_request in lint check

* use default variables in ci_scripts

* fix format

* check and install requirements automatically

* fix format

---------

Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>

* feat(profiling): add a simple memory profiler (#89)

* feat(profiling): add simple memory profiler

* feat(profiling): add profiling argument

* feat(CI_workflow): Add PR & Issue auto remove workflow (#184)

* feat(ci_workflow): Add PR & Issue auto remove workflow

Add a workflow for stale PR & Issue auto remove
- PR & Issue will be labeled as stale after 7 days of inactivity
- stale PR & Issue will be removed after 7 days
- run this workflow every day at 1:30 a.m.

* Update stale.yml

* feat(bot): Create .owners.yml for Auto Assign (#176)

* Create .owners.yml: for issue/pr assign automatically

* Update .owners.yml

* Update .owners.yml

fix typo

* [feat]: add pal reasoning script (#163)

* [Feat] Add PAL inference script

* Update README.md

* Update tools/README.md

Co-authored-by: BigDong <yudongwang1226@gmail.com>

* Update tools/pal_inference.py

Co-authored-by: BigDong <yudongwang1226@gmail.com>

* Update pal script

* Update README.md

* restore .pre-commit-config.yaml

* Update tools/README.md

Co-authored-by: BigDong <yudongwang1226@gmail.com>

* Update tools/README.md

Co-authored-by: BigDong <yudongwang1226@gmail.com>

* Update pal inference script

* Update README.md

* Update internlm/utils/interface.py

Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>

* Update pal script

* Update pal script

* Update script

* Add docstring

* Update format

* Update script

* Update script

* Update script

---------

Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>

* test(ci_scripts): add timeout settings and clean work after the slurm job (#185)

* restore pr test on develop branch

* add mask

* add post action to cancel slurm job

* remove readonly attribute on job log

* add debug info

* debug job log

* try stdin

* use stdin

* set default value avoid error

* try setting readonly on job log

* performance echo

* remove debug info

* use squeue to check slurm job status

* restore the lost param

* limit retry times

* use exclusive to avoid port already in use

* optimize loop body

* remove partition

* add {} for variables

* set env variable for slurm partition

---------

Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>

* refactor(tools): move interface.py and import it to web_demo (#195)

* move interface.py and import it to web_demo

* typo

* fix(ci): fix lint error

* fix(ci): fix lint error

---------

Co-authored-by: Sun Peng <sunpengsdu@gmail.com>
Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: Changjiang GOU <gouchangjiang@gmail.com>
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
Co-authored-by: vansin <msnode@163.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang <zhangshuolove@live.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
Co-authored-by: Guoteng <32697156+SolenoidWGT@users.noreply.github.com>
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>
Co-authored-by: zachtzy <141206206+zachtzy@users.noreply.github.com>
Co-authored-by: cx <759046501@qq.com>
Co-authored-by: Jaylin Lee <61487970+APX103@users.noreply.github.com>
Co-authored-by: del-zhenwu <dele.zhenwu@gmail.com>
Co-authored-by: Shaoyuan Xie <66255889+Daniel-xsy@users.noreply.github.com>
Co-authored-by: BigDong <yudongwang1226@gmail.com>
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-08-16 15:57:26 +08:00
huangting4201 f3664bfbab fix(train.py): fix scheduler metric hook skip error (#204) 2023-08-16 15:47:05 +08:00
zhanglei 8cdd1abb35 support interleaved pp 2023-08-16 12:02:59 +08:00
huangting4201 5f2381af62
fix/ci train error (#200)
* fix(ci): fix ci train error

* fix(ci): fix ci train error

* fix(ci): fix ci train error
2023-08-16 11:11:27 +08:00
huangting4201 db13bc46bc fix(ci): fix ci train error (#199) 2023-08-15 20:09:54 +08:00
Sun Peng ef851d16c6
Feat/optimizer (#194)
* feat(optimizer.py): reduce memory footprint and avoid _check_overflow call

* feat(optimizer.py): reduce memory footprint and avoid _check_overflow call

* feat(optimizer.py): overlap compute norm with allreduce

* update var and function name

* update function compute norm (#197)

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>

* feat(optimizer/hybrid_zero_optim.py): overlap gradients last bucket allreduce and compute norm (#196)

* support gradients allreduce and compute norm overlap

* fix para set error

* remove timer cal_norm for testing

* feat(optimizer/hybrid_zero_optim.py): support group global norm

* format(lint): fix lint error

* feat(optimizer/store.py): update code based on comment

---------

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: huangting4201 <1538303371@qq.com>
2023-08-15 18:55:10 +08:00