Commit Graph

165 Commits (68d6abc64a8aa77e23ad8e087b418531f9a4909d)

Author SHA1 Message Date
ytxiong d67be17f96
refactor(*): refactor the code with no-apex (#170)
* support no-apex

* add default for use_apex

* fix lint

* modify the RMSNormTorch

* remove some comments

* remove use_apex parameter

* remove some unnecessary code

* optimize the code including import

* remove the import RMSNorm

* remove warnings
2023-08-03 11:24:12 +08:00
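Note: below is a minimal sketch, assuming the two "no-apex" commits above aim to use apex's fused RMSNorm when the package is installed and fall back to a plain-PyTorch implementation otherwise. The class name, flag, and defaults are illustrative assumptions, not the repository's actual code.

    import torch
    from torch import nn

    try:
        # Prefer apex's fused CUDA kernel when available.
        from apex.normalization import FusedRMSNorm as RMSNorm
        HAS_APEX = True
    except ImportError:
        HAS_APEX = False

        class RMSNorm(nn.Module):
            """Torch-only fallback (hypothetical stand-in for RMSNormTorch)."""

            def __init__(self, hidden_size: int, eps: float = 1e-5):
                super().__init__()
                self.weight = nn.Parameter(torch.ones(hidden_size))
                self.eps = eps

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Scale by the reciprocal root-mean-square over the last dimension.
                variance = x.pow(2).mean(-1, keepdim=True)
                return self.weight * x * torch.rsqrt(variance + self.eps)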
ytxiong 1c397f523f
feat(*): support no apex (#166)
* support no-apex

* add default for use_apex

* fix lint

* modify the RMSNormTorch

* remove some comments

* remove use_apex parameter

* remove some unnecessary code
2023-08-02 20:32:38 +08:00
huangting4201 66a23e326a
feat(utils/evaluation.py): support evaluation (#154)
* style(internlm): fix lint error

* feat(utils/logger.py): support uniscale logger

* fix(utils/logger.py): fix import circular error

* feat(train.py): support dashboard metric panel and fix ci train config

* fix(ci_scripts/train/slurm_train.sh): fix ci train error

* fix(ci_scripts/train/torchrun.sh): fix ci train error

* feat(utils/evaluation.py): support evaluation on validation dataset

* fix(utils/evaluation.py): fix demo error

* fix(ci_scripts/train/ci_7B_sft.py): fix ci train error

* feat(initialize/launch.py): set default value for valid_bsz and valid_every

* fix(ci_scripts/train): restore ci update

* docs(configs/7B_sft.py): update comment for config

* fix(config.json): delete config.json

* fix evaluation bug in scheduler when use_flash_attn=False

* feat(scheduler/no_pipeline_scheduler.py): support micro_bsz>1 in no pp

* modify the judgement in pp and no-pp scheduler

* modify the data_process_func in evaluation

* fix bugs when use_flash_attn=False

* rename symbol

* feat(configs/7B_sft.py): change para valid_bsz to valid_micro_num

* feat(scheduler/no_pipeline_scheduler.py): update para set _grad_accum_batch_size

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
Co-authored-by: yingtongxiong <974106207@qq.com>
2023-08-02 19:03:59 +08:00
kkscilife 7fbf85eac9
use llm partition (#159)
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
2023-08-01 17:49:01 +08:00
huangting4201 1f7304a8bb
feat(utils/logger.py): support uniscale logger (#152)
* style(internlm): fix lint error

* feat(utils/logger.py): support uniscale logger

* fix(utils/logger.py): fix import circular error

* feat(train.py): support dashboard metric panel and fix ci train config

* fix(ci_scripts/train/slurm_train.sh): fix ci train error

* fix(ci_scripts/train/torchrun.sh): fix ci train error

* fix(ci_scripts/train): restore ci update

* fix(config.json): delete alert webhook

* feat(train.py): optimize func init logger

* feat(config.json): delete config.json

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
2023-08-01 17:37:32 +08:00
ytxiong 307c4741d1
fix(initialize/launch.py): set default value for use_flash_attn (#158)
* add default for use_flash_attn

* fix lint
2023-08-01 16:03:06 +08:00
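A hedged illustration of what "set default value for use_flash_attn" typically amounts to: filling in the flag when an older config omits it. The helper name and the default value chosen here are assumptions for illustration only.

    def apply_model_defaults(model_cfg: dict) -> dict:
        """Hypothetical helper: ensure optional flags exist so older configs still launch."""
        model_cfg.setdefault("use_flash_attn", True)  # assumed default, not taken from the repo
        return model_cfg

    # Usage: a config dict that omits the flag gains the default.
    cfg = apply_model_defaults({"hidden_size": 4096})
    assert cfg["use_flash_attn"] is True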
lvhan028 fbe6ef1da5
[Doc] update deployment guide to keep consistency with lmdeploy (#136)
* update deployment guide

* fix error
2023-07-31 14:42:07 +08:00
Guoteng 6b6295aea3
Feat add checkpoint fraction (#151)
* feat(config): add checkpoint_fraction into config

* feat: remove checkpoint_fraction from configs/7B_sft.py

---------

Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-07-31 13:57:01 +08:00
ytxiong 5ee651c2f1
feat(*): support not-flash-attn for pp and no-pp (#145)
* support not flash attention for no-pp

* support pipeline

* modify the config

* refactor the code

* refactor the code

* remove some unnecessary code
2023-07-28 16:13:04 +08:00
huangting4201 8b1717a05d
style(solver/optimizer/utils.py): fix lint error (#147)
Co-authored-by: huangting.p <huangting@sensetime.com>
2023-07-28 10:48:06 +08:00
vansin 2fee4220a6
Doc: add twitter link (#141) 2023-07-27 15:24:50 +08:00
Sun Peng fcc3534509
[Dev] Pull Main (#139)
* fix/fix_submodule_err (#61)

* fix/fix_submodule_err

---------

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>

* fix issue templates (#65)

* fix(tokenizer): refactor tokenizer and update usage in readme (#51)

* update tokenizer example

* fix(readme, requirements): fix typo in Chinese readme and select a lower version of transformers (#73)

* fix a typo in readme

* in order to find InternLMTokenizer, select a lower version of Transformers

---------

Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>

* [Doc] Add wechat and discord link in readme (#78)

* Doc:add wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* [Docs]: add Japanese README (#43)

* Add Japanese README

* Update README-ja-JP.md

replace message

* Update README-ja-JP.md

* add repetition_penalty in GenerationConfig in web_demo.py (#48)

Co-authored-by: YWMditto <862779238@qq.com>

* use fp16 in instruction (#80)

* [Enhancement] add more options for issue template (#77)

* [Enhancement] add more options for issue template

* update question icon

* fix link

* Use tempfile for convert2hf.py (#23)

Fix https://github.com/InternLM/InternLM/issues/50

* delete torch_dtype of README's example code (#100)

* set the value of repetition_penalty to 1.0 to avoid random outputs (#99)

* Update web_demo.py (#97)

Remove meaningless log.

* [Fix]Fix wrong string cutoff in the script for sft text tokenizing (#106)

* docs(install.md): update dependency package transformers version to >= 4.28.0 (#124)

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>

* docs(LICENSE): add license (#125)

* add license of colossalai and flash-attn

* fix lint

* modify the name

* fix AutoModel map in convert2hf.py (#116)

* variables are not printed as expected (#114)

* feat(solver): fix code to adapt to torch2.0 and provide docker images (#128)

* feat(solver): fix code to adapt to torch2.0

* docs(install.md): publish internlm environment image

* docs(install.md): update dependency packages version

* docs(install.md): update default image

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>

* add demo test (#132)

Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>

* fix web_demo cache accelerate (#133)

* fix(hybrid_zero_optim.py): delete math import

* Update embedding.py

---------

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: Changjiang GOU <gouchangjiang@gmail.com>
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
Co-authored-by: vansin <msnode@163.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang <zhangshuolove@live.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: huangting4201 <1538303371@qq.com>
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
2023-07-27 10:20:21 +08:00
Sun Peng ad10b8e03f
fix(optimizer/util.py): change inf definition 2023-07-27 10:12:51 +08:00
huangting4201 754c5aa69a
feat(model/metrics.py): support calculating accuracy and perplexity metrics (#91)
* feat(model/metrics.py): support calculating accuracy and perplexity metrics

* fix(model/metrics.py): fix import error

* feat(train.py): minor update

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
2023-07-26 16:22:10 +08:00
hw fb84c9548f
fix web_demo cache accelerate (#133) 2023-07-26 03:04:56 +08:00
kkscilife 03851ea2fa
add demo test (#132)
Co-authored-by: qa-caif-cicd <qa-caif-cicd@pjlab.org.cn>
2023-07-25 19:51:50 +08:00
huangting4201 26205c1edf
feat(solver): fix code to adapt to torch2.0 and provide docker images (#128)
* feat(solver): fix code to adapt to torch2.0

* docs(install.md): publish internlm environment image

* docs(install.md): update dependency packages version

* docs(install.md): update default image

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
2023-07-25 19:34:52 +08:00
Zaida Zhou 084a841799
variables are not printed as expected (#114) 2023-07-25 18:06:37 +08:00
ytxiong fd398fae1a
refactor(rotaryEmbedding): refactor forward (#120)
* use fp16 in instruction (#80)

* delete torch_dtype of README's example code (#100)

* refactor the forward for rotary embedding

---------

Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
2023-07-25 15:25:48 +08:00
huangting4201 762ab297ee
feat(core/scheduler): support pipeline parallel (#98)
* feat(utils/writer.py): support tensorboard writer

* feat(utils/writer.py): add class comment

* feat(core): support pipeline parallel

* fix(core): fix demo running error

* feat(solver/optimizer): add pp zero optimizer

* fix(solver/optimizer): fix word spelling error

* feat(core/scheduler): add new dir scheduler in core/

* fix(core): fix ci lint error

* feat(solver/optimizer): merge pp and nopp optimizer

* doc(usage.md): update usage doc

* feat(core/scheduler): support post func

* feat(core/scheduler): add dtype para in pp sche and update func get_tensor_shape

* feat(core/scheduler): add _load_micro_batch in base scheduler

* feat(core/scheduler): support optimizer overlap communication in pp scheduler

* feat(core/scheduler): delete data process func code

* feat(core/trainer): schedule pre processing for all schedule

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
2023-07-24 20:52:09 +08:00
x54-729 c52a47a993
fix AutoModel map in convert2hf.py (#116) 2023-07-24 12:07:47 +08:00
ytxiong cde899f3e5
docs(LICENSE): add license (#125)
* add license of colossalai and flash-attn

* fix lint

* modify the name
2023-07-24 11:59:56 +08:00
huangting4201 acea4554ec
docs(install.md): update dependency package transformers version to >= 4.28.0 (#124)
Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
2023-07-24 11:33:26 +08:00
Sun Peng e0d6a3f84f
[Develop] Pull Main Branch (#121)
* fix/fix_submodule_err (#61)

* fix/fix_submodule_err

---------

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>

* fix issue templates (#65)

* fix(tokenizer): refactor tokenizer and update usage in readme (#51)

* update tokenizer example

* fix(readme, requirements): fix typo in Chinese readme and select a lower version of transformers (#73)

* fix a typo in readme

* in order to find InternLMTokenizer, select a lower version of Transformers

---------

Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>

* [Doc] Add wechat and discord link in readme (#78)

* Doc:add wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* [Docs]: add Japanese README (#43)

* Add Japanese README

* Update README-ja-JP.md

replace message

* Update README-ja-JP.md

* add repetition_penalty in GenerationConfig in web_demo.py (#48)

Co-authored-by: YWMditto <862779238@qq.com>

* use fp16 in instruction (#80)

* [Enhancement] add more options for issue template (#77)

* [Enhancement] add more options for issue template

* update question icon

* fix link

* Use tempfile for convert2hf.py (#23)

Fix https://github.com/InternLM/InternLM/issues/50

* delete torch_dtype of README's example code (#100)

* set the value of repetition_penalty to 1.0 to avoid random outputs (#99)

* Update web_demo.py (#97)

Remove meaningless log.

* [Fix]Fix wrong string cutoff in the script for sft text tokenizing (#106)

---------

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
Co-authored-by: Kai Chen <chenkaidev@gmail.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: Changjiang GOU <gouchangjiang@gmail.com>
Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
Co-authored-by: vansin <msnode@163.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang <zhangshuolove@live.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
2023-07-21 20:44:33 +08:00
huangting4201 0d3d27cdf4
feat(utils/writer.py): support tensorboard writer (#63)
* feat(utils/writer.py): support tensorboard writer

* feat(utils/writer.py): add class comment

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
2023-07-21 15:53:24 +08:00
Miao Zheng 1095263082
[Fix]Fix wrong string cutoff in the script for sft text tokenizing (#106) 2023-07-19 12:12:41 +08:00
Shuo Zhang efbf533570
Update web_demo.py (#97)
Remove meaningless log.
2023-07-17 23:20:35 +08:00
Yang Gao 1ebc3cca99
set the value of repetition_penalty to 1.0 to avoid random outputs (#99) 2023-07-17 23:19:48 +08:00
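An illustrative snippet (not the repository's web_demo.py) of the change described above: in Hugging Face transformers, repetition_penalty=1.0 disables the repetition penalty; the other values are placeholders.

    from transformers import GenerationConfig

    # repetition_penalty == 1.0 means "no penalty"; values > 1.0 discourage repeats.
    generation_config = GenerationConfig(
        max_new_tokens=512,      # placeholder
        temperature=0.8,         # placeholder
        top_p=0.8,               # placeholder
        repetition_penalty=1.0,  # the value set by the commit above
    )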
x54-729 e746754b6e
delete torch_dtype of README's example code (#100) 2023-07-17 23:19:19 +08:00
x54-729 0c1060435d
Use tempfile for convert2hf.py (#23)
Fix https://github.com/InternLM/InternLM/issues/50
2023-07-17 21:08:10 +08:00
liukuikun 59f4727675
[Enhancement] add more options for issue template (#77)
* [Enhancement] add more options for issue template

* update question icon

* fix link
2023-07-17 12:54:54 +08:00
WRH cb991b6865
use fp16 in instruction (#80) 2023-07-14 17:56:15 +08:00
YWMditto fda99947ad
add repetition_penalty in GenerationConfig in web_demo.py (#48)
Co-authored-by: YWMditto <862779238@qq.com>
2023-07-14 17:03:52 +08:00
Ikko Eltociear Ashimine be50c02949
[Docs]: add Japanese README (#43)
* Add Japanese README

* Update README-ja-JP.md

replace message

* Update README-ja-JP.md
2023-07-14 16:29:34 +08:00
vansin 28bc0ebebe
[Doc] Add wechat and discord link in readme (#78)
* Doc:add wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link

* Doc:update wechat and discord link
2023-07-14 16:19:10 +08:00
Changjiang GOU 73de6622a9
fix(readme, requirements): fix typo in Chinese readme and select a lower version of transformers (#73)
* fix a typo in readme

* in order to find InternLMTokenizer, select a lower version of Transformers

---------

Co-authored-by: gouhchangjiang <gouhchangjiang@gmail.com>
2023-07-13 18:44:33 +08:00
Yang Gao 555ed60a2c
fix(tokenizer): refactor tokenizer and update usage in readme (#51)
* update tokenizer example
2023-07-13 17:16:27 +08:00
Kai Chen 7f242f644b
fix issue templates (#65) 2023-07-13 00:12:38 +08:00
Sun Peng 6150e4daed
fix/fix_submodule_err (#61)
* fix/fix_submodule_err

---------

Co-authored-by: ChenQiaoling00 <qiaoling_chen@u.nus.edu>
2023-07-12 18:59:31 +08:00
blackBlackCat c7287e2584
feat(readme): add huggingface url (#52)
Co-authored-by: wangguoteng.p <wangguoteng@sensetime.com>
2023-07-11 22:04:46 +08:00
Sun Peng c18bec9361
fix[performance]: fix the performance evaluation mistakes (#40)
* fix(no_pp_scheduler): drop out and label if not used

* Update train_performance.md

* Update readme with new tested data

* update some typos

* doc(performance): fix some typos
2023-07-08 20:42:34 +08:00
Sun Peng 4a3d15650e
fix(no_pp_scheduler): drop model output data and label if not used (#39)
* fix(no_pp_scheduler): drop out and label if not used

* Update train_performance.md

* Update readme with new tested data

* update some typos
2023-07-08 18:55:31 +08:00
Kai Chen dfb2751f00
add commercial license application form (#38) 2023-07-08 10:37:11 +08:00
Wenwei Zhang 81b10e81d9
[Doc]: Add issue template (#34) 2023-07-08 00:37:24 +08:00
Wenwei Zhang c690bf3779
[Doc]: fix citation blocks (#32) 2023-07-08 00:17:41 +08:00
Xingcheng Zhang 2066b36693
add citation (#30) 2023-07-07 22:58:48 +08:00
Sun Peng 912fc8f8aa
doc: update the training examples (#27)
* doc: update the training examples

* update README

* change all "++++" log

* Update pylint

* solve lint err
2023-07-07 15:54:09 +08:00
yhcc 745d2b911a
Fix readme about conversion to transformers (#25)
* add links for 8k

* fix acknowledgement

* modified readme for convert_hf
2023-07-07 13:38:06 +08:00
yhcc ed04c7edb0
Add 8k transformers link (#14)
* add links for 8k

* fix acknowledgement
2023-07-06 22:00:50 +08:00
Xingcheng Zhang 09440d055c
Merge pull request #10 from RangeKing/fix-typos
Doc(README.md): Fix typos
2023-07-06 21:53:30 +08:00