From 620472f15fb6bf6eaac9e4436fd43608f21acf15 Mon Sep 17 00:00:00 2001
From: Sun Peng
Date: Fri, 1 Sep 2023 11:00:11 +0800
Subject: [PATCH] [Dev2Main] 20230901 (#261)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* feat(utils/writer.py): support tensorboard writer (#63)
* feat(utils/writer.py): support tensorboard writer
* feat(utils/writer.py): add class comment
---------
Co-authored-by: 黄婷
* [Develop] Pull Main Branch (#121)
* fix/fix_submodule_err (#61)
* fix/fix_submodule_err
---------
Co-authored-by: ChenQiaoling00
* fix issue templates (#65)
* fix(tokenizer): refactor tokenizer and update usage in readme (#51)
* update tokenizer example
* fix(readme, requirements): fix typo in Chinese readme and select a lower version of transformers (#73)
* fix a typo in readme
* in order to find InternLMTokenizer, select a lower version of Transformers
---------
Co-authored-by: gouhchangjiang
* [Doc] Add wechat and discord link in readme (#78)
* Doc: add wechat and discord link
* Doc: update wechat and discord link
* [Docs]: add Japanese README (#43)
* Add Japanese README
* Update README-ja-JP.md replace message
* Update README-ja-JP.md
* add repetition_penalty in GenerationConfig in web_demo.py (#48)
Co-authored-by: YWMditto <862779238@qq.com>
* use fp16 in instruction (#80)
* [Enhancement] add more options for issue template (#77)
* [Enhancement] add more options for issue template
* update question icon
* fix link
* Use tempfile for convert2hf.py (#23)
Fix https://github.com/InternLM/InternLM/issues/50
* delete torch_dtype of README's example code (#100)
* set the value of repetition_penalty to 1.0 to avoid random outputs (#99)
* Update web_demo.py (#97)
Remove meaningless log.
* [Fix] Fix wrong string cutoff in the script for sft text tokenizing (#106)
---------
Co-authored-by: ChenQiaoling00
Co-authored-by: Kai Chen
Co-authored-by: Yang Gao
Co-authored-by: Changjiang GOU
Co-authored-by: gouhchangjiang
Co-authored-by: vansin
Co-authored-by: Ikko Eltociear Ashimine
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
* feat(core/scheduler): support pipeline parallel (#98)
* feat(utils/writer.py): support tensorboard writer
* feat(utils/writer.py): add class comment
* feat(core): support pipeline parallel
* fix(core): fix demo running error
* feat(solver/optimizer): add pp zero optimizer
* fix(solver/optimizer): fix word spelling error
* feat(core/scheduler): add new dir scheduler in core/
* fix(core): fix ci lint error
* feat(solver/optimizer): merge pp and nopp optimizer
* doc(usage.md): update usage doc
* feat(core/scheduler): support post func
* feat(core/scheduler): add dtype para in pp sche and update func get_tensor_shape
* feat(core/scheduler): add _load_micro_batch in base scheduler
* feat(core/scheduler): support optimizer overlap communication in pp scheduler
* feat(core/scheduler): delete data process func code
* feat(core/trainer): schedule pre processing for all schedule
---------
Co-authored-by: 黄婷
Co-authored-by: huangting.p
* refactor(rotaryEmbedding): refactor forward (#120)
* use fp16 in instruction (#80)
* delete torch_dtype of README's example code (#100)
* refactor the forward for rotary embedding
---------
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
* feat(model/metrics.py): support calculating accuracy and perplexity m… (#91)
* feat(model/metrics.py): support calculating accuracy and perplexity metrics
* fix(model/metrics.py): fix import error
* feat(train.py): minor update
---------
Co-authored-by: 黄婷
Co-authored-by: huangting.p
* fix(optimizer/util.py): change inf definition
* [Dev] Pull Main (#139)
* docs(install.md): update dependency package transformers version to >= 4.28.0 (#124)
Co-authored-by: 黄婷
* docs(LICENSE): add license (#125)
* add license of colossalai and flash-attn
* fix lint
* modify the name
* fix AutoModel map in convert2hf.py (#116)
* variables are not printed as expected (#114)
* feat(solver): fix code to adapt to torch2.0 and provide docker images (#128)
* feat(solver): fix code to adapt to torch2.0
* docs(install.md): publish internlm environment image
* docs(install.md): update dependency packages version
* docs(install.md): update default image
---------
Co-authored-by: 黄婷
* add demo test (#132)
Co-authored-by: qa-caif-cicd
* fix web_demo cache accelerate (#133)
* fix(hybrid_zero_optim.py): delete math import
* Update embedding.py
---------
Co-authored-by: ChenQiaoling00
Co-authored-by: Kai Chen
Co-authored-by: Yang Gao
Co-authored-by: Changjiang GOU
Co-authored-by: gouhchangjiang
Co-authored-by: vansin
Co-authored-by: Ikko Eltociear Ashimine
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: huangting4201 <1538303371@qq.com>
Co-authored-by: 黄婷
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
* style(solver/optimizer/utils.py): fix lint error (#147)
Co-authored-by: huangting.p
* feat(*): support not-flash-attn for pp and no-pp (#145)
* support not flash attention for no-pp
* support pipeline
* modify the config
* refactor the code
* remove some unnecessary code
* fix(initialize/launch.py): set default value for use_flash_attn (#158)
* add default for use_flash_attn
* fix lint
* feat(utils/logger.py): support uniscale logger (#152)
* style(internlm): fix lint error
* feat(utils/logger.py): support uniscale logger
* fix(utils/logger.py): fix import circular error
* feat(train.py): support dashboard metric panel and fix ci train config
* fix(ci_scripts/train/slurm_train.sh): fix ci train error
* fix(ci_scripts/train/torchrun.sh): fix ci train error
* fix(ci_scripts/train): restore ci update
* fix(config.json): delete alert webhook
* feat(train.py): optimize func init logger
* feat(config.json): delete config.json
---------
Co-authored-by: 黄婷
Co-authored-by: huangting.p
* feat(utils/evaluation.py): support evaluate (#154)
* style(internlm): fix lint error
* feat(utils/logger.py): support uniscale logger
* fix(utils/logger.py): fix import circular error
* feat(train.py): support dashboard metric panel and fix ci train config
* fix(ci_scripts/train/slurm_train.sh): fix ci train error
* fix(ci_scripts/train/torchrun.sh): fix ci train error
* feat(utils/evaluation.py): support evaluate on validation dataset
* fix(utils/evaluation.py): fix demo error
* fix(ci_scripts/train/ci_7B_sft.py): fix ci train error
* feat(initialize/launch.py): set default value for valid_bsz and valid_every
* fix(ci_scripts/train): restore ci update
* docs(configs/7B_sft.py): update comment for config
* fix(config.json): delete config.json
* fix evaluation bug in scheduler when use_flash_attn=False
* feat(scheduler/no_pipeline_scheduler.py): support micro_bsz>1 in no pp
* modify the judgement in pp and no-pp scheduler
* modify the data_process_func in evaluation
* fix bugs when use_flash_attn=False
* rename symbol
* feat(configs/7B_sft.py): change para valid_bsz to valid_micro_num
* feat(scheduler/no_pipeline_scheduler.py): update para set _grad_accum_batch_size
---------
Co-authored-by: 黄婷
Co-authored-by: huangting.p
Co-authored-by: yingtongxiong <974106207@qq.com>
* feat(*): support no apex (#166)
* support no-apex
* add default for use_apex
* fix lint
* modify the RMSNormTorch
* remove some comments
* remove use_apex parameter
* remove some unnecessary code
* refactor(*): refactor the code with no-apex (#170)
* support no-apex
* add default for use_apex
* fix lint
* modify the RMSNormTorch
* remove some comments
* remove use_apex parameter
* remove some unnecessary code
* optimize the code including import
* remove the import RMSNorm
* remove warnings
* refactor(scheduler): rewrite pipeline scheduler (#138)
* refactor(scheduler): rewrite pipeline scheduler
* fix(*): fix pipeline scheduler bugs
* fix(*): fix merge bug
* feat(*): update codes with todo tag
* feat(*): add comments
* feat(internlm/core/scheduler): update recv_prev/next logic
* feat(utils/evaluation.py): update sche metric hook for valid
---------
Co-authored-by: huangting.p
* feat(*): support fp32 training (#155)
* support float32 training
* fix lint
* add adaptation in model/utils.py
* remove some unnecessary code
* fix lint
* feat(optim): add support for fp32 zero
* Revert "Merge pull request #2 from SolenoidWGT/fp32_zero"
This reverts commit 53fc50b0e52f12466e8dc8ec14c5e22b217537c8, reversing changes made to 40f24d0a73fff5c083e11c18d4a07ad16aaabab3.
revert commit
* merge develop
* Update utils.py
* support fp32 in zero optimizer
* modify the dtype
---------
Co-authored-by: wangguoteng.p
* feat(*): support sequence_parallel (#180)
* support sequence_parallel for no pipeline
* sequence_parallel does not support no-flash-attn
* support sequence parallel for pipeline
* add memory profiler
* Update 13B.py
* add memory profiler
* fix evaluation bug
* remove some unnecessary code
* Update parallel_context.py
* modify the config
* remove memory profiler
* modify the config
* support selective dropout
* feat(monitor): support monitor and alert (#175)
* feat(monitor): support monitor and alert
* feat(monitor.py): fix demo error
* feat(monitor.py): move cmd monitor args to config file
* feat(hybrid_zero_optim.py): if overflow occurs send alert msg
* feat(monitor.py): remove alert msg filter
* feat(monitor.py): optimize class MonitorTracker
* feat(monitor.py): optimize code
* feat(train.py): update print to log
* style(ci): fix lint error
* fix(utils/evaluation.py): remove useless code
* fix(model/modeling_internlm.py): fix lint error
---------
Co-authored-by: huangting4201
* feat(ckpt): add async upload and ckpt snapshot (#161)
* use fp16 in instruction (#80)
* delete torch_dtype of README's example code (#100)
* feat(ckpt): support async ckpt upload and ckpt snapshot
---------
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: wangguoteng.p
* feat(ckpt): add auto ckpt load and signal quit (#189)
Co-authored-by: wangguoteng.p
* Revert "feat(ckpt): add auto ckpt load and signal quit (#189)" (#192)
This reverts commit a45a91bb843cf0b10b8b014a6ef35e695871f91b.
* refactor(solver/optimizer): improve optimizer memory (#193)
* refactor(solver/optimizer): improve optimizer memory
* feat(data): remove useless dataset type ids map
* Feat/optimizer (#194)
* feat(optimizer.py): reduce memory footprint and avoid _check_overflow call
* feat(optimizer.py): overlap compute norm with allreduce
* update var and function name
* update function compute norm (#197)
Co-authored-by: ChenQiaoling00
* feat(optimizer/hybrid_zero_optim.py): overlap gradients last bucket allreduce and compute norm (#196)
* support gradients allreduce and compute norm overlap
* fix para set error
* remove timer cal_norm for testing
* feat(optimizer/hybrid_zero_optim.py): support group global norm
* format(lint): fix lint error
* feat(optimizer/store.py): update code based on comment
---------
Co-authored-by: ChenQiaoling00
Co-authored-by: huangting4201 <1538303371@qq.com>
* fix(ci): fix ci train error (#199)
* fix/ci train error (#200)
* fix(ci): fix ci train error
* fix(train.py): fix scheduler metric hook skip error (#204)
* Merge main to develop (#203)
* Doc: add twitter link (#141)
* Feat add checkpoint fraction (#151)
* feat(config): add checkpoint_fraction into config
* feat: remove checkpoint_fraction from configs/7B_sft.py
---------
Co-authored-by: wangguoteng.p
* [Doc] update deployment guide to keep consistency with lmdeploy (#136)
* update deployment guide
* fix error
* use llm partition (#159)
Co-authored-by: qa-caif-cicd
* test(ci_scripts): clean test data after test, remove unnecessary global variables, and other optimizations (#165)
* test: optimization of ci scripts (variables, test data cleaning, etc).
* chore(workflows): disable ci job on push.
* fix: update partition
* test(ci_scripts): add install requirements automatically, trigger event about lint check and other optimizations (#174)
* add pull_request in lint check
* use default variables in ci_scripts
* fix format
* check and install requirements automatically
* fix format
---------
Co-authored-by: qa-caif-cicd
* feat(profiling): add a simple memory profiler (#89)
* feat(profiling): add simple memory profiler
* feat(profiling): add profiling argument
* feat(CI_workflow): Add PR & Issue auto remove workflow (#184)
* feat(ci_workflow): Add PR & Issue auto remove workflow
Add a workflow for stale PR & Issue auto remove
- pr & issue will be labeled as stale for inactive in 7 days
- staled PR & Issue will be removed in 7 days
- run this workflow every day at 1:30 a.m.
* Update stale.yml
* feat(bot): Create .owners.yml for Auto Assign (#176)
* Create .owners.yml: for issue/pr assign automatically
* Update .owners.yml
* Update .owners.yml fix typo
* [feat]: add pal reasoning script (#163)
* [Feat] Add PAL inference script
* Update README.md
* Update tools/README.md
Co-authored-by: BigDong
* Update tools/pal_inference.py
Co-authored-by: BigDong
* Update pal script
* Update README.md
* restore .pre-commit-config.yaml
* Update tools/README.md
Co-authored-by: BigDong
* Update pal inference script
* Update README.md
* Update internlm/utils/interface.py
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
* Update pal script
* Update script
* Add docstring
* Update format
* Update script
---------
Co-authored-by: BigDong
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
* test(ci_scripts): add timeout settings and clean work after the slurm job (#185)
* restore pr test on develop branch
* add mask
* add post action to cancel slurm job
* remove readonly attribute on job log
* add debug info
* debug job log
* try stdin
* use stdin
* set default value avoid error
* try setting readonly on job log
* performance echo
* remove debug info
* use squeue to check slurm job status
* restore the lost param
* limit retry times
* use exclusive to avoid port already in use
* optimize loop body
* remove partition
* add {} for variables
* set env variable for slurm partition
---------
Co-authored-by: qa-caif-cicd
* refactor(tools): move interface.py and import it to web_demo (#195)
* move interface.py and import it to web_demo
* typo
* fix(ci): fix lint error
---------
Co-authored-by: Sun Peng
Co-authored-by: ChenQiaoling00
Co-authored-by: Kai Chen
Co-authored-by: Yang Gao
Co-authored-by: Changjiang GOU
Co-authored-by: gouhchangjiang
Co-authored-by: vansin
Co-authored-by: Ikko Eltociear Ashimine
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: 黄婷
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
Co-authored-by: Guoteng <32697156+SolenoidWGT@users.noreply.github.com>
Co-authored-by: wangguoteng.p
Co-authored-by: lvhan028
Co-authored-by: zachtzy <141206206+zachtzy@users.noreply.github.com>
Co-authored-by: cx <759046501@qq.com>
Co-authored-by: Jaylin Lee <61487970+APX103@users.noreply.github.com>
Co-authored-by: del-zhenwu
Co-authored-by: Shaoyuan Xie <66255889+Daniel-xsy@users.noreply.github.com>
Co-authored-by: BigDong
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
Co-authored-by: huangting4201
* fix(pipeline_scheduler.py): fix tensor shape err and comm block (#210)
* feat(train.py): support torch profiler (#201)
* feat(train.py): support torch profiling
* feat(train.py): optimize initialize_llm_profile
* feat(train.py): profiling with tp0 and dp0
* move sequence parallel context manager to evaluation func
* fix lint
* move the process for type_ids to load_new_batch
* fix lint
---------
Co-authored-by: yingtongxiong <974106207@qq.com>
* feat(ckpt): add auto ckpt load and signal quit (#216)
Co-authored-by: wangguoteng.p
* feat(memory_profiler): improve memory profiler (#217)
* Feat/overlap_bcast_forward (#218)
* feat/support bcast forward overlap
* feat/optimize the bcast call
* fix lint
* add torch.cuda.synchronize in save_checkpoint
---------
Co-authored-by: sunpeng
* fix(*): move sequence_parallel to parallel config (#224)
* move sequence_parallel to parallel config
* set the sequence_parallel default value to False
* fix lint
* Feat/example training internlm (#212)
* feat(train/training_internlm.py): move common init funcs to internlm/train
* feat(train/training_internlm.py): update some public funcs
* feat(evaluation.py): adapt evaluate to streaming dataset
* feat(train/training_internlm.py): minor update based on comments
* fix(training_internlm.py): set train dataloader persistent_workers true only when num_worker>0
* fix(training_internlm.py): fix demo error
* feat(data/utils.py): add new dataset type code for streaming dataset (#225)
* test(model): support fp32 with flash_attn (#223)
* support tf32 with flash
* move autocast to attention
* fix lint
* fix some bugs in model
* modify the convert dtype
* fix(pipeline): modify the sequence_parallel in pipeline (#227)
* move sequence_parallel to parallel config
* set the sequence_parallel default value to False
* fix lint
* modify the sequence_parallel in pp
* feat(init): add skip args check flag and add zero overlap flag (#222)
* feat(init): add skip args check flag
* fix(optim): add param overlap enable flag
* fix(ci): fix train error (#228)
Co-authored-by: huangting4201
* fix(writer): fix tensorboard resume bug (#229)
* fix(train.py): fix overflow grad norm error (#230)
* feat(ckpt): add train config into ckpt (#231)
* docs(doc/code-docs): support readthedocs (#245)
* feat(doc/code-docs): add code-docs for readthedocs
* feat(doc/code-docs): add .readthedocs.yaml configuration file
* feat(doc/code-docs): update .readthedocs.yaml configuration file
* feat(doc/code-docs): update code-docs
* [Daily Pull] Merge Main to Develop 20230901 (#260)
* Standard and experiment docker (#220)
* feat: standard docker image
* feat: standard dockerfile
* experiment and standard docker
* fix(core/trainer.py): fix streaming train state load error (#247)
* Fix requirement (#243)
* feat: standard docker image
* fix: a little problem
* fix(eval): StreamingDataset does not have an __len__ method. (#251)
* fix(metric): argument missing in getting loss metrics. (#256)
* feat(model): implement uniform_init for tensor. (#252)
* Implement uniform_init for tensor.
* Fix functional calling bugs: normal->uniform.
* Format editing: remove unused torch importing.
---------
Co-authored-by: li126com <43110891+li126com@users.noreply.github.com>
Co-authored-by: huangting4201 <1538303371@qq.com>
Co-authored-by: Shuo Zhang
Co-authored-by: Ryan (张磊)
Co-authored-by: Pryest <54388244+Pryest@users.noreply.github.com>
---------
Co-authored-by: huangting4201 <1538303371@qq.com>
Co-authored-by: 黄婷
Co-authored-by: ChenQiaoling00
Co-authored-by: Kai Chen
Co-authored-by: Yang Gao
Co-authored-by: Changjiang GOU
Co-authored-by: gouhchangjiang
Co-authored-by: vansin
Co-authored-by: Ikko Eltociear Ashimine
Co-authored-by: YWMditto <46778265+YWMditto@users.noreply.github.com>
Co-authored-by: YWMditto <862779238@qq.com>
Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: liukuikun <24622904+Harold-lkk@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: Shuo Zhang
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: huangting.p
Co-authored-by: ytxiong <45058324+yingtongxiong@users.noreply.github.com>
Co-authored-by: Zaida Zhou <58739961+zhouzaida@users.noreply.github.com>
Co-authored-by: kkscilife <126147887+kkscilife@users.noreply.github.com>
Co-authored-by: qa-caif-cicd
Co-authored-by: hw <45089338+MorningForest@users.noreply.github.com>
Co-authored-by: yingtongxiong <974106207@qq.com>
Co-authored-by: cx <759046501@qq.com>
Co-authored-by: wangguoteng.p
Co-authored-by: huangting4201
Co-authored-by: Guoteng <32697156+SolenoidWGT@users.noreply.github.com>
Co-authored-by: lvhan028
Co-authored-by: zachtzy <141206206+zachtzy@users.noreply.github.com>
Co-authored-by: Jaylin Lee <61487970+APX103@users.noreply.github.com>
Co-authored-by: del-zhenwu
Co-authored-by: Shaoyuan Xie <66255889+Daniel-xsy@users.noreply.github.com>
Co-authored-by: BigDong
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
Co-authored-by: li126com <43110891+li126com@users.noreply.github.com>
Co-authored-by: Ryan (张磊)
Co-authored-by: Pryest <54388244+Pryest@users.noreply.github.com>
---
 .readthedocs.yml                    | 28 ++++++++++++
 doc/code-docs/Makefile              | 20 +++++++++
 doc/code-docs/make.bat              | 35 +++++++++++++++
 doc/code-docs/requirements.txt      |  6 +++
 doc/code-docs/source/checkpoint.rst |  2 +
 doc/code-docs/source/conf.py        | 62 +++++++++++++++++++++++++
 doc/code-docs/source/index.rst      | 70 +++++++++++++++++++++++++++++
 doc/code-docs/source/initialize.rst | 35 +++++++++++++++
 doc/code-docs/source/install.md     | 70 +++++++++++++++++++++++++++++
 doc/code-docs/source/monitor.rst    | 10 +++++
 doc/code-docs/source/parallel.rst   | 23 ++++++++++
 doc/code-docs/source/profiler.rst   | 11 +++++
 doc/code-docs/source/training.rst   |  2 +
 13 files changed, 374 insertions(+)
 create mode 100644 .readthedocs.yml
 create mode 100644 doc/code-docs/Makefile
 create mode 100644 doc/code-docs/make.bat
 create mode 100644 doc/code-docs/requirements.txt
 create mode 100644 doc/code-docs/source/checkpoint.rst
 create mode 100644 doc/code-docs/source/conf.py
 create mode 100644 doc/code-docs/source/index.rst
 create mode 100644 doc/code-docs/source/initialize.rst
 create mode 100644 doc/code-docs/source/install.md
 create mode 100644 doc/code-docs/source/monitor.rst
 create mode 100644 doc/code-docs/source/parallel.rst
 create mode 100644 doc/code-docs/source/profiler.rst
 create mode 100644 doc/code-docs/source/training.rst

diff --git a/.readthedocs.yml b/.readthedocs.yml
new file mode 100644
index 0000000..650ee88
--- /dev/null
+++ b/.readthedocs.yml
@@ -0,0 +1,28 @@
+# .readthedocs.yaml
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+# Required
+version: 2
+
+# Set the OS, Python version and other tools you might need
+build:
+  os: ubuntu-22.04
+  tools:
+    python: "3.8"
+
+# Build documentation in the docs/ directory with Sphinx
+sphinx:
+  configuration: doc/code-docs/source/conf.py
+  fail_on_warning: false
+
+# Optionally build your docs in additional formats such as PDF
+formats:
+  - pdf
+
+# Optional but recommended, declare the Python requirements required
+# to build your documentation
+# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
+python:
+  install:
+    - requirements: doc/code-docs/requirements.txt
diff --git a/doc/code-docs/Makefile b/doc/code-docs/Makefile
new file mode 100644
index 0000000..d0c3cbf
--- /dev/null
+++ b/doc/code-docs/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS    ?=
+SPHINXBUILD   ?= sphinx-build
+SOURCEDIR     = source
+BUILDDIR      = build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/doc/code-docs/make.bat b/doc/code-docs/make.bat
new file mode 100644
index 0000000..747ffb7
--- /dev/null
+++ b/doc/code-docs/make.bat
@@ -0,0 +1,35 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=source
+set BUILDDIR=build
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+	echo.
+	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+	echo.installed, then set the SPHINXBUILD environment variable to point
+	echo.to the full path of the 'sphinx-build' executable. Alternatively you
+	echo.may add the Sphinx directory to PATH.
+	echo.
+	echo.If you don't have Sphinx installed, grab it from
+	echo.https://www.sphinx-doc.org/
+	exit /b 1
+)
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+
+:end
+popd
diff --git a/doc/code-docs/requirements.txt b/doc/code-docs/requirements.txt
new file mode 100644
index 0000000..9a4bb3d
--- /dev/null
+++ b/doc/code-docs/requirements.txt
@@ -0,0 +1,6 @@
+Sphinx
+sphinx-autobuild
+recommonmark
+sphinx_rtd_theme
+sphinx_markdown_tables
+autodoc_pydantic==1.9
\ No newline at end of file
diff --git a/doc/code-docs/source/checkpoint.rst b/doc/code-docs/source/checkpoint.rst
new file mode 100644
index 0000000..3ceed08
--- /dev/null
+++ b/doc/code-docs/source/checkpoint.rst
@@ -0,0 +1,2 @@
+Model Checkpointing
+===================
\ No newline at end of file
diff --git a/doc/code-docs/source/conf.py b/doc/code-docs/source/conf.py
new file mode 100644
index 0000000..5986f06
--- /dev/null
+++ b/doc/code-docs/source/conf.py
@@ -0,0 +1,62 @@
+# Configuration file for the Sphinx documentation builder.
+# +# For the full list of built-in configuration values, see the documentation: +# https://www.sphinx-doc.org/en/master/usage/configuration.html + +# -- Project information ----------------------------------------------------- +# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information + +import os +import sys + +project = "InternLM" +copyright = "2023, InternLM Team" +author = "InternLM Team" +release = "v0.2.0" + +# -- General configuration --------------------------------------------------- +# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration + +extensions = [ + "recommonmark", + "sphinx_rtd_theme", + "sphinx.ext.viewcode", + "sphinx.ext.autodoc", + "sphinxcontrib.autodoc_pydantic", + "sphinx.ext.autosectionlabel", + "sphinx.ext.napoleon", +] + +pygments_style = "sphinx" + +# autodoc_pyandtic config +autodoc_pydantic_model_show_field_summary = False +autodoc_pydantic_field_signature_prefix = " " +autodoc_pydantic_model_signature_prefix = "class" +autodoc_pydantic_model_show_json = False +autodoc_pydantic_model_show_config_summary = False +autodoc_pydantic_model_show_config_member = False +autodoc_pydantic_model_show_validator_summary = False +autodoc_pydantic_model_show_validator_members = False +autodoc_pydantic_model_summary_list_order = "bysource" +autodoc_pydantic_model_member_order = "bysource" +autodoc_pydantic_field_list_validators = False + +templates_path = ["_templates"] + +exclude_patterns = [] + +# -- Options for HTML output ------------------------------------------------- +# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output + +html_theme = "sphinx_rtd_theme" +html_static_path = ["_static"] + +sys.path.insert(0, os.path.abspath("../../../")) + +# Prepend module names to class descriptions +add_module_names = True + +autoclass_content = "init" + +autodoc_mock_imports = ["apex", "torch"] diff --git a/doc/code-docs/source/index.rst 
b/doc/code-docs/source/index.rst new file mode 100644 index 0000000..3011df6 --- /dev/null +++ b/doc/code-docs/source/index.rst @@ -0,0 +1,70 @@ +.. InternLM documentation master file, created by + sphinx-quickstart on Mon Aug 28 17:33:28 2023. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +InternLM +======== + +Environment Setup +------------------- + +.. toctree:: + :maxdepth: 2 + + install + +Model Setup +------------------- + +.. toctree:: + :maxdepth: 2 + + initialize + +Training API +------------------- + +.. toctree:: + :maxdepth: 2 + + training + +Parallel Training +------------------- + +.. toctree:: + :maxdepth: 2 + + parallel + +Model Checkpointing +------------------- + +.. toctree:: + :maxdepth: 2 + + checkpoint + +Profiler +------------------- + +.. toctree:: + :maxdepth: 2 + + profiler + +Monitor +------------------- + +.. toctree:: + :maxdepth: 2 + + monitor + +Indices and tables +================== + +* :ref:`genindex` +* :ref:`modindex` +* :ref:`search` diff --git a/doc/code-docs/source/initialize.rst b/doc/code-docs/source/initialize.rst new file mode 100644 index 0000000..a638c33 --- /dev/null +++ b/doc/code-docs/source/initialize.rst @@ -0,0 +1,35 @@ +Training Setup +============== + +.. _InternLM-args: + +Argument Parsing +---------------- +InternLM uses the `argparse `_ library to supply commandline +configuration to the InternLM runtime. Use ``internlm.initialize.get_default_parser()`` to get InternLM's default +parser with some builtin arguments, users can add custom parameters to this parser. + +.. code-block:: python + + # Get InternLM default parser + parser = internlm.initialize.get_default_parser() + # Add new argument + parser.add_argument("--user_arg", type=int, default=-1, help="arguments add by user.") + cmd_args = parser.parse_args() + +.. autofunction:: internlm.initialize.get_default_parser + + +.. 
_InternLM-init: + +Model Initialization +------------------------- + +Optimizer Initialization +------------------------- + +Dataloader Initialization +------------------------- + +Trainer Initialization +------------------------- diff --git a/doc/code-docs/source/install.md b/doc/code-docs/source/install.md new file mode 100644 index 0000000..26f57c0 --- /dev/null +++ b/doc/code-docs/source/install.md @@ -0,0 +1,70 @@ +## Installation + +### Environment Preparation +The required packages and corresponding version are shown as follows: +- Python == 3.10 +- GCC == 10.2.0 +- MPFR == 4.1.0 +- CUDA >= 11.7 +- Pytorch >= 1.13.1 +- Transformers >= 4.28.0 +- Flash-Attention >= v1.0.5 +- Apex == 23.05 +- GPU with Ampere or Hopper architecture (such as H100, A100) +- Linux OS + +After installing the above dependencies, some system environment variables need to be updated: +```bash +export CUDA_PATH={path_of_cuda_11.7} +export GCC_HOME={path_of_gcc_10.2.0} +export MPFR_HOME={path_of_mpfr_4.1.0} +export LD_LIBRARY_PATH=${GCC_HOME}/lib64:${MPFR_HOME}/lib:${CUDA_PATH}/lib64:$LD_LIBRARY_PATH +export PATH=${GCC_HOME}/bin:${CUDA_PATH}/bin:$PATH +export CC=${GCC_HOME}/bin/gcc +export CXX=${GCC_HOME}/bin/c++ +``` + +### Environment Installation +Clone the project `internlm` and its dependent submodules from the github repository, as follows: +```bash +git clone git@github.com:InternLM/InternLM.git --recurse-submodules +``` + +It is recommended to build a Python-3.10 virtual environment using conda and install the required dependencies based on the `requirements/` files: +```bash +conda create --name internlm-env python=3.10 -y +conda activate internlm-env +cd internlm +pip install -r requirements/torch.txt +pip install -r requirements/runtime.txt +``` + +Install flash-attention (version v1.0.5): +```bash +cd ./third_party/flash-attention +python setup.py install +cd ./csrc +cd fused_dense_lib && pip install -v . +cd ../xentropy && pip install -v . +cd ../rotary && pip install -v . 
+cd ../layer_norm && pip install -v . +cd ../../../../ +``` + +Install Apex (version 23.05): +```bash +cd ./third_party/apex +pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ +cd ../../ +``` + +### Environment Image +Users can obtain an image with the InternLM runtime environment installed from https://hub.docker.com/r/sunpengsdu/internlm. The commands for pulling the image and starting the container are as follows: + +```bash +# pull image +docker pull sunpengsdu/internlm:torch1.13-cuda11.7-flashatten1.0.5-centos +# start container +docker run --gpus all -d -it --shm-size=2gb --name myinternlm sunpengsdu/internlm:torch1.13-cuda11.7-flashatten1.0.5-centos +docker exec -it myinternlm bash +``` diff --git a/doc/code-docs/source/monitor.rst b/doc/code-docs/source/monitor.rst new file mode 100644 index 0000000..ff8cd1b --- /dev/null +++ b/doc/code-docs/source/monitor.rst @@ -0,0 +1,10 @@ +Monitor and Alert +================= + + +Monitoring +----------------- + + +Alerting +----------------- diff --git a/doc/code-docs/source/parallel.rst b/doc/code-docs/source/parallel.rst new file mode 100644 index 0000000..3515847 --- /dev/null +++ b/doc/code-docs/source/parallel.rst @@ -0,0 +1,23 @@ +Parallel Training +================= + +.. 整体说一下并行配置使用方式,接下来再分模块详细说明 + +Tensor Parallel +----------------- + + +Pipeline Parallel +----------------- + + +Sequence Parallel +----------------- + + +Data Parallel +----------------- + + +ZeRO1.5 +----------------- \ No newline at end of file diff --git a/doc/code-docs/source/profiler.rst b/doc/code-docs/source/profiler.rst new file mode 100644 index 0000000..c10f425 --- /dev/null +++ b/doc/code-docs/source/profiler.rst @@ -0,0 +1,11 @@ +Profiler +======== + +.. 
可介绍torch profiler, memory profiler的使用 + +Torch Profiler +----------------- + + +Memory Profiler +----------------- \ No newline at end of file diff --git a/doc/code-docs/source/training.rst b/doc/code-docs/source/training.rst new file mode 100644 index 0000000..e9ee124 --- /dev/null +++ b/doc/code-docs/source/training.rst @@ -0,0 +1,2 @@ +Training API +============ \ No newline at end of file
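As a side note on the `initialize.rst` file added above: the parser-extension pattern it documents can be sketched with the standard `argparse` library alone. The `get_default_parser()` stand-in below and its `--launcher`/`--seed` arguments are illustrative assumptions, not InternLM's actual built-in arguments.

```python
import argparse


def get_default_parser():
    # Hypothetical stand-in for internlm.initialize.get_default_parser():
    # returns a parser pre-populated with a few built-in arguments.
    parser = argparse.ArgumentParser(description="InternLM-style launcher (sketch)")
    parser.add_argument("--launcher", type=str, default="slurm", help="job launcher (assumed default)")
    parser.add_argument("--seed", type=int, default=1024, help="random seed (assumed default)")
    return parser


# Users extend the returned parser exactly as the doc's code-block shows.
parser = get_default_parser()
parser.add_argument("--user_arg", type=int, default=-1, help="argument added by the user.")
cmd_args = parser.parse_args(["--user_arg", "7"])
print(cmd_args.user_arg)   # 7
print(cmd_args.launcher)   # slurm
```

Because the built-in and user-added arguments live on one parser, a single `parse_args()` call yields a namespace that carries both, which is why the docs can tell users to extend the default parser rather than create their own.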