Commit Graph

11 Commits (85f4d4af58fabd86cd5c792532a2a0c5024a6bc1)

Author SHA1 Message Date
zhanglei ccdaf8ec45 fix the moe_loss for ci and val 2023-09-22 15:45:36 +08:00
Wenwen Qu 8a595837fc merge upstream/develop into feature_add_moe 2023-09-11 16:20:08 +08:00
Wenwen Qu 409f139ba5 merge 2023-08-24 16:38:36 +08:00
huangting4201 94b2aa28fc
Feat/example training internlm (#212)
* feat(train/training_internlm.py): move common init funcs to internlm/train

* feat(train/training_internlm.py): update some public funcs

* feat(train/training_internlm.py): update some public funcs

* feat(evaluation.py): adapt evaluate to streaming dataset

* feat(train/training_internlm.py): minor update based on comments

* fix(training_internlm.py): set train dataloader persistent_workers true only when num_worker>0

* fix(training_internlm.py): fix demo error
2023-08-24 10:00:15 +08:00
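
Note: the persistent_workers fix in #212 above guards against a real PyTorch constraint: DataLoader raises a ValueError if persistent_workers=True while num_workers=0. A minimal sketch of the guard, with a placeholder dataset and illustrative sizes (not the repository's actual values):

```python
from torch.utils.data import DataLoader, Dataset


class DummyDataset(Dataset):
    """Placeholder; the real training code uses InternLM's packed datasets."""

    def __len__(self):
        return 128

    def __getitem__(self, idx):
        return idx


num_workers = 4  # illustrative config value

# persistent_workers=True is only legal when num_workers > 0,
# so derive it from the worker count instead of hard-coding True.
train_dl = DataLoader(
    DummyDataset(),
    batch_size=8,
    num_workers=num_workers,
    persistent_workers=num_workers > 0,
)
```
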
ytxiong a017cab4b3
fix(*): move sequence_parallel to parallel config (#224)
* move sequence_parallel to parallel config

* set the sequence_parallel default value to False

* fix lint

* fix lint

* fix lint
2023-08-24 09:49:04 +08:00
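
Note: #224 above relocates sequence_parallel from the model settings into the parallel section of the training config, defaulting it to False. A hedged sketch of the resulting config block; fields other than sequence_parallel, and all values, are illustrative:

```python
# Training-config sketch (values illustrative). After #224, the
# sequence_parallel flag lives under `parallel` and defaults to False.
parallel = dict(
    zero1=8,                  # ZeRO-1 partition group size
    tensor=1,                 # tensor-parallel world size
    pipeline=1,               # pipeline-parallel world size
    sequence_parallel=False,  # moved here by #224; off by default
)
```
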
huangting4201 53648dc0e9
feat(train.py): support torch profiler (#201)
* feat(train.py): support torch profiling

* feat(train.py): optimize initialize_llm_profile

* feat(train.py): profiling with tp0 and dp0

* move sequence parallel context manager to evaluation func

* fix lint

* move the process for type_ids to load_new_batch

* fix lint

---------

Co-authored-by: yingtongxiong <974106207@qq.com>
2023-08-21 15:23:38 +08:00
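
Note: #201 above hooks torch.profiler into train.py and restricts profiling to tensor-parallel rank 0 / data-parallel rank 0, so only one trace is written per run. A self-contained sketch of that pattern, with hypothetical rank arguments and trace directory (the real initialize_llm_profile signature may differ):

```python
import torch
from torch import profiler


def run_with_profiler(train_step, num_steps, tp_rank, dp_rank, trace_dir="./traces"):
    """Profile only on tp0/dp0; every other rank just runs the steps."""
    if tp_rank != 0 or dp_rank != 0:
        for _ in range(num_steps):
            train_step()
        return
    activities = [profiler.ProfilerActivity.CPU]
    if torch.cuda.is_available():
        activities.append(profiler.ProfilerActivity.CUDA)
    with profiler.profile(
        activities=activities,
        schedule=profiler.schedule(wait=1, warmup=1, active=2, repeat=1),
        on_trace_ready=profiler.tensorboard_trace_handler(trace_dir),
    ) as prof:
        for _ in range(num_steps):
            train_step()
            prof.step()  # advance the wait/warmup/active schedule
```

The resulting traces can be inspected with TensorBoard's profiler plugin.
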
zhanglei 2983076d89 add logger for moe_loss 2023-08-17 16:52:11 +08:00
huangting4201 ff0fa7659f
feat(monitor): support monitor and alert (#175)
* feat(monitor): support monitor and alert

* feat(monitor.py): fix demo error

* feat(monitor.py): move cmd monitor args to config file

* feat(hybrid_zero_optim.py): if overflow occurs send alert msg

* feat(monitor.py): remove alert msg filter

* feat(monitor.py): optimize class MonitorTracker

* feat(monitor.py): optimize code

* feat(monitor.py): optimize code

* feat(monitor.py): optimize code

* feat(monitor.py): optimize code

* feat(train.py): update print to log

* style(ci): fix lint error

* fix(utils/evaluation.py): remove useless code

* fix(model/modeling_internlm.py): fix lint error

---------

Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-08-08 11:18:15 +08:00
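
Note: among the squashed commits in #175, hybrid_zero_optim.py gains an alert on gradient overflow. A rough sketch of that shape; send_alert is a hypothetical stand-in for whatever sink monitor.py actually posts to:

```python
import math


def send_alert(msg: str) -> None:
    """Hypothetical alert sink; the real delivery code lives in the monitor module."""
    print(f"[ALERT] {msg}")


def grad_overflowed(grad_norm: float, step: int) -> bool:
    """If the gradient norm is inf/nan, fire an alert and tell the
    caller to skip this optimizer step."""
    if math.isinf(grad_norm) or math.isnan(grad_norm):
        send_alert(f"grad overflow at step {step}, skipping update")
        return True
    return False
```
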
ytxiong c219065348
feat(*): support sequence_parallel (#180)
* support sequence_parallel for no pipeline

* sequence_parallel does not support the no-flash-attn path

* support sequence parallel for pipeline

* add memory profiler

* Update 13B.py

* add memory profiler

* fix evaluation bug

* remove some unnecessary code

* remove some unnecessary code

* Update parallel_context.py

* modify the config

* remove memory profiler

* modify the config

* support selective dropout
2023-08-07 16:42:52 +08:00
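
Note: the core idea behind #180 above is Megatron-style sequence parallelism: each rank keeps only its shard of the sequence dimension through the pointwise parts of a layer (LayerNorm, dropout), gathering the full sequence only around attention/MLP. A toy illustration of the sharding step, not InternLM's implementation (and per the commit notes, the real path requires flash-attn):

```python
import torch


def shard_along_sequence(x: torch.Tensor, world_size: int, rank: int) -> torch.Tensor:
    """Return this rank's slice of x along the sequence dimension.
    x: [seq_len, batch, hidden]; seq_len must divide evenly."""
    seq_len = x.size(0)
    assert seq_len % world_size == 0, "seq_len must be divisible by world_size"
    chunk = seq_len // world_size
    return x[rank * chunk:(rank + 1) * chunk]


x = torch.randn(8, 2, 16)  # [seq, batch, hidden]
print(shard_along_sequence(x, world_size=4, rank=1).shape)  # torch.Size([2, 2, 16])
```
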
cx 0268d8eda1
refactor(scheduler): rewrite pipeline scheduler (#138)
* refactor(scheduler): rewrite pipeline scheduler

* fix(*): fix pipeline scheduler bugs

* fix(*): fix merge bug

* feat(*): update codes with todo tag

* feat(*): add comments

* feat(internlm/core/scheduler): update recv_prev/next logic

* feat(utils/evaluation.py): update sche metric hook for valid

---------

Co-authored-by: huangting.p <huangting@sensetime.com>
2023-08-03 11:48:12 +08:00
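
Note: #138 above rewrites the pipeline scheduler, including the recv_prev/recv_next bookkeeping between stages. For orientation, a toy calculation of the phase counts in the textbook 1F1B schedule (background only; not necessarily the exact schedule the rewrite implements):

```python
def one_f_one_b_phases(num_microbatches: int, num_stages: int, stage_id: int):
    """Return (warmup forwards, steady 1F1B pairs, cooldown backwards)
    for one stage of the classic 1F1B pipeline schedule."""
    warmup = min(num_stages - stage_id - 1, num_microbatches)
    steady = num_microbatches - warmup
    cooldown = warmup
    return warmup, steady, cooldown


# 4 stages, 8 microbatches: stage 0 runs 3 warmup forwards,
# then 5 forward/backward pairs, then 3 cooldown backwards.
print(one_f_one_b_phases(8, 4, 0))  # (3, 5, 3)
```
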
huangting4201 66a23e326a
feat(utils/evaluation.py): support evaluate (#154)
* style(internlm): fix lint error

* feat(utils/logger.py): support uniscale logger

* fix(utils/logger.py): fix import circular error

* feat(train.py): support dashboard metric panel and fix ci train config

* fix(ci_scripts/train/slurm_train.sh): fix ci train error

* fix(ci_scripts/train/torchrun.sh): fix ci train error

* feat(utils/evaluation.py): support evaluate on validation dataset

* fix(utils/evaluation.py): fix demo error

* fix(ci_scripts/train/ci_7B_sft.py): fix ci train error

* feat(initialize/launch.py): set default value for valid_bsz and valid_every

* fix(ci_scripts/train): restore ci update

* docs(configs/7B_sft.py): update comment for config

* fix(config.json): delete config.json

* fix evaluation bug in scheduler when use_flash_attn=False

* feat(scheduler/no_pipeline_scheduler.py): support micro_bsz>1 in no pp

* modify the judgement in pp and no-pp scheduler

* modify the data_process_func in evaluation

* fix bugs when use_flash_attn=False

* rename symbol

* feat(configs/7B_sft.py): change param valid_bsz to valid_micro_num

* feat(scheduler/no_pipeline_scheduler.py): update how _grad_accum_batch_size is set

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
Co-authored-by: yingtongxiong <974106207@qq.com>
2023-08-02 19:03:59 +08:00
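
Note: #154 above gates evaluation behind config fields; the squashed commits mention defaults for valid_every set in launch.py and the rename of valid_bsz to valid_micro_num. A sketch of what the corresponding data section might look like; surrounding fields and all values are assumptions:

```python
# Validation knobs introduced/renamed by #154 (values illustrative).
data = dict(
    micro_num=4,        # training micro-batches per step
    valid_micro_num=4,  # renamed from valid_bsz by this PR
    valid_every=50,     # run evaluation every N training steps
)
```
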