Commit Graph

13 Commits (4a5cf5d1dfc338dcde2b309dc3010bc5494d829b)

Author SHA1 Message Date
zhanglei d1c7b607fa add param arguments 2023-08-09 15:37:53 +08:00
zhanglei cdf3ed9533 add residual and other moe features 2023-08-09 14:14:18 +08:00
Wenwen Qu 9ad7942568 Merge branch 'develop' into feature_add_moe 2023-08-08 16:51:10 +08:00
Wenwen Qu 2a52452ed2 modified: internlm/model/modeling_internlm.py 2023-08-08 15:47:46 +08:00
huangting4201 ff0fa7659f feat(monitor): support monitor and alert (#175) 2023-08-08 11:18:15 +08:00

* feat(monitor): support monitor and alert

* feat(monitor.py): fix demo error

* feat(monitor.py): move cmd monitor args to config file

* feat(hybrid_zero_optim.py): if overflow occurs send alert msg

* feat(monitor.py): remove alert msg filter

* feat(monitor.py): optimize class MonitorTracker

* feat(monitor.py): optimize code

* feat(monitor.py): optimize code

* feat(monitor.py): optimize code

* feat(monitor.py): optimize code

* feat(train.py): update print to log

* style(ci): fix lint error

* fix(utils/evaluation.py): remove useless code

* fix(model/modeling_internlm.py): fix lint error

---------

Co-authored-by: huangting4201 <huangting3@sensetime.com>
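
The PR above introduces a MonitorTracker class and sends an alert message when the optimizer detects gradient overflow. A minimal sketch of that pattern, assuming a polling thread with a pluggable alert callback; the constructor signature and 60-second default interval are illustrative, not InternLM's exact API:

```python
import threading


class MonitorTracker(threading.Thread):
    """Background thread that periodically runs a check and fires an alert.

    Sketch of the pattern named in #175; check_fn/alert_fn and the default
    interval are assumptions, not the repository's exact interface.
    """

    def __init__(self, check_fn, alert_fn, interval: float = 60.0):
        super().__init__(daemon=True)
        self.check_fn = check_fn    # returns an alert message string, or None
        self.alert_fn = alert_fn    # delivers the message, e.g. to a webhook
        self.interval = interval
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            msg = self.check_fn()
            if msg is not None:
                self.alert_fn(msg)
            self._stop_event.wait(self.interval)

    def stop(self):
        self._stop_event.set()
```

The same alert_fn can serve the overflow case: when the grad scaler detects overflow, the optimizer calls it directly rather than waiting for the next poll.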
Wenwen Qu c357288a8b feat(XXX): add moe 2023-08-07 20:17:49 +08:00
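
Commit c357288a8b above adds the MoE layer itself, and cdf3ed9533 later adds residual and other MoE features. For orientation, a minimal sketch of a top-k gated mixture-of-experts block with an optional shared "residual" expert; the expert width, k=2 routing, and the residual branch design are assumptions, not the exact code in these commits:

```python
import torch
import torch.nn.functional as F
from torch import nn


class SimpleMoE(nn.Module):
    """Top-k gated MoE with an optional always-on 'residual' expert (a sketch)."""

    def __init__(self, hidden: int, num_experts: int = 4, k: int = 2, residual: bool = True):
        super().__init__()

        def make_expert():
            return nn.Sequential(
                nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden)
            )

        self.k = k
        self.gate = nn.Linear(hidden, num_experts)
        self.experts = nn.ModuleList(make_expert() for _ in range(num_experts))
        self.residual_expert = make_expert() if residual else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.shape[-1])               # (n_tokens, hidden)
        weights = F.softmax(self.gate(tokens), dim=-1)    # routing probabilities
        top_w, top_i = weights.topk(self.k, dim=-1)       # each token picks k experts
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            idx, slot = (top_i == e).nonzero(as_tuple=True)
            if idx.numel() > 0:
                out[idx] += top_w[idx, slot].unsqueeze(-1) * expert(tokens[idx])
        if self.residual_expert is not None:
            out = out + self.residual_expert(tokens)      # dense path taken by every token
        return out.reshape_as(x)
```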
ytxiong c219065348 feat(*): support sequence_parallel (#180) 2023-08-07 16:42:52 +08:00

* support sequence_parallel for no pipeline

* sequence_parallel does not support no-flash-attn

* support sequence parallel for pipeline

* add memory profiler

* Update 13B.py

* add memory profiler

* fix evaluation bug

* remove some unnecessary code

* remove some unnecessary code

* Update parallel_context.py

* modify the config

* remove memory profiler

* modify the config

* support selective dropout
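
Sequence parallelism splits activations along the sequence dimension, so per-token operations (norms, dropout, the MLP) each process only a slice of the tokens, gathering the full sequence only where attention needs it; note the PR records that sequence_parallel does not support the no-flash-attn path. A minimal sketch of the scatter/gather pair, assuming an already-initialized torch.distributed process group (e.g. launched via torchrun); the function names are illustrative:

```python
import torch
import torch.distributed as dist


def scatter_sequence(x: torch.Tensor) -> torch.Tensor:
    """Keep only this rank's slice of the sequence dimension.

    x: (batch, seq_len, hidden); seq_len must be divisible by the world size.
    """
    world_size = dist.get_world_size()
    rank = dist.get_rank()
    return x.chunk(world_size, dim=1)[rank].contiguous()


def gather_sequence(x_local: torch.Tensor) -> torch.Tensor:
    """Reassemble the full sequence before an op that needs every token."""
    world_size = dist.get_world_size()
    parts = [torch.empty_like(x_local) for _ in range(world_size)]
    dist.all_gather(parts, x_local)
    return torch.cat(parts, dim=1)
```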
ytxiong d67be17f96 refactor(*): refactor the code with no-apex (#170) 2023-08-03 11:24:12 +08:00

* support no-apex

* add default for use_apex

* fix lint

* modify the RMSNormTorch

* remove some comments

* remove use_apex parameter

* remove some unnecessary code

* optimize the code including import

* remove the import RMSNorm

* remove warnings
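
The no-apex refactor above swaps apex's fused RMSNorm for a torch-only implementation ("modify the RMSNormTorch", "remove the import RMSNorm"). A minimal sketch of such a pure-PyTorch RMSNorm; the epsilon default and float32 upcast are common choices, not necessarily the repository's exact code:

```python
import torch
from torch import nn


class RMSNormTorch(nn.Module):
    """Pure-PyTorch RMSNorm, usable when apex's fused kernel is unavailable."""

    def __init__(self, hidden_size: int, eps: float = 1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dtype = x.dtype
        x = x.float()                                    # upcast for numerical stability
        variance = x.pow(2).mean(dim=-1, keepdim=True)   # mean of squares, no mean-centering
        x = x * torch.rsqrt(variance + self.eps)
        return self.weight * x.to(dtype)
```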
ytxiong 1c397f523f feat(*): support no apex (#166) 2023-08-02 20:32:38 +08:00

* support no-apex

* add default for use_apex

* fix lint

* modify the RMSNormTorch

* remove some comments

* remove use_apex parameter

* remove some unnecessary code
huangting4201 1f7304a8bb feat(utils/logger.py): support uniscale logger (#152) 2023-08-01 17:37:32 +08:00

* style(internlm): fix lint error

* feat(utils/logger.py): support uniscale logger

* fix(utils/logger.py): fix import circular error

* feat(train.py): support dashboard metric panel and fix ci train config

* fix(ci_scripts/train/slurm_train.sh): fix ci train error

* fix(ci_scripts/train/torchrun.sh): fix ci train error

* fix(ci_scripts/train): restore ci update

* fix(config.json): delete alert webhook

* feat(train.py): optimize func init logger

* feat(config.json): delete config.json

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
ytxiong 5ee651c2f1 feat(*): support not-flash-attn for pp and no-pp (#145) 2023-07-28 16:13:04 +08:00

* support not flash attention for no-pp

* support pipeline

* modify the config

* refactor the code

* refactor the code

* remove some unnecessary code
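
PR #145 adds a fallback path for running with flash attention disabled. The non-flash path computes standard scaled-dot-product attention; a minimal sketch, with the function name and causal-masking default as illustrative assumptions:

```python
import math

import torch


def vanilla_attention(q, k, v, causal: bool = True):
    """Plain scaled-dot-product attention over (batch, heads, seq_len, head_dim)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    if causal:
        seq_len = q.shape[-2]
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=q.device), diagonal=1
        )
        scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```

This materializes the full (seq_len × seq_len) score matrix, which is exactly the memory cost flash attention avoids, hence the separate code path.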
huangting4201 762ab297ee feat(core/scheduler): support pipeline parallel (#98) 2023-07-24 20:52:09 +08:00

* feat(utils/writer.py): support tensorboard writer

* feat(utils/writer.py): add class comment

* feat(core): support pipeline parallel

* fix(core): fix demo running error

* feat(solver/optimizer): add pp zero optimizer

* fix(solver/optimizer): fix word spelling error

* feat(core/scheduler): add new dir scheduler in core/

* fix(core): fix ci lint error

* feat(solver/optimizer): merge pp and nopp optimizer

* doc(usage.md): update usage doc

* feat(core/scheduler): support post func

* feat(core/scheduler): add dtype para in pp sche and update func get_tensor_shape

* feat(core/scheduler): add _load_micro_batch in base scheduler

* feat(core/scheduler): support optimizer overlap communication in pp scheduler

* feat(core/scheduler): delete data process func code

* feat(core/trainer): schedule pre processing for all schedule

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
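
PR #98 adds a _load_micro_batch helper to the base scheduler: the pipeline splits each global batch into micro-batches so that stages can keep busy and overlap communication with compute. A minimal sketch of that slicing, assuming the batch is a dict of tensors sharing a leading batch dimension; the argument names are illustrative:

```python
def load_micro_batch(batch: dict, micro_step: int, micro_bsz: int) -> dict:
    """Slice micro-batch number `micro_step` out of a full batch of tensors."""
    start = micro_step * micro_bsz
    return {key: value[start : start + micro_bsz] for key, value in batch.items()}
```

Each pipeline stage then runs one forward (and backward) pass per micro-batch, which is what lets the later commit in this PR overlap optimizer communication with the schedule.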
Sun Peng fa7337b37b initial commit 2023-07-06 12:55:23 +08:00