Commit Graph

172 Commits (b46d1c17aff37d50f4365bbbbccf01505dce3598)

Author SHA1 Message Date
Wenwen Qu b46d1c17af merge upstream/develop into feature_add_moe 2023-09-11 16:27:33 +08:00
Wenwen Qu 8a595837fc merge upstream/develop into feature_add_moe 2023-09-11 16:20:08 +08:00
Wenwen Qu b10e5132fe fix bugs with _compute_norm_with_moe_group 2023-09-08 18:09:13 +08:00
Wenwen Qu 6cf0fec314 replace flashatten experts by feedforward experts 2023-09-08 18:04:57 +08:00
Wenwen Qu cd6b28b073 use dummy mode to generate random numbers in model construction 2023-09-08 17:56:42 +08:00
Wenwen Qu 1baa7b41f0 Merge branch 'feature_add_moe' of https://github.com/blankde/InternLM into feature_add_moe 2023-08-31 18:48:46 +08:00
Wenwen Qu 7ca5da27e8 fix group_norms computing in hybrid_zero_optim 2023-08-31 18:46:13 +08:00
Wenwen Qu 2ad5f512b5 remove moe_loss_coeff parameter passing 2023-08-31 18:44:58 +08:00
Wenwen Qu 9af934faf3 Update .pre-commit-config.yaml 2023-08-30 16:27:39 +08:00
Wenwen Qu e498f9262e fix bugs 2023-08-30 16:22:35 +08:00
Wenwen Qu b021995199 fix bugs 2023-08-30 16:14:33 +08:00
Wenwen Qu f3da80a7ca reformat code 2023-08-28 14:46:03 +08:00
Wenwen Qu 629e6a5ad1 add comments for moe 2023-08-25 19:03:31 +08:00
Wenwen Qu aa2612edc4 Merge branch 'develop' into feature_add_moe 2023-08-25 13:35:56 +08:00
Guoteng 42851be36b feat(ckpt): add train config into ckpt (#231) 2023-08-24 19:57:32 +08:00
huangting4201 29dd401071 fix(train.py): fix overflow grad norm error (#230) 2023-08-24 17:46:27 +08:00
Guoteng 2acb278e1f fix(writer): fix tensorboard resume bug (#229) 2023-08-24 17:38:39 +08:00
huangting4201 04c02a61b2 fix(ci): fix train error (#228) 2023-08-24 17:11:32 +08:00
    Co-authored-by: huangting4201 <huangting3@sensetime.com>
Wenwen Qu 0e6b1f856c add support for moe checkpoint 2023-08-24 17:01:14 +08:00
Guoteng 7c820cfa40 feat(init): add skip args check flag and add zero overlap flag (#222) 2023-08-24 16:44:18 +08:00
    * feat(init): add skip args check flag
    * fix(optim): add param overlap enable flag
Wenwen Qu e32fbaaae2 Update 7B_sft.py 2023-08-24 16:40:11 +08:00
Wenwen Qu 409f139ba5 merge 2023-08-24 16:38:36 +08:00
ytxiong 9cd1e0314e fix(pipeline): modify the sequence_parallel in pipeline (#227) 2023-08-24 14:45:40 +08:00
    * move sequence_parallel to parallel config
    * set the sequece_parallel default value is False
    * fix lint
    * fix lint
    * fix lint
    * modify the sequence_parallel in pp
huangting4201 9eec3d9465 fix(conflicts): merge main to develop 2023-08-24 14:26:10 +08:00
ytxiong eee93b5a68 test(model): support fp32 with flash_attn (#223) 2023-08-24 13:54:44 +08:00
    * support tf32 with flash
    * move autocast to attention
    * fix lint
    * fix lint
    * fix lint
    * fix lint
    * fix some bugs in model
    * modify the convert dtype
huangting4201 fd28bcab58 feat(data/utils.py): add new dataset type code for streaming dataset (#225) 2023-08-24 13:46:18 +08:00
huangting4201 94b2aa28fc Feat/example training internlm (#212) 2023-08-24 10:00:15 +08:00
    * feat(train/training_internlm.py): move common init funcs to internlm/train
    * feat(train/training_internlm.py): update some public funcs
    * feat(train/training_internlm.py): update some public funcs
    * feat(evaluation.py): adapt evaluate to streaming dataset
    * feat(train/training_internlm.py): minor update based on comments
    * fix(training_internlm.py): set train dataloader persistent_workers true only when num_worker>0
    * fix(training_internlm.py): fix demo error
ytxiong a017cab4b3 fix(*): move sequence_parallel to parallel config (#224) 2023-08-24 09:49:04 +08:00
    * move sequence_parallel to parallel config
    * set the sequece_parallel default value is False
    * fix lint
    * fix lint
    * fix lint
Sun Peng 32664328e7 Feat/overlap_bcast_forward (#218) 2023-08-23 16:59:59 +08:00
    * feat/support bcast forward overlao
    * feat/optimize the bcast call
    * feat/optimize the bcast call
    * feat/optimize the bcast call
    * fix lint
    * fix lint
    * fix lint
    * fix lint
    * add torch.cuda.synchronize in save_checkpoint
    Co-authored-by: sunpeng <sunpengsdu@gmail.com>
cx a48210f1f3 feat(memory_profiler): improve memory profiler (#217) 2023-08-23 14:18:33 +08:00
Guoteng 29779c75f0 feat(ckpt): add auto ckpt load and singal quit (#216) 2023-08-23 14:17:45 +08:00
    Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
Wenwen Qu a1f99b64bc Merge branch 'feature_add_moe' of https://github.com/blankde/InternLM into feature_add_moe 2023-08-23 13:52:29 +08:00
Wenwen Qu 401796940a Merge pull request #2 from blankde/feature_add_moe_pp_zl 2023-08-23 13:51:37 +08:00
    feat(moe): moe pipeline support
zhanglei 72e3b1afd5 change the scale position for latent moe_loss 2023-08-23 13:25:20 +08:00
zhanglei 3a3ca71459 fix moe loss logger for the interleaved pp 2023-08-23 13:03:21 +08:00
zhanglei d1d21546d9 refactor code 2023-08-23 11:03:08 +08:00
zhanglei 3f32ee31bb fix the bug that missing scale the latent moe loss 2023-08-23 10:53:36 +08:00
loveSnowBest e1cefaef6b fix huggingface link (#219) 2023-08-22 22:20:01 +08:00
zhanglei 12b739e83b Merge branch 'feature_add_moe' of github.com:blankde/InternLM into feature_add_moe_pp_zl 2023-08-22 18:56:29 +08:00
Wenwen Qu 94b8b18a49 optimize code with moe norm computing 2023-08-22 14:30:13 +08:00
Wenwen Qu 0ab3de8994 fix bugs with compute moe norm 2023-08-22 14:00:07 +08:00
Lyu Han 716131e477 introduce how to deploy 4-bit quantized internlm model (#207) 2023-08-22 11:31:01 +08:00
zhanglei 8407c203a3 refactor code 2023-08-22 10:53:21 +08:00
zhanglei ac243e5b33 refactor code 2023-08-22 10:42:39 +08:00
zhanglei b01e20adc8 update moe training cfg on real-dataset 2023-08-22 10:36:17 +08:00
zhanglei a8dd77ce76 fix bug on logger 2023-08-22 10:35:17 +08:00
Kai Chen 075648cd70 update readme related to internlm-chat-7v-v1.1 (#214) 2023-08-22 08:08:44 +08:00
Wenwei Zhang 58108413bd Update readme for news of InternLM-Chat-7B-v1.1 and Lagent (#213) 2023-08-22 07:46:01 +08:00
    * update readme
    * fix typo
kkscilife cc3c48ae47 test(ci_scripts): add load ckpt cases (#208) 2023-08-21 15:24:43 +08:00
    * fix format
    * add scripts for load ckpt case
    * update test config
    * debug:use var in json
    * fix syntax error
    * export pythonpath
    * use absolute path
    * use father path of workspace
    * debug load new ckpt
    * change data path
    * add train folder
    * fix code format
    * fix pylint warning
    Co-authored-by: wangmengke <wangmengke@pjlab.org.cn>
huangting4201 53648dc0e9 feat(train.py): support torch profiler (#201) 2023-08-21 15:23:38 +08:00
    * feat(train.py): support torch profiling
    * feat(train.py): optimize initialize_llm_profile
    * feat(train.py): profiling with tp0 and dp0
    * move sequence parallel context manager to evalation func
    * fix lint
    * move the process for type_ids to load_new_batch
    * fix lint
    Co-authored-by: yingtongxiong <974106207@qq.com>
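
A listing in this "Author SHA1 Message Date" shape can be regenerated from a local clone with `git log` pretty formats; a minimal sketch, assuming the branch name `feature_add_moe` (taken from the merge messages above) exists locally:

```shell
# Regenerate an "Author SHA1 Message Date" listing from a local clone.
# --abbrev=10 matches the 10-character short hashes shown above.
git log feature_add_moe --abbrev=10 \
  --pretty=format:'%an %h %s %cd' \
  --date=format:'%Y-%m-%d %H:%M:%S %z'

# The commit count shown in the page header:
git rev-list --count feature_add_moe
```

`%an` is the author name, `%h` the abbreviated hash, `%s` the subject line, and `%cd` the committer date rendered per `--date`.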