Yang Gao
5539f9db50
fix when resuming lr_scheduler without loading optimizer (#565)
2023-12-29 20:22:39 +08:00
Guoteng
220953d7e5
fix(metrics): remove redundant cuda memory in metric calculations (#557)
2023-12-29 20:21:24 +08:00
Guoteng
c39d758a8a
feat(logger): add tensorboard key value buffer (#549)
* fix
2023-12-29 16:23:47 +08:00
jiaxingli
d418eba094
fix(model): add ckpt_type constraint when loading ckpts (#542)
* support hf llama
* importerror
* modeling
* fix bug
* add assert
2023-12-20 16:43:27 +08:00
kkscilife
a58bf853db
change into reserved (#550)
Co-authored-by: kkscilife <wangmengke@pjlab.org.cn>
2023-12-20 14:41:09 +08:00
jiaopenglong
de53b17506
fix token grad norm with tp (#547)
2023-12-18 18:33:28 +08:00
Wenwen Qu
513ebb9c3a
fix(moe): fix moe zero mode bug (#548)
* fix moe zero mode bugs
* update moe config to fit training on 8 GPU
2023-12-18 14:39:42 +08:00
jiaxingli
bbb5651582
fix(model): change model_type `LLAMA` to `LLAMA2` (#539)
* support hf llama
* importerror
* modeling
* fix bug
2023-12-13 17:24:45 +08:00
Guoteng
5ecb6aa712
fix(pp): fix no-packed dataset load micro batch error (#538)
* fix(pp): fix no-packed dataset load micro batch error
* fix based on comment
2023-12-13 14:48:32 +08:00
ytxiong
432bd5ee9f
fix the bug so that the sequence parallel norm is all-reduced when overlap is False (#534)
2023-12-12 16:22:39 +08:00
jiaxingli
d904730be7
feat(ckpt): support auto resume in Volc and Ali (#529)
* multipart upload
* upload
* storage
* change ak sk name
* auto resume
* bug
2023-12-12 13:27:24 +08:00
Pryest
cc5b15349d
fix(metric): add metric dtype control (#533)
* fix demo config to avoid implicitness
* fix default behavior
2023-12-11 19:36:31 +08:00
jiaxingli
6c0ff4820f
feat(model): support llama model with checkpoint loading (#532)
* support hf llama
* importerror
* modeling
2023-12-11 16:25:24 +08:00
Guoteng
81ffb3d824
fix(test): fix type_ids unpack bug (#530)
2023-12-07 18:47:19 +08:00
jiaxingli
828033aed5
fix(storage): unify the name of ak & sk (#527)
* multipart upload
* upload
* storage
* change ak sk name
2023-12-06 15:31:44 +08:00
ytxiong
809ad9ebc8
fix the type_ids when micro_num=1 and use_flash_attn=False (#516)
2023-12-06 14:38:28 +08:00
jiaopenglong
112c34ae09
feat(grad_norm): vocab grad norm profiling (#519)
* compute vocab grad norm && save pt
* add grad_norm profiling interval && refactor save grad norm
* fix ci test_pipeline
2023-12-06 13:52:42 +08:00
jiaopenglong
9fc252f40e
add output embedding tf32 option (#523)
2023-12-06 13:50:59 +08:00
ytxiong
c581cc4c02
fix(model): add IS_SEQUENCE_PARALLEL check for norm module (#528)
* add IS_SEQUENCE_PARALLEL check for norm module
* fix lint
* remove comments
* replace the named_children by named_modules
* fix lint
* fix the spelling bug and move the sequence parallel check to training_internlm
2023-12-06 12:06:22 +08:00
jiaxingli
2dbbab7418
fix test_checkpoint (#526)
2023-12-04 15:38:13 +08:00
jiaxingli
1738bee002
feat(storage): use multipart upload when using oss (#520)
* multipart upload
* upload
* storage
2023-12-01 17:05:58 +08:00
kkscilife
66bffffe5c
add unit test case (#524)
Co-authored-by: wangmengke <wangmengke@pjlab.org.cn>
2023-12-01 16:12:39 +08:00
Guoteng
b3be333aa2
fix(ci): fix test model ckpt ci test (#518)
2023-11-30 19:16:35 +08:00
kkscilife
b79d5ea7ae
test(workflow): add workflow for loss test and change trigger event (#513)
* add workflow for loss test
* change trigger event
* optimize trigger event
Co-authored-by: wangmengke <wangmengke@pjlab.org.cn>
2023-11-30 11:04:07 +08:00
Guoteng
757e19e01a
1. fix(config): rampup_batch_size default value BC. (#515)
2. fix(config): standardize config parameter access.
3. feat(launch): add warmup_process_group
4. feat(memory): add cuda_memory_analyze
2023-11-28 19:33:46 +08:00
jiaxingli
06e8301861
name (#514)
2023-11-24 18:24:54 +08:00
jiaxingli
b59641715a
Feat(QA): Check loss when swapping micro_num and micro_bsz && Check grad norm (#510)
* unitest_only_forward
* memory_test
* doc fix
* check loss
* check grad norm
2023-11-24 12:05:14 +08:00
Shuo Zhang
0d3811c029
feat(model): add rope_base interface (#512)
2023-11-23 16:30:14 +08:00
jiaxingli
7776693373
feat(doc): add GPU memory info for 7B & 20B models (#507)
* unitest_only_forward
* memory_test
* doc fix
2023-11-21 19:20:02 +08:00
jiaopenglong
f5aea7e08c
fix(timeout): larger timeout (#495)
* larger initialize timeout
* unify time format
* update timeout thresholds
2023-11-21 19:19:22 +08:00
jiaxingli
eba2b859fc
feat(seed): set global seed for every model initialization (#496)
* bind seed
2023-11-17 14:42:50 +08:00
kkscilife
679ed3c8ca
test(workflow): add model init test (#504)
* add model init test
* reduce timeout
Co-authored-by: wangmengke <wangmengke@pjlab.org.cn>
2023-11-17 09:59:34 +08:00
Guoteng
0bfc86205e
feat(train): support rampup_batch_size and fix bugs (#493)
2023-11-16 19:51:01 +08:00
jiaxingli
4a6987d5e7
unitest_only_forward (#484)
2023-11-16 15:30:57 +08:00
jiaxingli
e8cf27b8c0
Feat(QA): Check init model weights (#502)
* check_init
2023-11-16 11:03:19 +08:00
YWMditto
be5b9ea2fa
feat(train): update get_train_data_loader to make logic clearer (#498)
* update get_train_data_loader
* update get_train_data_loader, del old doc
Co-authored-by: YWMditto <862779238@qq.com>
2023-11-14 17:05:15 +08:00
kkscilife
2b984ffa58
test(workflow): add ci workflow for acc test (#485)
* add ci workflow for acc test
* change train script
* add --kill-on-bad-exit=1 and change always to !cancelled
Co-authored-by: wangmengke <wangmengke@pjlab.org.cn>
2023-11-13 18:04:01 +08:00
jiaopenglong
626ed0fc5e
fix(train): unify the exp paths (#492)
2023-11-11 20:15:59 +08:00
jiaopenglong
3418898cbe
fix(alert): send exception of all ranks (#491)
* catch exception of all ranks
* monitor task only if DO_ALERT is True
2023-11-10 19:04:31 +08:00
huangting4201
8ada074cfd
fix(docs): fix 20B demo log (#490)
* feat(docs): change 30B demo to 20B
* feat(docs): fix demo log
2023-11-10 15:57:11 +08:00
Yang Gao
07026d1821
fix dataset types when using random dataset (#489)
2023-11-10 15:08:22 +08:00
huangting4201
5d3242027a
docs(code-docs): add 20b training demo (#488)
* feat(docs): change 30B demo to 20B
2023-11-10 14:00:27 +08:00
Guoteng
b7ecdba617
feat(ckpt): save ckpt when reaching total step count (#486)
2023-11-09 21:07:16 +08:00
Pryest
5b67db33d0
fix(metric): use float32 to compute ppl (#481)
2023-11-09 20:26:46 +08:00
jiaopenglong
a435980e0c
rename vars (#468)
2023-11-09 20:04:35 +08:00
jiaopenglong
0763bf3972
init light monitoring on all ranks (#462)
2023-11-09 20:04:21 +08:00
YWMditto
0218e3131c
feat(tools): support origin internlm architecture in web_demo (#478)
* debug for web_demo_internlm
* support web_demo_internlm
* update readme.md
* update web_demo.py
* update InternLM/tools/load_internlm_model.py
* update apis/inference.py
* update tools/load_internlm_model
* del private info in load_internlm_model.py
* fix some info
Co-authored-by: YWMditto <862779238@qq.com>
2023-11-09 20:01:55 +08:00
jiaxingli
bd7e501b69
Feat(QA): Check model weights for acc (#476)
* check_weights
2023-11-09 16:16:29 +08:00
x54-729
a38af602bc
feat(doc): add torch_dtype to examples in README (#479)
* add torch_dtype to README examples
* typo
2023-11-09 15:58:58 +08:00
YWMditto
79e84fade3
feat(doc): add dynamic ntk example (#480)
* add dynamic ntk compare example
Co-authored-by: YWMditto <862779238@qq.com>
2023-11-09 13:12:38 +08:00