Guoteng
0bfc86205e
feat(train): support_rampup_batch_size and fix bugs ( #493 )
2023-11-16 19:51:01 +08:00
jiaxingli
4a6987d5e7
unitest_only_forward ( #484 )
2023-11-16 15:30:57 +08:00
jiaxingli
e8cf27b8c0
Feat(QA): Check init model weights ( #502 )
* check_init
* check_init
* check_init
* check_init
2023-11-16 11:03:19 +08:00
YWMditto
be5b9ea2fa
feat(train): update get_train_data_loader to make logic clearer ( #498 )
* update get_train_data_loader
* update get_train_data_loader, del old doc
---------
Co-authored-by: YWMditto <862779238@qq.com>
2023-11-14 17:05:15 +08:00
kkscilife
2b984ffa58
test(workflow): add ci workflow for acc test ( #485 )
* add ci workflow for acc test
* change train script
* add --kill-on-bad-exit=1 and change always to !cancelled
---------
Co-authored-by: wangmengke <wangmengke@pjlab.org.cn>
2023-11-13 18:04:01 +08:00
jiaopenglong
626ed0fc5e
fix(train): unify the exp paths ( #492 )
2023-11-11 20:15:59 +08:00
jiaopenglong
3418898cbe
fix(alert): send exception of all ranks ( #491 )
* catch exception of all ranks
* monitor task only if DO_ALERT is True
2023-11-10 19:04:31 +08:00
huangting4201
8ada074cfd
fix(docs): fix 20B demo log ( #490 )
* feat(docs): change 30B demo to 20B
* feat(docs): change 30B demo to 20B
* feat(docs): fix demo log
2023-11-10 15:57:11 +08:00
Yang Gao
07026d1821
fix dataset types when using random dataset ( #489 )
2023-11-10 15:08:22 +08:00
huangting4201
5d3242027a
docs(code-docs): add 20b training demo ( #488 )
* feat(docs): change 30B demo to 20B
* feat(docs): change 30B demo to 20B
2023-11-10 14:00:27 +08:00
Guoteng
b7ecdba617
feat(ckpt): save ckpt when reach total step count ( #486 )
2023-11-09 21:07:16 +08:00
Pryest
5b67db33d0
fix(metric): use float32 to compute ppl ( #481 )
2023-11-09 20:26:46 +08:00
jiaopenglong
a435980e0c
rename vars ( #468 )
2023-11-09 20:04:35 +08:00
jiaopenglong
0763bf3972
init light monitoring on all ranks ( #462 )
2023-11-09 20:04:21 +08:00
YWMditto
0218e3131c
feat(tools): support origin internlm architecture in web_demo ( #478 )
* debug for web_demo_internlm
* support web_demo_internlm
* update readme.md
* update web_demo.py
* update InternLM/tools/load_internlm_model.py
* update apis/inference.py
* update apis/inference.py
* update tools/load_internlm_model
* del private info in load_internlm_model.py
* fix some info
* fix some info
---------
Co-authored-by: YWMditto <862779238@qq.com>
2023-11-09 20:01:55 +08:00
jiaxingli
bd7e501b69
Feat(QA): Check model weights for acc ( #476 )
* check_weights
* check_weights
2023-11-09 16:16:29 +08:00
x54-729
a38af602bc
feat(doc): add torch_dtype to examples in README ( #479 )
* add torch_dtype to README examples
* typo
2023-11-09 15:58:58 +08:00
YWMditto
79e84fade3
feat(doc): add dynamic ntk example ( #480 )
* add dynamic ntk compare example
* add dynamic ntk compare example
---------
Co-authored-by: YWMditto <862779238@qq.com>
2023-11-09 13:12:38 +08:00
x54-729
1706ae2eaa
fix(tools): set bos, eos, pad in convert2hf to fix improper generation ( #471 )
* Set bos eos pad in convert2hf to fix improper generation
* set pos eos pad in convert2hf to fix improper generation
2023-11-07 23:10:06 +08:00
Yang Gao
6f69bd2087
feat(data): walk folder to get dataset_type_ids_map ( #477 )
* walk folder to get dataset_type_ids_map
* fix a bug
2023-11-07 19:21:10 +08:00
Yang Gao
4d1b1cd5f1
fix(data): broadcast list when walking folders ( #475 )
2023-11-07 13:12:35 +08:00
YWMditto
095ebfff9d
feat(tools): support dynamic ntk rope in transformers ( #470 )
* support dynamic ntk in transformers
* support dynamic ntk in transformers
* support dynamic ntk in transformers
* add rope doc
* add rotary config in configuration_internlm.py
---------
Co-authored-by: YWMditto <862779238@qq.com>
2023-11-06 23:15:06 +08:00
x54-729
42ad9cc786
fix(readme): fix model path in readme ( #474 )
2023-11-06 19:26:48 +08:00
x54-729
b9c813a972
fix(tools): fix streaming_chat and update docs ( #467 )
* move hf model to tools/transformers/internlm_model
* fix stream_chat
* Add stream_chat example
* fix import
* Add __init__ to internlm_model
* Add hf link
* fix import of tools/tokenizer.py
* fix huggingface url in readme
2023-11-03 16:12:37 +08:00
jiaopenglong
debb7e77b9
refactor grad norm profiling ( #466 )
2023-11-03 10:55:26 +08:00
jiaopenglong
d537e45456
send exception to light monitor only if the server is available ( #465 )
2023-11-03 10:55:16 +08:00
kkscilife
6b2bff421c
change slurm partition ( #464 )
Co-authored-by: wangmengke <wangmengke@pjlab.org.cn>
2023-11-02 13:25:46 +08:00
Wenwen Qu
21624f6f81
fix(moe): remove norm&gate force sync ( #448 )
* add zero broadcast_sync
* delete old sync logic
* fix merged error
* refactor code
* remove some unused function (is norm/gate group)
2023-11-01 11:29:55 +08:00
Yang Gao
f77f376fd6
fix(os): fix FileNotFoundError in storage_manager ( #455 )
* use rank0 to makedirs
* use try-except to handle file error
* fix ci
2023-10-27 22:32:46 +08:00
jiaxingli
4995060d84
feat(storage): support ali oss ckpt saving ( #439 )
2023-10-27 22:32:10 +08:00
jiaxingli
e6d8ebc3e5
volc_path ( #454 )
2023-10-27 18:53:06 +08:00
jiaopenglong
87a3c5c374
feat(optimizer): zero gradient count ( #449 )
* add zero grad count
* fix layer norm with pp
* fix layer norm with pp
* add zero_grad_profiling option
* fix param_metrics is not a tensor
2023-10-27 16:26:55 +08:00
ytxiong
ad70e323eb
fix(optimizer):broadcast ( #453 )
* fix broadcast synchronize()
* fix synchronize
2023-10-26 17:54:54 +08:00
ytxiong
aeee9fd2a9
fix broadcast synchronize() ( #450 )
2023-10-26 17:33:00 +08:00
ytxiong
1d7e2d04ec
fix(*)/all-reduce for norm in sequence parallel ( #443 )
* fix all-reduce norm grad
* change the order of dp and sp all-reduce
* fix lint
2023-10-25 14:16:32 +08:00
jiaopenglong
949a0a1d55
feat(optimizer): add layer norm to tensorboard ( #429 )
* add layer norm to tensorboard
* test moe layer norm
* add function: reduce grads
2023-10-23 17:07:04 +08:00
kkscilife
140be20511
test(workflow): add unit test yaml ( #427 )
* add unit test yaml
* add main branch
---------
Co-authored-by: changxiaodongTHU <2437105032@qq.com>
2023-10-20 14:22:58 +08:00
Wenwen Qu
3c992a2101
fix(pipeline): fix interleave type assert and metrics error ( #423 )
* fix interleave type assert bug
* refactor code for assert
* fix is_no_pp_or_last_stage logic
2023-10-19 17:29:30 +08:00
jiaxingli
3ea46324dd
fix: unitest ( #424 )
2023-10-19 15:19:40 +08:00
Wenwen Qu
2c5395fdfd
Doc(moe): add documentation for moe training ( #411 )
* add doc for moe
* fix moe and zero1 check in args_sanity_check
* restore moe config file
2023-10-19 10:01:12 +08:00
Guoteng
3ea94f2e2a
fix(utils): disable bench_net in gputest.py ( #421 )
2023-10-19 10:00:57 +08:00
jiaopenglong
4b5bdedff2
feat(monitor): send exception to light monitor ( #420 )
* send exception to light monitor
* update try_import_send_exception
2023-10-18 21:00:21 +08:00
jiaxingli
30f610b1fa
Test(pp): test pipeline parallel ( #413 )
* test: pp
* feat: add pp test
* test pp
* pp test
* pp test
* test pp
2023-10-18 17:53:08 +08:00
Wenwen Qu
aa5e34d815
compatible with old ckpt ( #418 )
2023-10-17 17:25:36 +08:00
Wenwen Qu
eeef07934a
fix(moe): fix moe compatibility for fsdp and memory profiling ( #417 )
* fix moe compatibility for fsdp and memory profiling
* update moe config
2023-10-17 14:13:48 +08:00
Guoteng
37e0c86e5a
fix(init): allow resume_tb_folder is an empty string ( #391 )
2023-10-13 03:46:14 -05:00
jiaxingli
71a0388b87
feat(storage): support volc oss ckpt saving ( #397 )
* feat: support volc tos
* feat: support volc oss
2023-10-13 03:44:29 -05:00
huangting4201
9a731b6e9b
fix(optimizer/fsdp_optimizer.py): fsdp process empty params group ( #408 )
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-10-10 20:06:04 +08:00
Pryest
b3645b0244
fix(model): fix errant inference_forward ( #396 )
* Fix errant inference_forward.
* Recover use_dynamic_ntk_rope.
* Fix bugs.
* Fit to flash attention 1.0
* Fit to flash attention 1.0
* Fit to flash attention 1.0.5.
* Fit to flash attention 1.0.5.
2023-10-09 08:29:11 -05:00
zaglc
a075153adf
feat(train): add fsdp training option ( #293 )
* feat(fsdp): add training option for fsdp
* fix(fsdp): add mix-precision training
* fix failure in lint-check
* fix format problem
* restore 7B_sft
* fix load ckpt bug
* fix load ckpt bug2
* feat(solver/optimizer): add new file fsdp_optimizer.py
* fix(train.py): fix ci lint error
* fix(fsdp_optimizer.py): wait grad async
* fix bug for loading ckpts when zero1 < dp_size
* fix(context/parallel_context.py): only log warning for fsdp
* change ckpt name
* fix(model/modeling_internlm.py): fix checkpoint=False runtime error
* more wrap
* add support for FSDP with tp
* modify args_sanity_check for fsdp with pipeline and fsdp with moe
* fix(internlm/utils/parallel.py): fix circular import
* fix(internlm/train/training_internlm.py): remove set IS_TENSOR_PARALLEL attr
* fix(internlm/train/training_internlm.py): update wrap class and fix lint error
* fix(internlm/model): reset dropout_selective_checkpoint=True
* feat(configs/7B_sft.py): move fsdp config to parallel zero1
* feat(configs/7B_sft.py): adapt to old version config
---------
Co-authored-by: huangting4201 <1538303371@qq.com>
2023-10-09 18:59:31 +08:00