Commit Graph

101 Commits (dccdfc7e4eb2bc2257cf685fb674f0af0bed3b22)

Author SHA1 Message Date
Wenwen Qu dccdfc7e4e support mixtral-8x7b 2024-01-16 19:23:11 +08:00
jiaxingli d904730be7
feat(ckpt): support auto resume in Volc and Ali (#529)
* multipart upload

* upload

* storage (×4)

* change ak sk name (×4)

* storage (×2)

* auto resume (×3)

* bug
2023-12-12 13:27:24 +08:00
Pryest cc5b15349d
fix(metric): add metric dtype control (#533)
* fix(metric): add metric dtype control

* fix demo config to avoid implicitness

* fix default behavior
2023-12-11 19:36:31 +08:00
jiaxingli 6c0ff4820f
feat(model): support llama model with checkpoint loading (#532)
* support hf llama (×4)

* fix ImportError (×2)

* modeling (×2)
2023-12-11 16:25:24 +08:00
jiaxingli 828033aed5
fix(storage): unify the name of ak & sk (#527)
* multipart upload

* upload

* storage (×4)

* change ak sk name (×4)

* storage (×2)
2023-12-06 15:31:44 +08:00
ytxiong 809ad9ebc8
fix the type_ids when micro_num=1 and use_flash_attn=False (#516) 2023-12-06 14:38:28 +08:00
jiaopenglong 112c34ae09
feat(grad_norm): vocab grad norm profiling (#519)
* compute vocab grad norm && save pt

* add grad_norm profiling interval && refactor save grad norm

* fix ci test_pipeline
2023-12-06 13:52:42 +08:00
jiaopenglong 9fc252f40e
add output embedding tf32 option (#523) 2023-12-06 13:50:59 +08:00
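For context, TF32 is NVIDIA's reduced-precision matmul mode on Ampere-and-newer GPUs. A minimal sketch of what such a toggle can look like in PyTorch (the helper name is hypothetical, and note that PyTorch's TF32 flags are global rather than scoped to the output embedding as the actual option presumably is):

```python
import torch

def set_output_embed_tf32(enable: bool) -> None:
    # Hypothetical helper: TF32 keeps fp32 dynamic range but rounds the
    # mantissa to 10 bits, speeding up large matmuls such as the
    # output-embedding (lm head) projection.
    torch.backends.cuda.matmul.allow_tf32 = enable
    torch.backends.cudnn.allow_tf32 = enable
```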
ytxiong c581cc4c02
fix(model): add IS_SEQUENCE_PARALLEL check for norm module (#528)
* add IS_SEQUENCE_PARALLEL check for norm module

* fix lint

* remove comments

* replace named_children with named_modules

* fix lint

* fix the spelling bug and move the sequence-parallel check to training_internlm
2023-12-06 12:06:22 +08:00
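A rough sketch of the tagging pattern this commit describes, assuming an attribute name taken from the commit title (the exact helper and the norm classes covered are assumptions):

```python
import torch.nn as nn

IS_SEQUENCE_PARALLEL = "is_sequence_parallel"  # attribute name per the commit title

def mark_norm_params(model: nn.Module) -> None:
    # named_modules (rather than named_children) also reaches norm layers
    # nested inside transformer blocks, matching the squash notes above.
    for _, module in model.named_modules():
        if isinstance(module, nn.LayerNorm):  # a real impl would include RMSNorm too
            for param in module.parameters():
                setattr(param, IS_SEQUENCE_PARALLEL, True)
```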
jiaxingli 1738bee002
feat(storage): use multipart upload when using oss (#520)
* multipart upload

* upload

* storage (×4)
2023-12-01 17:05:58 +08:00
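As an illustration only: Aliyun OSS (and Volc TOS) expose S3-compatible APIs, so a multipart checkpoint upload can be sketched with boto3; the endpoint, bucket, and paths below are placeholders, not values from the PR:

```python
import boto3

client = boto3.client(
    "s3",
    endpoint_url="https://oss.example.com",  # placeholder endpoint
    aws_access_key_id="<ak>",
    aws_secret_access_key="<sk>",
)
# upload_file transparently switches to multipart transfer for large
# objects, which is the behavior the PR title describes.
client.upload_file("ckpt/model_tp0_pp0.pt", "my-bucket", "ckpts/model_tp0_pp0.pt")
```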
Guoteng 757e19e01a
1. fix(config): rampup_batch_size default value BC. (#515)
2. fix(config): standardize config parameter access.
3. feat(launch): add warmup_process_group
4. feat(memory): add cuda_memory_analyze
2023-11-28 19:33:46 +08:00
Shuo Zhang 0d3811c029
feat(model): add rope_base interface (#512) 2023-11-23 16:30:14 +08:00
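For reference, rope_base is the base of the geometric frequency series used by rotary position embeddings (commonly 10000). A minimal sketch, with an assumed function name:

```python
import torch

def rope_inv_freq(head_dim: int, rope_base: float = 10000.0) -> torch.Tensor:
    # Frequencies follow rope_base^(-2i/d); exposing rope_base as an
    # interface lets configs trade positional resolution for longer context.
    return 1.0 / (rope_base ** (torch.arange(0, head_dim, 2).float() / head_dim))
```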
jiaopenglong f5aea7e08c
fix(timeout): larger timeout (#495)
* larger initialization timeout

* unify time format

* update timeout thresholds
2023-11-21 19:19:22 +08:00
jiaxingli eba2b859fc
feat(seed): set global seed for every model initialization (#496)
* bind seed (×2)
2023-11-17 14:42:50 +08:00
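A minimal sketch of per-initialization seeding, assuming (this is not necessarily the repo's scheme) that the global seed is offset by parallel rank:

```python
import random

import numpy as np
import torch

def set_global_seed(seed: int, rank: int = 0) -> None:
    # Offsetting by rank keeps RNG streams distinct across workers while
    # staying reproducible for a given (seed, rank) pair.
    s = seed + rank
    random.seed(s)
    np.random.seed(s)
    torch.manual_seed(s)
    torch.cuda.manual_seed_all(s)
```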
Guoteng 0bfc86205e
feat(train): support_rampup_batch_size and fix bugs (#493) 2023-11-16 19:51:01 +08:00
YWMditto be5b9ea2fa
feat(train): update get_train_data_loader to make logic clearer (#498)
* update get_train_data_loader

* update get_train_data_loader, del old doc

---------

Co-authored-by: YWMditto <862779238@qq.com>
2023-11-14 17:05:15 +08:00
jiaopenglong 626ed0fc5e
fix(train): unify the exp paths (#492) 2023-11-11 20:15:59 +08:00
jiaopenglong 3418898cbe
fix(alert): send exception of all ranks (#491)
* catch exception of all ranks

* monitor task only if DO_ALERT is True
2023-11-10 19:04:31 +08:00
Yang Gao 07026d1821
fix dataset types when using random dataset (#489) 2023-11-10 15:08:22 +08:00
Guoteng b7ecdba617
feat(ckpt): save ckpt when reach total step count (#486) 2023-11-09 21:07:16 +08:00
Pryest 5b67db33d0
fix(metric): use float32 to compute ppl (#481) 2023-11-09 20:26:46 +08:00
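The fix's rationale: summing token losses in bf16/fp16 accumulates enough rounding error to visibly skew exp(loss). A sketch of the float32 version (function name assumed):

```python
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Cast to float32 before the reduction so the mean loss, and hence
    # exp(loss), is not distorted by low-precision accumulation.
    loss = F.cross_entropy(logits.float(), targets, reduction="mean")
    return torch.exp(loss)
```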
jiaopenglong a435980e0c
rename vars (#468) 2023-11-09 20:04:35 +08:00
jiaopenglong 0763bf3972
init light monitoring on all ranks (#462) 2023-11-09 20:04:21 +08:00
YWMditto 0218e3131c
feat(tools): support origin internlm architecture in web_demo (#478)
* debug for web_demo_internlm

* support web_demo_internlm

* update readme.md

* update web_demo.py

* update InternLM/tools/load_internlm_model.py

* update apis/inference.py (×2)

* update tools/load_internlm_model

* del private info in load_internlm_model.py

* fix some info (×2)

---------

Co-authored-by: YWMditto <862779238@qq.com>
2023-11-09 20:01:55 +08:00
Yang Gao 6f69bd2087
feat(data): walk folder to get dataset_type_ids_map (#477)
* walk folder to get dataset_type_ids_map

* fix a bug
2023-11-07 19:21:10 +08:00
Yang Gao 4d1b1cd5f1
fix(data): broadcast list when walking folders (#475) 2023-11-07 13:12:35 +08:00
jiaopenglong debb7e77b9
refactor grad norm profiling (#466) 2023-11-03 10:55:26 +08:00
jiaopenglong d537e45456
send exception to light monitor only if the server is available (#465) 2023-11-03 10:55:16 +08:00
Wenwen Qu 21624f6f81
fix(moe): remove norm&gate force sync (#448)
* add zero broadcast_sync

* delete old sync logic

* fix merge error

* refactor code

* remove some unused functions (the is-norm/gate-group checks)
2023-11-01 11:29:55 +08:00
Yang Gao f77f376fd6
fix(os): fix FileNotFoundError in storage_manager (#455)
* use rank0 to makedirs

* use try-except to handle file error

* fix ci
2023-10-27 22:32:46 +08:00
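A sketch of the rank-0 plus try/except pattern the squash notes describe (the helper name is illustrative):

```python
import os

import torch.distributed as dist

def safe_makedirs(path: str) -> None:
    # Only rank 0 creates the directory; exist_ok plus try/except covers
    # races with other jobs sharing the same output root.
    if not dist.is_initialized() or dist.get_rank() == 0:
        try:
            os.makedirs(path, exist_ok=True)
        except FileNotFoundError:
            pass  # e.g. a parent path removed by a concurrent cleanup
    if dist.is_initialized():
        dist.barrier()  # other ranks wait until the directory exists
```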
jiaxingli 4995060d84
feat(storage): support ali oss ckpt saving (#439) 2023-10-27 22:32:10 +08:00
jiaxingli e6d8ebc3e5
volc_path (#454) 2023-10-27 18:53:06 +08:00
jiaopenglong 87a3c5c374
feat(optimizer): zero gradient count (#449)
* add zero grad count

* fix layer norm with pp (×2)

* add zero_grad_profiling option

* fix param_metrics is not a tensor
2023-10-27 16:26:55 +08:00
ytxiong ad70e323eb
fix(optimizer): broadcast (#453)
* fix broadcast synchronize()

* fix synchronize
2023-10-26 17:54:54 +08:00
ytxiong aeee9fd2a9
fix broadcast synchronize() (#450) 2023-10-26 17:33:00 +08:00
ytxiong 1d7e2d04ec
fix(*): all-reduce for norm in sequence parallel (#443)
* fix all-reduce norm grad

* change the order of dp and sp all-reduce

* fix lint
2023-10-25 14:16:32 +08:00
jiaopenglong 949a0a1d55
feat(optimizer): add layer norm to tensorboard (#429)
* add layer norm to tensorboard

* test moe layer norm

* add function: reduce grads
2023-10-23 17:07:04 +08:00
Wenwen Qu 3c992a2101
fix(pipeline): fix interleave type assert and metrics error (#423)
* fix interleave type assert bug

* refactor code for assert

* fix is_no_pp_or_last_stage logic
2023-10-19 17:29:30 +08:00
Wenwen Qu 2c5395fdfd
Doc(moe): add documentation for moe training (#411)
* add doc for moe

* fix moe and zero1 check in args_sanity_check

* restore moe config file
2023-10-19 10:01:12 +08:00
Guoteng 3ea94f2e2a
fix(utils): disable bench_net in gputest.py (#421) 2023-10-19 10:00:57 +08:00
jiaopenglong 4b5bdedff2
feat(monitor): send exception to light monitor (#420)
* send exception to light monitor

* update try_import_send_exception
2023-10-18 21:00:21 +08:00
Wenwen Qu aa5e34d815
compatible with old ckpt (#418) 2023-10-17 17:25:36 +08:00
Wenwen Qu eeef07934a
fix(moe): fix moe compatibility for fsdp and memory profiling (#417)
* fix moe compatibility for fsdp and memory profiling

* update moe config
2023-10-17 14:13:48 +08:00
Guoteng 37e0c86e5a
fix(init): allow resume_tb_folder to be an empty string (#391) 2023-10-13 03:46:14 -05:00
jiaxingli 71a0388b87
feat(storage): support volc oss ckpt saving (#397)
* feat: support volc tos

* feat: support volc oss
2023-10-13 03:44:29 -05:00
huangting4201 9a731b6e9b
fix(optimizer/fsdp_optimizer.py): fsdp process empty params group (#408)
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-10-10 20:06:04 +08:00
Pryest b3645b0244
fix(model): fix errant inference_forward (#396)
* Fix errant inference_forward.

* Recover use_dynamic_ntk_rope.

* Fix bugs.

* Fit to flash attention 1.0 (×2)

* Fit to flash attention 1.0.5. (×2)
2023-10-09 08:29:11 -05:00
zaglc a075153adf
feat(train): add fsdp training option (#293)
* feat(fsdp): add training option for fsdp

* fix(fsdp): add mix-precision training

* fix failure in lint-check

* fix format problem

* restore 7B_sft

* fix load ckpt bug

* fix load ckpt bug2

* feat(solver/optimizer): add new file fsdp_optimizer.py

* fix(train.py): fix ci lint error

* fix(fsdp_optimizer.py): wait grad async

* fix bug for loading ckpts when zero1 < dp_size

* fix(context/parallel_context.py): only log warning for fsdp

* change ckpt name

* fix(model/modeling_internlm.py): fix checkpoint=False runtime error

* more wrap

* add support for FSDP with tp

* modify args_sanity_check for fsdp with pipeline and fsdp with moe

* fix(internlm/utils/parallel.py): fix circular import

* fix(internlm/train/training_internlm.py): remove set IS_TENSOR_PARALLEL attr

* fix(internlm/train/training_internlm.py): update wrap class and fix lint error

* fix(internlm/model): reset dropout_selective_checkpoint=True

* feat(configs/7B_sft.py): move fsdp config to parallel zero1

* feat(configs/7B_sft.py): adapt to old version config

---------

Co-authored-by: huangting4201 <1538303371@qq.com>
2023-10-09 18:59:31 +08:00
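A minimal sketch of the kind of wrapping this PR adds, using PyTorch's FSDP with the mixed-precision policy the bullets mention (dtypes and the lack of a wrap policy here are assumptions; the real options live in the repo's parallel/zero1 config):

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision

def wrap_with_fsdp(model: torch.nn.Module) -> FSDP:
    # bf16 parameters with fp32 gradient reduction mirrors the
    # "add mix-precision training" bullet above.
    mp = MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.float32)
    return FSDP(model, mixed_precision=mp)  # requires an initialized process group
```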
Wenwen Qu 582ee000bd
feat(moe):support zero for expert local dp (#404)
* support zero for expert local dp

* fix the above code:
    * treat optim.zero_world_size and optim.zero_local_rank as lists in model_checkpoint.py and test_model_checkpoint.py
    * add overlap and zero checks for moe in args_sanity_check()
2023-10-09 17:45:26 +08:00
Wenwen Qu 916647c0a1
fix(pipeline): fix bugs for pipeline when enable mixed precision (#382)
* fix bugs for pipeline

* restore logic for empty fp32 group

* move optim.dtype to each param group
2023-10-09 14:01:15 +08:00