lijiaxing
06cdcc3654
upload
2023-11-29 11:08:40 +08:00
lijiaxing
4e4fb52898
multipart upload
2023-11-28 15:37:26 +08:00
Shuo Zhang
0d3811c029
feat(model): add rope_base interface ( #512 )
2023-11-23 16:30:14 +08:00
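A sketch of what a configurable RoPE base looks like in practice; the function name and default below are assumptions for illustration, not the interface added in #512:

```python
import torch

def rope_inv_freq(head_dim: int, rope_base: float = 10000.0) -> torch.Tensor:
    # The base sets the rotary wavelength spectrum; a larger base stretches
    # the positional period, which helps longer contexts.
    return 1.0 / (rope_base ** (torch.arange(0, head_dim, 2).float() / head_dim))
```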
jiaopenglong
f5aea7e08c
fix(timeout): larger timeout ( #495 )
* larger initialize timeout
* unify time format
* update timeout thresholds
2023-11-21 19:19:22 +08:00
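A hedged sketch of what a "larger initialize timeout" usually means in a torch.distributed setup; the 30-minute value is illustrative, not the threshold chosen in #495:

```python
from datetime import timedelta

import torch.distributed as dist

# A larger timeout for process-group initialization keeps slow ranks (e.g.,
# ones busy loading checkpoints) from tripping the collective watchdog.
dist.init_process_group(backend="nccl", timeout=timedelta(minutes=30))
```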
jiaxingli
eba2b859fc
feat(seed): set global seed for every model initialization ( #496 )
* bind seed
* bind seed
2023-11-17 14:42:50 +08:00
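A minimal sketch of the idea in #496 (the helper name is an assumption): one call that seeds every RNG a model initialization can touch, so repeated initializations are reproducible:

```python
import random

import numpy as np
import torch

def set_global_seed(seed: int) -> None:
    # Seed all RNG sources used during model construction.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
```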
Guoteng
0bfc86205e
feat(train): support rampup_batch_size and fix bugs ( #493 )
2023-11-16 19:51:01 +08:00
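A sketch of a batch-size ramp-up schedule in the spirit of #493; the names and the linear shape are assumptions, not the PR's implementation:

```python
def rampup_batch_size(step: int, start: int, target: int, rampup_steps: int) -> int:
    # Grow the global batch size linearly from `start` to `target`, then hold.
    if step >= rampup_steps:
        return target
    return start + (target - start) * step // rampup_steps
```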
YWMditto
be5b9ea2fa
feat(train): update get_train_data_loader to make logic clearer ( #498 )
* update get_train_data_loader
* update get_train_data_loader, del old doc
---------
Co-authored-by: YWMditto <862779238@qq.com>
2023-11-14 17:05:15 +08:00
jiaopenglong
626ed0fc5e
fix(train): unify the exp paths ( #492 )
2023-11-11 20:15:59 +08:00
jiaopenglong
3418898cbe
fix(alert): send exception of all ranks ( #491 )
* catch exception of all ranks
* monitor task only if DO_ALERT is True
2023-11-10 19:04:31 +08:00
Yang Gao
07026d1821
fix dataset types when using random dataset ( #489 )
2023-11-10 15:08:22 +08:00
Guoteng
b7ecdba617
feat(ckpt): save ckpt when reaching total step count ( #486 )
2023-11-09 21:07:16 +08:00
Pryest
5b67db33d0
fix(metric): use float32 to compute ppl ( #481 )
2023-11-09 20:26:46 +08:00
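The fix in #481 guards against reduced-precision loss accumulation; a sketch of computing perplexity in float32:

```python
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Cast logits to float32 first: exponentiating a float16 mean loss can
    # overflow or round badly and distort the reported ppl.
    loss = F.cross_entropy(logits.float(), labels)
    return torch.exp(loss)
```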
jiaopenglong
a435980e0c
rename vars ( #468 )
2023-11-09 20:04:35 +08:00
jiaopenglong
0763bf3972
init light monitoring on all ranks ( #462 )
2023-11-09 20:04:21 +08:00
YWMditto
0218e3131c
feat(tools): support origin internlm architecture in web_demo ( #478 )
* debug for web_demo_internlm
* support web_demo_internlm
* update readme.md
* update web_demo.py
* update InternLM/tools/load_internlm_model.py
* update apis/inference.py
* update apis/inference.py
* update tools/load_internlm_model
* del private info in load_internlm_model.py
* fix some info
* fix some info
---------
Co-authored-by: YWMditto <862779238@qq.com>
2023-11-09 20:01:55 +08:00
Yang Gao
6f69bd2087
feat(data): walk folder to get dataset_type_ids_map ( #477 )
* walk folder to get dataset_type_ids_map
* fix a bug
2023-11-07 19:21:10 +08:00
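A sketch of deriving dataset_type_ids_map by scanning a data root, as in #477; the exact mapping rules in the PR may differ:

```python
import os

def get_dataset_type_ids_map(root: str) -> dict:
    # Give each dataset subfolder a stable integer id, sorted for determinism.
    names = sorted(
        entry for entry in os.listdir(root)
        if os.path.isdir(os.path.join(root, entry))
    )
    return {name: idx for idx, name in enumerate(names)}
```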
Yang Gao
4d1b1cd5f1
fix(data): broadcast list when walking folders ( #475 )
2023-11-07 13:12:35 +08:00
jiaopenglong
debb7e77b9
refactor grad norm profiling ( #466 )
2023-11-03 10:55:26 +08:00
jiaopenglong
d537e45456
send exception to light monitor only if the server is available ( #465 )
2023-11-03 10:55:16 +08:00
Wenwen Qu
21624f6f81
fix(moe): remove norm&gate force sync ( #448 )
* add zero broadcast_sync
* delete old sync logic
* fix merged error
* refactor code
* remove some unused functions (the norm/gate group checks)
2023-11-01 11:29:55 +08:00
Yang Gao
f77f376fd6
fix(os): fix FileNotFoundError in storage_manager ( #455 )
* use rank0 to makedirs
* use try-except to handle file error
* fix ci
2023-10-27 22:32:46 +08:00
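The two bullets in #455 combine into a pattern like this sketch (the helper name is assumed): rank 0 creates the directory, and other ranks wait at a barrier instead of racing it:

```python
import os

import torch.distributed as dist

def rank0_makedirs(path: str) -> None:
    # Only rank 0 creates the directory; exist_ok tolerates repeats, and a
    # barrier keeps other ranks from touching the path before it exists.
    if not dist.is_initialized() or dist.get_rank() == 0:
        os.makedirs(path, exist_ok=True)
    if dist.is_initialized():
        dist.barrier()
```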
jiaxingli
4995060d84
feat(storage): support ali oss ckpt saving ( #439 )
2023-10-27 22:32:10 +08:00
jiaxingli
e6d8ebc3e5
volc_path ( #454 )
2023-10-27 18:53:06 +08:00
jiaopenglong
87a3c5c374
feat(optimizer): zero gradient count ( #449 )
* add zero grad count
* fix layer norm with pp
* fix layer norm with pp
* add zero_grad_profiling option
* fix param_metrics is not a tensor
2023-10-27 16:26:55 +08:00
ytxiong
ad70e323eb
fix(optimizer): broadcast ( #453 )
* fix broadcast synchronize()
* fix synchronize
2023-10-26 17:54:54 +08:00
ytxiong
aeee9fd2a9
fix broadcast synchronize() ( #450 )
2023-10-26 17:33:00 +08:00
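#450 and #453 both touch the classic async-broadcast pitfall; a sketch of the correct pattern (the helper name is an assumption):

```python
import torch.distributed as dist

def broadcast_params(params, src: int = 0) -> None:
    # Kick off all broadcasts asynchronously, then wait on every handle
    # before the parameters are read; skipping the wait is the usual bug.
    handles = [dist.broadcast(p.data, src=src, async_op=True) for p in params]
    for handle in handles:
        handle.wait()
```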
ytxiong
1d7e2d04ec
fix(*)/all-reduce for norm in sequence parallel ( #443 )
* fix all-reduce norm grad
* change the order of dp and sp all-reduce
* fix lint
2023-10-25 14:16:32 +08:00
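A sketch of the shape of the #443 fix: under sequence parallelism each rank sees a different token shard, so norm-layer gradients need an extra all-reduce over the SP group, and its ordering relative to the DP reduction matters:

```python
import torch.distributed as dist

def allreduce_norm_grads(model, sp_group) -> None:
    # Sum norm-layer gradients across the sequence-parallel group; keep a
    # consistent order with the data-parallel gradient reduction.
    for name, param in model.named_parameters():
        if "norm" in name and param.grad is not None:
            dist.all_reduce(param.grad, group=sp_group)
```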
jiaopenglong
949a0a1d55
feat(optimizer): add layer norm to tensorboard ( #429 )
* add layer norm to tensorboard
* test moe layer norm
* add function: reduce grads
2023-10-23 17:07:04 +08:00
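A sketch of what "add layer norm to tensorboard" (#429) plausibly records; the tag layout and helper name are assumptions:

```python
from torch.utils.tensorboard import SummaryWriter

def log_norm_grads(writer: SummaryWriter, model, step: int) -> None:
    # Track per-layer norm gradient magnitudes; spikes here often precede
    # loss instabilities.
    for name, param in model.named_parameters():
        if "norm" in name and param.grad is not None:
            writer.add_scalar(f"grad_norm/{name}", param.grad.norm().item(), step)
```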
Wenwen Qu
3c992a2101
fix(pipeline): fix interleave type assert and metrics error ( #423 )
* fix interleave type assert bug
* refactor code for assert
* fix is_no_pp_or_last_stage logic
2023-10-19 17:29:30 +08:00
Wenwen Qu
2c5395fdfd
Doc(moe): add documentation for moe training ( #411 )
* add doc for moe
* fix moe and zero1 check in args_sanity_check
* restore moe config file
2023-10-19 10:01:12 +08:00
Guoteng
3ea94f2e2a
fix(utils): disable bench_net in gputest.py ( #421 )
2023-10-19 10:00:57 +08:00
jiaopenglong
4b5bdedff2
feat(monitor): send exception to light monitor ( #420 )
* send exception to light monitor
* update try_import_send_exception
2023-10-18 21:00:21 +08:00
Wenwen Qu
aa5e34d815
compatible with old ckpt ( #418 )
2023-10-17 17:25:36 +08:00
Wenwen Qu
eeef07934a
fix(moe): fix moe compatibility for fsdp and memory profiling ( #417 )
* fix moe compatibility for fsdp and memory profiling
* update moe config
2023-10-17 14:13:48 +08:00
Guoteng
37e0c86e5a
fix(init): allow resume_tb_folder to be an empty string ( #391 )
2023-10-13 03:46:14 -05:00
jiaxingli
71a0388b87
feat(storage): support volc oss ckpt saving ( #397 )
* feat: support volc tos
* feat: support volc oss
2023-10-13 03:44:29 -05:00
huangting4201
9a731b6e9b
fix(optimizer/fsdp_optimizer.py): fsdp process empty params group ( #408 )
Co-authored-by: huangting4201 <huangting3@sensetime.com>
2023-10-10 20:06:04 +08:00
Pryest
b3645b0244
fix(model): fix errant inference_forward ( #396 )
* Fix errant inference_forward.
* Recover use_dynamic_ntk_rope.
* Fix bugs.
* Fit to flash attention 1.0
* Fit to flash attention 1.0
* Fit to flash attention 1.0.5.
* Fit to flash attention 1.0.5.
2023-10-09 08:29:11 -05:00
zaglc
a075153adf
feat(train): add fsdp training option ( #293 )
* feat(fsdp): add training option for fsdp
* fix(fsdp): add mix-precision training
* fix failure in lint-check
* fix format problem
* restore 7B_sft
* fix load ckpt bug
* fix load ckpt bug2
* feat(solver/optimizer): add new file fsdp_optimizer.py
* fix(train.py): fix ci lint error
* fix(fsdp_optimizer.py): wait grad async
* fix bug for loading ckpts when zero1 < dp_size
* fix(context/parallel_context.py): only log warning for fsdp
* change ckpt name
* fix(model/modeling_internlm.py): fix checkpoint=False runtime error
* more wrap
* add support for FSDP with tp
* modify args_sanity_check for fsdp with pipeline and fsdp with moe
* fix(internlm/utils/parallel.py): fix circular import
* fix(internlm/train/training_internlm.py): remove set IS_TENSOR_PARALLEL attr
* fix(internlm/train/training_internlm.py): update wrap class and fix lint error
* fix(internlm/model): reset dropout_selective_checkpoint=True
* feat(configs/7B_sft.py): move fsdp config to parallel zero1
* feat(configs/7B_sft.py): adapt to old version config
---------
Co-authored-by: huangting4201 <1538303371@qq.com>
2023-10-09 18:59:31 +08:00
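#293 threads an FSDP option through training; a minimal sketch of FSDP wrapping with bf16 mixed precision (the real wrap policy and config keys live in the PR's 7B_sft.py changes):

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision

def wrap_fsdp(model: torch.nn.Module) -> FSDP:
    # Shard parameters, gradients, and optimizer state across ranks while
    # keeping compute and communication in bfloat16.
    mp = MixedPrecision(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        buffer_dtype=torch.bfloat16,
    )
    return FSDP(model, mixed_precision=mp)
```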
Wenwen Qu
582ee000bd
feat(moe): support zero for expert local dp ( #404 )
* support zero for expert local dp
* fix the above code:
  * treat optim.zero_world_size and optim.zero_local_rank as lists in model_checkpoint.py and test_model_checkpoint.py
  * add overlap and zero checks for moe in args_sanity_check()
2023-10-09 17:45:26 +08:00
Wenwen Qu
916647c0a1
fix(pipeline): fix bugs for pipeline when enable mixed precision ( #382 )
...
* fix bugs for pipeline
* restore logic for empty fp32 group
* move optim.dtype to each param group
2023-10-09 14:01:15 +08:00
ytxiong
9aef11e89c
make seeds in different tensor ranks different ( #405 )
2023-10-09 13:53:52 +08:00
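#405 in one line, sketched (the offset scheme is an assumption): derive each tensor-parallel rank's seed from the base seed, so dropout masks differ across ranks while runs stay deterministic:

```python
import torch

def set_tp_seed(base_seed: int, tp_rank: int) -> None:
    # Different tensor-parallel ranks get different, but reproducible, seeds.
    torch.manual_seed(base_seed + tp_rank)
    torch.cuda.manual_seed(base_seed + tp_rank)
```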
Wenwen Qu
375240e039
feat(moe): add local data parallel support for experts ( #376 )
* add local data parallel support for experts
* fix model checkpoint for local dp mode of expert
* do not set ep size from config
2023-09-28 13:38:02 +08:00
Ryan (张磊)
c8242572f2
fix the moe loss being None for panel_metrics ( #379 )
2023-09-27 20:29:50 +08:00
Wenwen Qu
136d55ec30
feat(moe): add moe module ( #182 )
* feat(XXX): add moe
* reformat code
* modified: .pre-commit-config.yaml
modified: internlm/model/moe.py
modified: internlm/model/modeling_internlm.py
* modified: internlm/model/modeling_internlm.py
* modified: internlm/core/context/process_group_initializer.py
modified: internlm/core/scheduler/no_pipeline_scheduler.py
modified: internlm/solver/optimizer/hybrid_zero_optim.py
* modified: internlm/model/moe.py
modified: internlm/moe/sharded_moe.py
modified: internlm/utils/parallel.py
* rollback .pre-commit-config.yaml
* add residual and other moe features
* modify grad clipping due to moe
* add param arguments
* reformat code
* add expert data support and fix bugs
* Update .pre-commit-config.yaml
* modified: internlm/model/modeling_internlm.py
* add no-interleaved & no-overlapped moe pp support
* support zero_overlap_communication
* avoid moe parameter partition in zero optimizer
* fix the moe_loss_coeff bug
* support interleaved pp
* fix moe bugs in zero optimizer
* fix more moe bugs in zero optimizer
* fix moe bugs in zero optimizer
* add logger for moe_loss
* fix bugs with merge
* fix the pp moe bugs
* fix bug on logger
* update moe training cfg on real-dataset
* refactor code
* refactor code
* fix bugs with compute moe norm
* optimize code with moe norm computing
* fix the bug of missing scaling on the latent moe loss
* refactor code
* fix moe loss logger for the interleaved pp
* change the scale position for latent moe_loss
* Update 7B_sft.py
* add support for moe checkpoint
* add comments for moe
* reformat code
* fix bugs
* fix bugs
* Update .pre-commit-config.yaml
* remove moe_loss_coeff parameter passing
* fix group_norms computing in hybrid_zero_optim
* use dummy mode to generate random numbers in model construction
* replace flashatten experts by feedforward experts
* fix bugs with _compute_norm_with_moe_group
* merge upstream/develop into feature_add_moe
* merge upstream/develop into feature_add_moe
* change float16 to bfloat16
* fix interface for dense pipeline
* refactor split_moe_group code
* fix precision inconsistency
* refactor code
* Update 7B_sft.py
* refactor code
* refactor code
* refactor code
* refactor code
* refactor code for split group
* refactor code for log
* fix logger for moe
* refactor code for split param group
* fix the moe_loss for ci and val
* refactor
* fix bugs with split group
* fix bugs in save/load moe checkpoint
* add moe module to `__init__.py`
* add compatible code for old version
* update moe config file
* modify moe config file
* fix merge bugs
* update moe config file
* change condition for compatibility
---------
Co-authored-by: zhanglei <ryancheung98@163.com>
Co-authored-by: Ryan (张磊) <leizhang.real@gmail.com>
2023-09-27 15:54:53 +08:00
Wenwen Qu
655e9dae40
Feat(norm)/support fused precision ( #319 )
* add fused precision support for norm
* refactor code
* refactor code
* change the granularity of hook
* fix bugs if self.model is ModuleList
* add dtype condition for post hook
* refactor code for split group
* refactor code for pre/post hook
* refactor code for split group
* remove fp32 hook for norm
* unit tests for fused precision
* add doc for fused precision
* add doc for En. version
* reformat docs
* Update mixed_precision.rst
* Update mixed_precision.po
* update mixed_precision.po
2023-09-26 20:39:55 +08:00
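A sketch of the fused-precision idea in #319: run norm layers in float32 inside a bf16 model by casting at module boundaries with hooks (the PR's hook granularity and dtype handling differ in detail):

```python
import torch

def run_norms_in_fp32(model: torch.nn.Module) -> None:
    def pre_hook(module, args):
        # Cast norm inputs up to float32 for a numerically stable reduction.
        return tuple(a.float() if torch.is_tensor(a) else a for a in args)

    def post_hook(module, args, output):
        # Cast back down so downstream layers keep their bf16 compute.
        return output.to(torch.bfloat16)

    for m in model.modules():
        if isinstance(m, torch.nn.LayerNorm):
            m.float()  # keep the norm's own weights in float32 too
            m.register_forward_pre_hook(pre_hook)
            m.register_forward_hook(post_hook)
```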
jiaxingli
c1e30cff2c
feat(numa): bind numa if possible ( #320 )
* feat:add numa
* feat:add bind numa
* feat:add bind numa
* feat:add bind numa
* feat: bind numa
* feat: bind numa
* feat: add numa
* feat:add numa
* feat:add numa
* try_bind_numa should not raise exception
---------
Co-authored-by: 877825076@qq.com <877825076@qq.com>
2023-09-25 19:34:52 +08:00
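A best-effort sketch of NUMA binding matching #320's last bullet (the node count and CPU layout are assumptions): pin the process to one NUMA node's CPUs by local GPU rank, and never let a failure propagate:

```python
import os

def try_bind_numa(local_rank: int, numa_nodes: int = 2) -> None:
    # Binding is an optimization, so any failure is swallowed deliberately.
    try:
        cpus = sorted(os.sched_getaffinity(0))
        per_node = len(cpus) // numa_nodes
        node = local_rank % numa_nodes
        os.sched_setaffinity(0, cpus[node * per_node:(node + 1) * per_node])
    except Exception:
        pass
```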
jiaopenglong
9284303a6d
doc(monitor): add light monitoring doc ( #352 )
* add light monitoring doc
* update light monitoring doc
* update light monitoring doc
* update light monitoring doc
* update light monitoring doc continue
* update light monitoring doc continue
* update monitor config doc
* update monitor config doc continue
* update monitor config doc continue
2023-09-25 19:28:09 +08:00
jiaopenglong
847cc819dd
fix(monitor): add volc and aliyun jobid ( #338 )
* add volc and aliyun jobid
* rm workspaceid
2023-09-25 17:58:32 +08:00
jiaopenglong
064965527b
fix(config): monitor config key error when args_check is False ( #362 )
* add monitor switch
* add switch to light monitor
* fix the case where alert_address is empty
* fix light monitor heartbeat
* init light_monitor on rank_log only
* add comments to the monitoring config
* optimize config
* fix monitor config key error when args_check is False
2023-09-25 17:30:36 +08:00
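The final bullet's fix plausibly reduces to defensive config access; a sketch with assumed key and helper names:

```python
def get_alert_address(config: dict):
    # When args_check is False the monitor block may never be validated or
    # filled in, so fall back to None instead of raising a KeyError.
    return config.get("monitor", {}).get("alert", {}).get("alert_address")
```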