Commit Graph

4 Commits (950f2de833e9fae1a1629d0b8968c3e699cf4534)

ytxiong 1d7e2d04ec
fix(*)/all-reduce for norm in sequence parallel (#443)
* fix all-reduce norm grad

* change the order of dp and sp all-reduce

* fix lint
2023-10-25 14:16:32 +08:00
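The fix above concerns norm layers whose weights are replicated across the sequence-parallel group: each rank sees only its slice of the sequence, so norm gradients are partial sums until they are all-reduced over that group, and this reduction must be ordered correctly against the data-parallel reduction (the second bullet). A minimal sketch of the idea, assuming PyTorch's `torch.distributed` API; the group handles and the name-based norm check are illustrative, not InternLM's actual code:

```python
# Illustrative sketch only: sp_group/dp_group handles and the name-based
# norm check are assumptions, not InternLM's actual code.
import torch.distributed as dist

def reduce_grads(model, sp_group, dp_group):
    for name, param in model.named_parameters():
        if param.grad is None:
            continue
        if "norm" in name:
            # Norm weights are replicated across the sequence-parallel group,
            # but each rank's gradient covers only its sequence slice: sum
            # the partial gradients over the SP group first (the commit
            # reorders this to run before the DP reduction).
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM, group=sp_group)
        # Then average over the data-parallel group, as for every parameter.
        dist.all_reduce(param.grad, op=dist.ReduceOp.SUM, group=dp_group)
        param.grad.div_(dist.get_world_size(dp_group))
```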
zaglc a075153adf
feat(train): add fsdp training option (#293)
* feat(fsdp): add training option for fsdp

* fix(fsdp): add mixed-precision training

* fix failure in lint-check

* fix format problem

* restore 7B_sft

* fix load ckpt bug

* fix load ckpt bug2

* feat(solver/optimizer): add new file fsdp_optimizer.py

* fix(train.py): fix ci lint error

* fix(fsdp_optimizer.py): wait grad async

* fix bug for loading ckpts when zero1 < dp_size

* fix(context/parallel_context.py): only log warning for fsdp

* change ckpt name

* fix(model/modeling_internlm.py): fix checkpoint=False runtime error

* more wrap

* add support for FSDP with tp

* modify args_sanity_check for fsdp with pipeline and fsdp with moe

* fix(internlm/utils/parallel.py): fix circular import

* fix(internlm/train/training_internlm.py): remove set IS_TENSOR_PARALLEL attr

* fix(internlm/train/training_internlm.py): update wrap class and fix lint error

* fix(internlm/model): reset dropout_selective_checkpoint=True

* feat(configs/7B_sft.py): move fsdp config to parallel zero1

* feat(configs/7B_sft.py): adapt to old version config

---------

Co-authored-by: huangting4201 <1538303371@qq.com>
2023-10-09 18:59:31 +08:00
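A minimal sketch of what the FSDP option above enables, assuming PyTorch's `torch.distributed.fsdp` API: per-transformer-block wrapping (the "more wrap" bullet) plus bf16 mixed precision (the "add mixed-precision training" bullet). The block class passed in and the dtype choices are assumptions; the PR's actual wrap class and optimizer handling live in `fsdp_optimizer.py` and `training_internlm.py`.

```python
# Illustrative sketch: FSDP wrapping with bf16 mixed precision. block_cls and
# the dtype choices are assumptions, not InternLM's exact configuration.
import functools
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy

def wrap_with_fsdp(model, block_cls):
    # Shard at the granularity of one transformer block, so only one block's
    # full parameters are materialized at a time.
    wrap_policy = functools.partial(
        transformer_auto_wrap_policy, transformer_layer_cls={block_cls}
    )
    # Compute and communicate in bf16 while FSDP keeps fp32 master shards.
    mp = MixedPrecision(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        buffer_dtype=torch.bfloat16,
    )
    return FSDP(model, auto_wrap_policy=wrap_policy, mixed_precision=mp)
```

Wrapping at block granularity bounds peak memory because each all-gather covers a single block's parameters rather than the whole model.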
Sun Peng 860de0aa46
Feat/add runtime gpu test (#254)
* feat: add gpu bench

* feat/add allreduce runtime bench

---------

Co-authored-by: sunpengsdu <sunpengsdu@gmail.com>
2023-09-01 13:38:01 +08:00
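A sketch of the kind of all-reduce runtime bench this commit adds, assuming NCCL via `torch.distributed`; the tensor size, iteration count, and reported metric are arbitrary choices, not the PR's values:

```python
# Illustrative all-reduce runtime bench; numel and iters are arbitrary.
import time
import torch
import torch.distributed as dist

def bench_allreduce(numel=64 * 1024 * 1024, iters=10):
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    buf = torch.ones(numel, device="cuda")
    dist.all_reduce(buf)  # warm-up, so NCCL init is excluded from timing
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(buf)
    torch.cuda.synchronize()
    ms = (time.perf_counter() - start) / iters * 1000
    if dist.get_rank() == 0:
        gb = buf.numel() * buf.element_size() / 1e9
        print(f"all_reduce of {gb:.2f} GB: {ms:.2f} ms/iter")

if __name__ == "__main__":
    bench_allreduce()
```

Launched with, for example, `torchrun --nproc_per_node=8 bench_allreduce.py`.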
Sun Peng fa7337b37b
initial commit
2023-07-06 12:55:23 +08:00