Commit Graph

8 Commits (0e6b1f856cbc71b4926d975a55b0ae3e80a1d46d)

Author SHA1 Message Date
Wenwen Qu 0e6b1f856c add support for moe checkpoint 2023-08-24 17:01:14 +08:00
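
The commit above only names the feature, but the idea behind an MoE-aware checkpoint is worth a brief sketch: with expert parallelism, each rank owns a different slice of the experts, so expert weights have to be saved per rank rather than once. The following is a minimal, hypothetical sketch in PyTorch; the `.experts.` key filter and the file names are assumptions, not the repository's actual layout.

```python
# Hypothetical sketch (not the actual InternLM implementation): save the shared
# (non-expert) weights once from rank 0, and the locally owned expert weights
# separately on every expert-parallel rank.
import torch
import torch.distributed as dist

def save_moe_checkpoint(model, folder):
    rank = dist.get_rank()
    state = model.state_dict()
    # Assumed naming convention: expert parameters contain ".experts." in their key.
    expert_state = {k: v for k, v in state.items() if ".experts." in k}
    shared_state = {k: v for k, v in state.items() if ".experts." not in k}
    if rank == 0:
        torch.save(shared_state, f"{folder}/model_shared.pt")
    torch.save(expert_state, f"{folder}/model_expert_rank{rank}.pt")
```
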
Sun Peng 32664328e7 Feat/overlap_bcast_forward (#218)
* feat/support bcast forward overlap

* feat/optimize the bcast call

* feat/optimize the bcast call

* feat/optimize the bcast call

* fix lint

* fix lint

* fix lint

* fix lint

* add torch.cuda.synchronize in save_checkpoint

---------

Co-authored-by: sunpeng <sunpengsdu@gmail.com>
2023-08-23 16:59:59 +08:00
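
Two mechanisms are named in the squashed commits above: overlapping a parameter broadcast with the forward pass, and calling torch.cuda.synchronize before saving a checkpoint. A minimal, hypothetical sketch of both follows; the layer-wise prefetch scheme is an assumption about how such an overlap might look, not the code this PR adds.

```python
# Hypothetical sketch, not the PR's implementation: issue the broadcast for the
# next layer's parameters asynchronously, then compute the current layer so the
# communication and the forward math overlap.
import torch
import torch.distributed as dist

def forward_with_overlapped_bcast(layers, x, src_rank=0):
    handles = [dist.broadcast(p, src=src_rank, async_op=True)
               for p in layers[0].parameters()]
    for i, layer in enumerate(layers):
        for h in handles:          # wait until this layer's weights have arrived
            h.wait()
        if i + 1 < len(layers):    # prefetch the next layer while we compute
            handles = [dist.broadcast(p, src=src_rank, async_op=True)
                       for p in layers[i + 1].parameters()]
        x = layer(x)
    return x

def save_checkpoint(state, path):
    # "add torch.cuda.synchronize in save_checkpoint": flush all pending CUDA
    # work so the tensors written to disk reflect the completed step.
    torch.cuda.synchronize()
    torch.save(state, path)
```
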
Guoteng 29779c75f0 feat(ckpt): add auto ckpt load and signal quit (#216)
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-23 14:17:45 +08:00
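
The commit title above bundles two behaviours: resuming automatically from the newest checkpoint on start-up, and quitting cleanly once the job receives a termination signal. A minimal sketch of that pattern follows; the file naming, state-dict layout, and function names are assumptions for illustration only.

```python
# Hypothetical sketch (not the PR's code): auto-resume from the latest local
# checkpoint, and trap SIGTERM so a preempted job saves once more and exits.
import glob
import os
import signal
import sys

import torch

_quit = False

def _request_quit(signum, frame):
    global _quit
    _quit = True

signal.signal(signal.SIGTERM, _request_quit)

def latest_checkpoint(folder):
    ckpts = glob.glob(os.path.join(folder, "step_*.pt"))
    return max(ckpts, key=os.path.getmtime) if ckpts else None

def train(model, optimizer, folder, max_steps):
    start_step = 0
    ckpt = latest_checkpoint(folder)                 # auto ckpt load on start-up
    if ckpt is not None:
        state = torch.load(ckpt)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start_step = state["step"] + 1
    for step in range(start_step, max_steps):
        ...                                          # one forward/backward/step
        if _quit:                                    # signal quit: save, then exit
            torch.save({"model": model.state_dict(),
                        "optimizer": optimizer.state_dict(),
                        "step": step},
                       os.path.join(folder, f"step_{step}.pt"))
            sys.exit(0)
```
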
Sun Peng 5f3133fac8 Revert "feat(ckpt): add auto ckpt load and signal quit (#189)" (#192)
This reverts commit a45a91bb84.
2023-08-11 17:12:26 +08:00
Guoteng a45a91bb84 feat(ckpt): add auto ckpt load and signal quit (#189)
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-11 17:08:01 +08:00
Guoteng 29d27a6227 feat(ckpt): add async upload and ckpt snapshot (#161)
* use fp16 in instruction (#80)

* delete torch_dtype of README's example code (#100)

* feat(ckpt): support async ckpt upload and ckpt snapshot

---------

Co-authored-by: WRH <12756472+wangruohui@users.noreply.github.com>
Co-authored-by: x54-729 <45304952+x54-729@users.noreply.github.com>
Co-authored-by: wangguoteng.p <wangguoteng925@qq.com>
2023-08-08 13:08:36 +08:00
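
The idea named above — a frequent local checkpoint "snapshot" plus an upload to slower remote storage that does not block training — can be sketched as follows. The two-slot layout and the thread-based upload are assumptions for illustration, not the PR's implementation.

```python
# Hypothetical sketch: write a cheap local snapshot synchronously, then hand the
# upload off to a background thread so the training loop keeps running.
import shutil
import threading

import torch

def save_snapshot(state, folder, counter):
    # Alternate between two slots so a crash mid-write never corrupts both copies.
    path = f"{folder}/snapshot_{counter % 2}.pt"
    torch.save(state, path)
    return path

def upload_async(local_path, remote_path):
    # Stand-in for a real object-storage upload (e.g. a boto3/OSS put call);
    # running it in a daemon thread keeps checkpointing from stalling training.
    t = threading.Thread(target=shutil.copy, args=(local_path, remote_path), daemon=True)
    t.start()
    return t  # callers can join() before the next upload if ordering matters
```
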
huangting4201 762ab297ee feat(core/scheduler): support pipeline parallel (#98)
* feat(utils/writer.py): support tensorboard writer

* feat(utils/writer.py): add class comment

* feat(core): support pipeline parallel

* fix(core): fix demo running error

* feat(solver/optimizer): add pp zero optimizer

* fix(solver/optimizer): fix word spelling error

* feat(core/scheduler): add new dir scheduler in core/

* fix(core): fix ci lint error

* feat(solver/optimizer): merge pp and nopp optimizer

* doc(usage.md): update usage doc

* feat(core/scheduler): support post func

* feat(core/scheduler): add dtype para in pp sche and update func get_tensor_shape

* feat(core/scheduler): add _load_micro_batch in base scheduler

* feat(core/scheduler): support optimizer overlap communication in pp scheduler

* feat(core/scheduler): delete data process func code

* feat(core/trainer): schedule pre processing for all schedule

---------

Co-authored-by: 黄婷 <huangting3@CN0014010744M.local>
Co-authored-by: huangting.p <huangting@sensetime.com>
2023-07-24 20:52:09 +08:00
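
The scheduler work above revolves around micro-batching (see the `_load_micro_batch` commit in its body). The core of that idea — slicing the global batch into micro-batches and accumulating gradients before one optimizer step — is sketched below with assumed argument names; a real pipeline stage additionally interleaves these micro-batches with point-to-point sends and receives between stages.

```python
# Hypothetical sketch of the micro-batching behind a pipeline scheduler,
# not the scheduler this PR adds.
import torch

def train_step(model, optimizer, loss_fn, data, label, num_micro_batches):
    micro_bsz = data.size(0) // num_micro_batches
    optimizer.zero_grad()
    for i in range(num_micro_batches):
        # Counterpart of a scheduler's _load_micro_batch: slice out one chunk.
        start, end = i * micro_bsz, (i + 1) * micro_bsz
        out = model(data[start:end])
        loss = loss_fn(out, label[start:end]) / num_micro_batches
        loss.backward()   # gradients accumulate across micro-batches
    optimizer.step()
```
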
Sun Peng fa7337b37b initial commit 2023-07-06 12:55:23 +08:00