ColossalAI/colossalai/nn
ver217 96780e6ee4
Optimize pipeline schedule (#94)
* add pipeline shared module wrapper and update load batch
* added model parallel process group for amp and clip grad (#86)
  * added model parallel process group for amp and clip grad
  * update amp and clip with model parallel process group
* remove pipeline_prev/next group (#88)
* micro batch offload
* optimize pipeline gpu memory usage
* pipeline can receive tensor shape (#93)
  * optimize pipeline gpu memory usage
* fix grad accumulation step counter
* rename classes and functions

Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2021-12-30 15:56:46 +08:00
layer           Optimize pipeline schedule (#94)    2021-12-30 15:56:46 +08:00
loss            Hotfix/Colossalai layers (#92)      2021-12-29 23:32:10 +08:00
lr_scheduler    Develop/experiments (#59)           2021-12-09 15:08:29 +08:00
metric          Hotfix/Colossalai layers (#92)      2021-12-29 23:32:10 +08:00
model           Develop/experiments (#59)           2021-12-09 15:08:29 +08:00
optimizer       Develop/experiments (#59)           2021-12-09 15:08:29 +08:00
__init__.py     Layer integration (#83)             2021-12-27 15:04:32 +08:00
init.py         Layer integration (#83)             2021-12-27 15:04:32 +08:00