ColossalAI (mirror of https://github.com/hpcaitech/ColossalAI)
colossalai/nn/layer/wrapper/__init__.py at commit b29e1f0722 (4 lines, 101 B, Python)
2021-12-30 07:56:46 +00:00  Optimize pipeline schedule (#94)
* add pipeline shared module wrapper and update load batch
* added model parallel process group for amp and clip grad (#86)
* update amp and clip with model parallel process group
* remove pipeline_prev/next group (#88)
* micro batch offload
* optimize pipeline gpu memory usage
* pipeline can receive tensor shape (#93)
* fix grad accumulation step counter
* rename classes and functions
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
from .pipeline_wrapper import PipelineSharedModuleWrapper
2021-10-28 16:21:23 +00:00  Migrated project
2022-06-10 03:27:38 +00:00  [pipeline] refactor the pipeline module (#1087)
* [pipeline] refactor the pipeline module
* polish code
__all__ = ['PipelineSharedModuleWrapper']
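
For orientation, here is a minimal sketch of how the exported PipelineSharedModuleWrapper is typically used: tying one module (a word embedding, say) between the first and last pipeline stages so that its gradients stay synchronized across the ranks that hold a copy. The constructor argument (a list of pipeline ranks) and the register_module() call follow ColossalAI's pipeline examples from this era; treat them as assumptions, since the exact API may differ between versions, and the embedding sizes are placeholders.

import torch.nn as nn
from colossalai.context import ParallelMode
from colossalai.core import global_context as gpc
from colossalai.nn.layer.wrapper import PipelineSharedModuleWrapper

# Assumes colossalai.launch(...) has already set up a pipeline-parallel
# context on every rank. Share a module between the first and last stages.
num_stages = gpc.get_world_size(ParallelMode.PIPELINE)
wrapper = PipelineSharedModuleWrapper([0, num_stages - 1])

# Placeholder embedding; registering it lets the wrapper all-reduce its
# gradients across the listed pipeline ranks during backward (assumed API).
embedding = nn.Embedding(50304, 1024)
wrapper.register_module(embedding)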