github / ColossalAI (mirror of https://github.com/hpcaitech/ColossalAI)
ColossalAI / colossalai / legacy / nn / layer / wrapper / __init__.py
At commit 8d56c9c389 · 4 lines · 101 B · Python
Optimize pipeline schedule (#94)
* add pipeline shared module wrapper and update load batch
* added model parallel process group for amp and clip grad (#86)
* added model parallel process group for amp and clip grad
* update amp and clip with model parallel process group
* remove pipeline_prev/next group (#88)
* micro batch offload
* optimize pipeline gpu memory usage
* pipeline can receive tensor shape (#93)
* optimize pipeline gpu memory usage
* fix grad accumulation step counter
* rename classes and functions
Co-authored-by: Frank Lee <somerlee.9@gmail.com>
2021-12-30 07:56:46 +00:00
from .pipeline_wrapper import PipelineSharedModuleWrapper
Migrated project
2021-10-28 16:21:23 +00:00
[misc] update pre-commit and run all files (#4752)
* [misc] update pre-commit
* [misc] run pre-commit
* [misc] remove useless configuration files
* [misc] ignore cuda for clang-format
2023-09-19 06:20:26 +00:00
__all__ = ["PipelineSharedModuleWrapper"]
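
For context, here is a minimal usage sketch of the exported class. It assumes the legacy ColossalAI API in which PipelineSharedModuleWrapper is constructed with the list of pipeline ranks that share parameters, and register_module marks a stage-local module as shared; the model sizes, rank layout, and build_stage helper are illustrative assumptions, not code from this repository.

# Hypothetical sketch: tie the input embedding on the first pipeline stage
# to the output head on the last stage. Assumes a distributed environment
# has already been initialized (e.g. via colossalai.launch).
import torch.nn as nn

from colossalai.legacy.nn.layer.wrapper import PipelineSharedModuleWrapper

PIPELINE_SIZE = 4  # illustrative number of pipeline stages

# Every participating process builds the wrapper with the same list of
# pipeline ranks whose modules should share (and synchronize) parameters.
wrapper = PipelineSharedModuleWrapper([0, PIPELINE_SIZE - 1])

def build_stage(stage_rank: int) -> nn.Module:
    """Build only the layers owned by this pipeline stage (illustrative)."""
    if stage_rank == 0:
        embed = nn.Embedding(50257, 768)   # first stage owns the embedding
        wrapper.register_module(embed)     # mark its parameters as shared
        return embed
    if stage_rank == PIPELINE_SIZE - 1:
        head = nn.Linear(768, 50257)       # last stage owns the LM head
        wrapper.register_module(head)      # grads are all-reduced with rank 0
        return head
    return nn.TransformerEncoderLayer(d_model=768, nhead=12)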