Pipeline Parallelism Demo with GPT2
Requirements
Before you can launch training, install the following requirements.
Install PyTorch
# conda
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch
# pip
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113
Install Colossal-AI v0.2.0 from the Official Website
pip install colossalai==0.2.0+torch1.12cu11.3 -f https://release.colossalai.org
Install transformers
pip install transformers
Dataset
For simplicity, the input data is randomly generated here, so no dataset download or preprocessing is required.
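The snippet below is a minimal sketch of how such dummy GPT2 inputs can be generated; the variable names and sizes are illustrative and are not taken from train_gpt_pp.py.
# a minimal sketch of dummy data generation (illustrative values, not the repo's code)
import torch

VOCAB_SIZE = 50257  # GPT2 vocabulary size
SEQ_LEN = 1024      # tokens per sample
BATCH_SIZE = 8      # samples per batch

# random token ids stand in for real text
input_ids = torch.randint(0, VOCAB_SIZE, (BATCH_SIZE, SEQ_LEN))
attention_mask = torch.ones_like(input_ids)  # no padding, so all ones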
Training
# Run pipeline-parallel training on GPT with the default settings and a dummy dataset.
# You can change the number of GPUs or microbatches in run.sh.
bash run.sh
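For reference, a launcher script like run.sh typically sets the process count and starts the training script on each GPU. The sketch below is hypothetical, using the standard torchrun CLI and an illustrative GPUNUM variable; consult the actual run.sh in this directory for the real flags and defaults.
# hypothetical sketch of a launcher script; the real run.sh may differ
export GPUNUM=4  # number of GPUs, i.e. pipeline stages
torchrun --standalone --nproc_per_node=${GPUNUM} train_gpt_pp.py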