Pipeline Parallelism Demo with GPT2

Requirements

Before you can launch training, you need to install the following requirements.

Install PyTorch

#conda
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch
#pip
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113

Install Colossal-AI v0.2.0 from the official website

pip install colossalai==0.2.0+torch1.12cu11.3 -f https://release.colossalai.org

Install transformers

pip install transformers
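
As an optional sanity check, you can verify that the three packages import correctly and report their versions:

python -c "import torch; import transformers; import colossalai; print(torch.__version__, transformers.__version__, colossalai.__version__)"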

Dataset

For simplicity, the input data is randomly generated here; no real dataset needs to be downloaded.
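
A dummy batch of this kind needs nothing more than torch.randint. The sketch below is only illustrative; the names VOCAB_SIZE and get_dummy_batch are assumptions and may differ from what train_gpt_pp.py actually does.

import torch

# Illustrative sketch of a randomly generated batch for GPT training.
VOCAB_SIZE = 50257  # GPT-2 vocabulary size
def get_dummy_batch(batch_size=8, seq_len=1024):
    # Random token ids stand in for real text; the attention mask is all ones.
    input_ids = torch.randint(0, VOCAB_SIZE, (batch_size, seq_len))
    attention_mask = torch.ones_like(input_ids)
    return input_ids, attention_mask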

Training

# Run pipeline-parallel training on GPT with the default settings and a dummy dataset.
# You can change the number of GPUs or the number of micro-batches in run.sh.
bash run.sh
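
For reference, run.sh boils down to invoking train_gpt_pp.py through a distributed launcher. The sketch below only illustrates that idea; the variable name GPUNUM and the exact launch command are assumptions, so consult run.sh itself for the real settings.

# Hypothetical launcher sketch; variable names and arguments are assumptions.
export GPUNUM=4   # number of GPUs to use
torchrun --standalone --nproc_per_node=${GPUNUM} train_gpt_pp.py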