mirror of https://github.com/hpcaitech/ColossalAI
# Pipeline Parallelism Demo with GPT2
## Requirements
Before you can launch training, you need to install the following requirements.
### Install PyTorch
```bash
# conda
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch

# pip
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113
```
### Install [Colossal-AI v0.2.0](https://colossalai.org/download/) From Official Website
```bash
pip install colossalai==0.2.0+torch1.12cu11.3 -f https://release.colossalai.org
```
### Install transformers
```bash
pip install transformers
```
## Dataset
For simplicity, the input data is randomly generated.
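A minimal sketch of what such dummy-data generation could look like (the function and constant names here are illustrative, not the demo's actual code):

```python
import random

VOCAB_SIZE = 50257  # GPT-2 vocabulary size
SEQ_LEN = 1024

def get_dummy_batch(batch_size, seq_len=SEQ_LEN):
    # Random token IDs stand in for real tokenized text; the
    # attention mask is all ones since nothing is padded.
    input_ids = [[random.randrange(VOCAB_SIZE) for _ in range(seq_len)]
                 for _ in range(batch_size)]
    attention_mask = [[1] * seq_len for _ in range(batch_size)]
    return input_ids, attention_mask
```

Because the data is synthetic, the loss values are meaningless; the demo only measures that the pipeline schedule runs end to end.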
## Training
```bash
# Run pipeline parallelism on GPT-2 with the default settings and a dummy dataset.
# You can change the number of GPUs or micro-batches in run.sh.
bash run.sh
```
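For background, the micro-batch number controls how each global batch is sliced before being pushed through the pipeline stages, which is what lets different stages work on different micro-batches concurrently. A rough illustration of that slicing (a hypothetical helper, not code taken from run.sh):

```python
def split_into_microbatches(batch, num_microbatches):
    # The pipeline schedule feeds micro-batches through the stages
    # one after another, so the batch size must divide evenly.
    size, rem = divmod(len(batch), num_microbatches)
    assert rem == 0, "batch size must be divisible by num_microbatches"
    return [batch[i * size:(i + 1) * size] for i in range(num_microbatches)]
```

More micro-batches generally reduce pipeline bubble time at the cost of smaller per-step work on each stage.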