mirror of https://github.com/hpcaitech/ColossalAI
polish readme
parent 37baea20cb
commit 315e1433ce
@ -39,9 +39,15 @@ If you want to test ZeRO1 and ZeRO2 in Colossal-AI, you need to ensure Colossal-
For simplicity, the input data is randomly generated here.
## Training
We provide two stable solutions.
One uses Gemini to implement a hybrid parallel strategy combining Gemini, DDP/ZeRO, and Tensor Parallelism for a Hugging Face GPT model.
The other uses [Titans](https://github.com/hpcaitech/Titans), a model zoo for distributed execution maintained by Colossal-AI, to implement the hybrid parallel strategy of TP + ZeRO + PP.

We recommend using Gemini to quickly run your model in a distributed manner.
It does not require significant changes to the model structure, so you can easily apply it to a new model.
Use Titans as an advanced option when you need to pursue more extreme performance.
Titans includes some typical models, such as ViT and GPT.
However, it requires some effort to get started with a new model structure.
### GeminiDDP/ZeRO + Tensor Parallelism
```bash
# (training command elided in this diff hunk)
```
@ -56,6 +62,11 @@ The `train_gpt_demo.py` provides three distributed plans, you can choose the pla
- PyTorch DDP
- PyTorch ZeRO
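As a rough sketch of how a plan is selected, the demo is typically launched through the Colossal-AI launcher with a plan flag. The flag name `--distplan` and its values below are assumptions for illustration; check `python train_gpt_demo.py --help` for the actual interface.

```bash
# Launch sketch (assumption: the plan is chosen via a --distplan flag).
# Run the hybrid Colossal-AI plan on 4 GPUs of a single node:
colossalai run --nproc_per_node=4 train_gpt_demo.py --distplan "colossalai"

# Run the plain PyTorch DDP plan for comparison:
colossalai run --nproc_per_node=4 train_gpt_demo.py --distplan "torch_ddp"
```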
### Titans (Tensor Parallelism) + ZeRO + Pipeline Parallelism
Titans provides a customized GPT model, which uses distributed operators as building blocks.
In [./titans/README.md](./titans/README.md), we provide hybrid parallelism with ZeRO, TP, and PP.
You can switch parallel strategies using a config file.
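For instance, the same training entry point can be pointed at different config files to switch the strategy. The config file names below are hypothetical placeholders rather than files guaranteed to exist in the repository.

```bash
# Hypothetical config names; replace them with the configs shipped in ./titans.
# Tensor parallelism only:
colossalai run --nproc_per_node=<num_gpus> train_gpt.py --config configs/gpt2_tp.py --from_torch

# Tensor + pipeline parallelism with ZeRO:
colossalai run --nproc_per_node=<num_gpus> train_gpt.py --config configs/gpt2_tp_pp_zero.py --from_torch
```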
## Performance
@ -5,7 +5,7 @@
You can download the preprocessed sample dataset for this demo via our [Google Drive sharing link](https://drive.google.com/file/d/1QKI6k-e2gJ7XgS8yIpgPPiMmwiBP_BPE/view?usp=sharing).
You can also avoid dataset preparation by using `--use_dummy_dataset` when running.
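If you prefer the real sample dataset, one way to fetch it from the command line is with the third-party `gdown` utility (an assumption, not a tool required by this example); the file ID comes from the Google Drive link above.

```bash
# Assumes the third-party gdown tool; the output filename is a placeholder.
pip install gdown
gdown "https://drive.google.com/uc?id=1QKI6k-e2gJ7XgS8yIpgPPiMmwiBP_BPE" -O small-gpt-dataset.json
export DATA=$(pwd)/small-gpt-dataset.json
```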
## Run this Demo
@ -13,15 +13,15 @@ Use the following commands to install prerequisites.
```bash
# assuming CUDA 11.3 is used
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
pip install colossalai==0.1.9+torch1.11cu11.3 -f https://release.colossalai.org
pip install -r requirements.txt
```
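Before training, it can help to confirm that Colossal-AI imports cleanly and the GPUs are visible; this sanity check is a suggestion, not part of the original instructions.

```bash
# Optional environment sanity check (not required by the example).
python -c "import colossalai; print(colossalai.__version__)"
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```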
Use the following commands to execute training.
```bash
#!/usr/bin/env sh
# if you want to use a real dataset, remove --use_dummy_dataset and uncomment the line below
# export DATA=/path/to/small-gpt-dataset.json
# run on a single node
colossalai run --nproc_per_node=<num_gpus> train_gpt.py --config configs/<config_file> --from_torch
@ -34,14 +34,14 @@ colossalai run --nproc_per_node=<num_gpus> \
train_gpt.py \
--config configs/<config_file> \
--from_torch \
--use_dummy_dataset
# run on multiple nodes with slurm
srun python \
train_gpt.py \
--config configs/<config_file> \
--host <master_node> \
--use_dummy_dataset
```
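If you submit through SLURM as a batch job instead of an interactive `srun`, the same command can be wrapped in a submission script. The resource numbers below are placeholders to adapt to your cluster.

```bash
#!/usr/bin/env sh
# Hypothetical sbatch wrapper; node and GPU counts are placeholders.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --gres=gpu:8

srun python \
    train_gpt.py \
    --config configs/<config_file> \
    --host <master_node> \
    --use_dummy_dataset
```

Submit it with `sbatch <script_name>.sh` (the script name is a placeholder).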