From 315e1433ce4a4f8a7a1c2de6b87ccc63a7203941 Mon Sep 17 00:00:00 2001
From: jiaruifang
Date: Mon, 16 Jan 2023 15:17:27 +0800
Subject: [PATCH] polish readme

---
 examples/language/gpt/README.md        | 17 ++++++++++++++---
 examples/language/gpt/titans/README.md | 12 ++++++------
 2 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/examples/language/gpt/README.md b/examples/language/gpt/README.md
index 8fdf6be3b..7e6acb3d3 100644
--- a/examples/language/gpt/README.md
+++ b/examples/language/gpt/README.md
@@ -39,9 +39,15 @@ If you want to test ZeRO1 and ZeRO2 in Colossal-AI, you need to ensure Colossal-
 For simplicity, the input data is randonly generated here.
 
 ## Training
-We provide two solutions. One utilizes the hybrid parallel strategies of Gemini, DDP/ZeRO, and Tensor Parallelism.
-The other one uses Pipeline Parallelism Only.
-In the future, we are going merge them together and they can be used orthogonally to each other.
+We provide two stable solutions.
+One utilizes Gemini to implement hybrid parallel strategies combining Gemini, DDP/ZeRO, and Tensor Parallelism for a Hugging Face GPT model.
+The other uses [Titans](https://github.com/hpcaitech/Titans), a model zoo of distributed models maintained by Colossal-AI, to implement hybrid parallel strategies of TP + ZeRO + PP.
+
+We recommend using Gemini to quickly run your model in a distributed manner.
+It doesn't require significant changes to the model structure, so you can easily apply it to a new model.
+Use Titans as a more advanced option when you need to pursue extreme performance.
+Titans already includes some typical models, such as ViT and GPT.
+However, it requires some effort to get started when facing a new model structure.
 
 ### GeminiDPP/ZeRO + Tensor Parallelism
 ```bash
@@ -56,6 +62,11 @@ The `train_gpt_demo.py` provides three distributed plans, you can choose the pla
 - Pytorch DDP
 - Pytorch ZeRO
 
+### Titans (Tensor Parallelism) + ZeRO + Pipeline Parallelism
+
+Titans provides a customized GPT model, which uses distributed operators as building blocks.
+In [./titans/README.md](./titans/README.md), we provide hybrid parallelism with ZeRO, TP and PP.
+You can switch parallel strategies using a config file.
 
 ## Performance
 
diff --git a/examples/language/gpt/titans/README.md b/examples/language/gpt/titans/README.md
index 14c07442b..9fc26ad80 100644
--- a/examples/language/gpt/titans/README.md
+++ b/examples/language/gpt/titans/README.md
@@ -5,7 +5,7 @@
 
 You can download the preprocessed sample dataset for this demo via our [Google Drive sharing link](https://drive.google.com/file/d/1QKI6k-e2gJ7XgS8yIpgPPiMmwiBP_BPE/view?usp=sharing).
 
-You can also avoid dataset preparation by using `--use_dummy_data` during running.
+You can also avoid dataset preparation by using `--use_dummy_dataset` during running.
 
 ## Run this Demo
 
@@ -13,15 +13,15 @@ Use the following commands to install prerequisites.
 
 ```bash
 # assuming using cuda 11.3
-conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
-pip install colossalai==0.1.9+torch1.11cu11.3 -f https://release.colossalai.org
+pip install -r requirements.txt
 ```
 
 Use the following commands to execute training.
 
 ```Bash
 #!/usr/bin/env sh
-export DATA=/path/to/small-gpt-dataset.json'
+# if you want to use a real dataset, remove --use_dummy_dataset and uncomment the export below
+# export DATA=/path/to/small-gpt-dataset.json
 
 # run on a single node
 colossalai run --nproc_per_node=<num_gpus> train_gpt.py --config configs/<config_file> --from_torch
@@ -34,14 +34,14 @@ colossalai run --nproc_per_node=<num_gpus> \
    train_gpt.py \
    --config configs/<config_file> \
    --from_torch \
-   --use_dummy_data
+   --use_dummy_dataset
 
 # run on multiple nodes with slurm
 srun python \
    train_gpt.py \
    --config configs/<config_file> \
    --host <node_name> \
-   --use_dummy_data
+   --use_dummy_dataset
 ```
 
 
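Below is a minimal sketch of how one might apply this patch and try the dummy-dataset run it documents. It only reuses commands that already appear in the diff; the patch filename is illustrative, and `<num_gpus>` / `<config_file>` are placeholders to fill in for your setup.

```bash
# apply the patch on top of a ColossalAI checkout (filename is illustrative)
git am 0001-polish-readme.patch

# install prerequisites for the Titans GPT example, as the updated README describes
cd examples/language/gpt/titans
pip install -r requirements.txt

# single-node run; --use_dummy_dataset skips dataset preparation entirely
colossalai run --nproc_per_node=<num_gpus> train_gpt.py --config configs/<config_file> --from_torch --use_dummy_dataset
```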