Run GPT With Colossal-AI

How to Prepare Webtext Dataset

You can download the preprocessed sample dataset for this demo via our Google Drive sharing link.

You can also skip dataset preparation entirely by passing --use_dummy_dataset at runtime.
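If you do prepare the real dataset, the loader in this example is assumed to expect a JSON-lines file in which each line is an object with a "text" field. The snippet below is a minimal sketch for producing such a file; the field name and layout are assumptions, so check dataset/ in this folder for the authoritative format.

```python
import json

# Hypothetical sketch: write a tiny dataset in the JSON-lines layout the
# loader is assumed to expect (one {"text": ...} object per line).
# See dataset/ for the actual parsing logic.
samples = [
    {"text": "The quick brown fox jumps over the lazy dog."},
    {"text": "Colossal-AI scales transformer training across many GPUs."},
]

with open("small-gpt-dataset.json", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```

Point the DATA environment variable at the resulting file when launching training without --use_dummy_dataset.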

Run this Demo

Use the following commands to install prerequisites.

# assumes CUDA 11.3
pip install -r requirements.txt

Use the following commands to execute training.

#!/usr/bin/env sh
# to train on a real dataset, remove --use_dummy_dataset and set DATA:
# export DATA=/path/to/small-gpt-dataset.json

# run on a single node
colossalai run --nproc_per_node=<num_gpus> train_gpt.py --config configs/<config_file> --from_torch --use_dummy_dataset

# run on multiple nodes
colossalai run --nproc_per_node=<num_gpus> \
   --master_addr <hostname> \
   --master_port <port-number> \
   --hosts <list-of-hostname-separated-by-comma> \
   train_gpt.py \
   --config configs/<config_file> \
   --from_torch \
   --use_dummy_dataset

# run on multiple nodes with slurm
srun python \
   train_gpt.py \
   --config configs/<config_file> \
   --host <master_node> \
   --use_dummy_dataset

You can set <config_file> to any file in the configs folder. To get started quickly, try gpt_small_zero3_pp1d.py on a single node first. Each config file contains comments explaining how to change the parallel settings.
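For orientation, a config in this example is a plain Python module that defines hyperparameters and the parallel layout. The sketch below shows the general shape only; the variable names and values are assumptions following the legacy Colossal-AI config convention, and the files in configs/ are the authoritative reference.

```python
# Sketch of a config module (names and values are illustrative
# assumptions; see configs/gpt_small_zero3_pp1d.py for real settings).
BATCH_SIZE = 8
NUM_EPOCHS = 10
SEQ_LEN = 1024

# Parallel layout: number of pipeline stages and the tensor-parallel
# group size and mode.
parallel = dict(
    pipeline=1,
    tensor=dict(size=2, mode="1d"),
)
```

Changing the parallel dict is how you trade off pipeline parallelism against tensor parallelism for your cluster size.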