Finetune BERT on GLUE

🚀 Quick Start

This example provides a training script, finetune.py, for finetuning BERT on the GLUE dataset. The supported arguments are listed below, followed by an example invocation.

  • Training Arguments
    • -t, --task: GLUE task to run. Defaults to mrpc.
    • -p, --plugin: Plugin to use. Choices: torch_ddp, torch_ddp_fp16, gemini, low_level_zero. Defaults to torch_ddp.
    • --target_f1: Target F1 score. An exception is raised if it is not reached. Defaults to None.
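
For example, the arguments can be combined in a single run; the --target_f1 value below is only illustrative:

colossalai run --nproc_per_node 4 finetune.py -t mrpc -p gemini --target_f1 0.86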

Install requirements

pip install -r requirements.txt

Train

# train with torch DDP in fp32
colossalai run --nproc_per_node 4 finetune.py

# train with torch DDP and fp16 mixed precision
colossalai run --nproc_per_node 4 finetune.py -p torch_ddp_fp16

# train with gemini
colossalai run --nproc_per_node 4 finetune.py -p gemini

# train with low level zero
colossalai run --nproc_per_node 4 finetune.py -p low_level_zero
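
Under the hood, the -p/--plugin choice selects a ColossalAI booster plugin inside finetune.py. The sketch below is a minimal illustration of how such a mapping can be written with the Booster API; the function name and control flow are illustrative, not the exact code of this example.

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin, LowLevelZeroPlugin, TorchDDPPlugin


def build_booster(plugin_name: str) -> Booster:
    # Map the -p/--plugin choice to a booster plugin (illustrative names only).
    if plugin_name == "torch_ddp":
        return Booster(plugin=TorchDDPPlugin())
    if plugin_name == "torch_ddp_fp16":
        # Here fp16 is handled via the Booster's mixed-precision setting.
        return Booster(mixed_precision="fp16", plugin=TorchDDPPlugin())
    if plugin_name == "gemini":
        return Booster(plugin=GeminiPlugin())
    if plugin_name == "low_level_zero":
        return Booster(plugin=LowLevelZeroPlugin())
    raise ValueError(f"Unknown plugin: {plugin_name}")


# `colossalai run` starts one process per GPU; launch_from_torch() then
# initializes the distributed environment in each process.
# (Recent ColossalAI versions accept no arguments; older ones required a config dict.)
colossalai.launch_from_torch()
booster = build_booster("gemini")
# model, optimizer, criterion, dataloader, lr_scheduler = booster.boost(
#     model, optimizer, criterion, dataloader, lr_scheduler
# )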

The expected F1 scores are:

Model             | Single-GPU Baseline FP32 | Booster DDP with FP32 | Booster DDP with FP16 | Booster Gemini | Booster Low Level Zero
bert-base-uncased | 0.86                     | 0.88                  | 0.87                  | 0.88           | 0.89