Train ViT on CIFAR-10 from scratch

🚀 Quick Start

This example provides a training script that trains ViT on the CIFAR-10 dataset from scratch.
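
For orientation, the core setup might look roughly like the sketch below. This is a hedged illustration, not the exact contents of train.py: the use of `transformers.ViTForImageClassification`, `torchvision.datasets.CIFAR10`, and the specific transforms are assumptions.

```python
# Hedged sketch of a from-scratch ViT + CIFAR-10 setup (assumed, not the exact train.py code).
from torchvision import datasets, transforms
from transformers import ViTConfig, ViTForImageClassification

transform = transforms.Compose([
    transforms.Resize(224),   # upsample 32x32 CIFAR-10 images to the default ViT input size
    transforms.ToTensor(),    # normalization omitted for brevity
])
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)

# Randomly initialized ViT, i.e. trained from scratch (no pretrained weights).
config = ViTConfig(num_labels=10)
model = ViTForImageClassification(config)
```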

  • Training Arguments
    • -p, --plugin: Plugin to use. Choices: torch_ddp, torch_ddp_fp16, low_level_zero. Defaults to torch_ddp. (See the sketch after this list for how each choice might map to a Booster configuration.)
    • -r, --resume: Resume from the checkpoint file path. Defaults to -1, which means no resuming.
    • -c, --checkpoint: The folder to save checkpoints. Defaults to ./checkpoint.
    • -i, --interval: Epoch interval to save checkpoints. Defaults to 5. If set to 0, no checkpoint will be saved.
    • --target_acc: Target accuracy. An exception is raised if it is not reached. Defaults to None.
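
As mentioned for the plugin option above, the sketch below shows how each plugin choice could map to a ColossalAI Booster. The helper name `build_booster` and the exact plugin arguments are assumptions; consult train.py for the actual wiring.

```python
# Hedged sketch: mapping the -p/--plugin choice to a Booster (assumed, not the exact train.py code).
# The distributed environment is assumed to be initialized already (e.g. via colossalai.launch_from_torch).
from colossalai.booster import Booster
from colossalai.booster.plugin import LowLevelZeroPlugin, TorchDDPPlugin

def build_booster(plugin_name: str) -> Booster:
    if plugin_name == "torch_ddp":
        return Booster(plugin=TorchDDPPlugin())
    if plugin_name == "torch_ddp_fp16":
        # plain DDP, with mixed precision handled by the booster
        return Booster(plugin=TorchDDPPlugin(), mixed_precision="fp16")
    if plugin_name == "low_level_zero":
        return Booster(plugin=LowLevelZeroPlugin(stage=2))
    raise ValueError(f"Unknown plugin: {plugin_name}")
```

The returned booster would then wrap the model, optimizer, and dataloader (via `booster.boost(...)`) before the training loop starts.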

Install requirements

pip install -r requirements.txt

Train

# train with torch DDP in fp32
colossalai run --nproc_per_node 4 train.py -c ./ckpt-fp32

# train with torch DDP with mixed precision (fp16)
colossalai run --nproc_per_node 4 train.py -c ./ckpt-fp16 -p torch_ddp_fp16

# train with low level zero
colossalai run --nproc_per_node 4 train.py -c ./ckpt-low_level_zero -p low_level_zero
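
A brief sketch of how the -c/--checkpoint, -i/--interval, and --target_acc options could be implemented around the booster is given below; the helper names and their placement in the training loop are assumptions, not the exact train.py implementation.

```python
# Hedged sketch of the checkpointing and target-accuracy logic implied by the CLI options above.
import os

def save_checkpoint_if_due(booster, model, epoch: int, interval: int, ckpt_dir: str) -> None:
    # Save every `interval` epochs; interval == 0 disables checkpointing.
    if interval > 0 and (epoch + 1) % interval == 0:
        os.makedirs(ckpt_dir, exist_ok=True)
        booster.save_model(model, os.path.join(ckpt_dir, f"model_{epoch}.pth"))

def check_target_accuracy(accuracy: float, target_acc) -> None:
    # Called after the final epoch; raises if --target_acc was set but not reached.
    if target_acc is not None and accuracy < target_acc:
        raise RuntimeError(f"Accuracy {accuracy:.4f} is below target {target_acc:.4f}")
```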

The expected accuracy results are as follows:

| Model | Single-GPU Baseline FP32 | Booster DDP with FP32 | Booster DDP with FP16 | Booster Low Level Zero |
| --- | --- | --- | --- | --- |
| ViT | 83.00% | 84.03% | 84.00% | 84.43% |