# Large Batch Training Optimization
## Table of contents
- [Large Batch Training Optimization](#large-batch-training-optimization)
  - [Table of contents](#table-of-contents)
  - [📚 Overview](#-overview)
  - [🚀 Quick Start](#-quick-start)
## 📚 Overview
This example lets you quickly try out the large batch training optimization provided by Colossal-AI. We use a synthetic dataset to walk through the process, so you don't need to prepare any dataset. You can try out the `Lamb` and `Lars` optimizers from Colossal-AI with the following code.
```python
from colossalai.nn.optimizer import Lamb, Lars
```
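
For reference, below is a minimal sketch of plugging one of these optimizers into an ordinary PyTorch training step. The toy model, hyperparameters, and data shapes are illustrative assumptions rather than values from this example, and the constructor arguments assume the usual `torch.optim` convention; check your Colossal-AI version for the exact signatures.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

from colossalai.nn.optimizer import Lamb, Lars

# Hypothetical toy model; any nn.Module works the same way.
model = nn.Linear(32, 10)

# Assumption: both optimizers follow the standard torch.optim
# constructor convention (params, lr=..., weight_decay=...).
optimizer = Lamb(model.parameters(), lr=1e-3, weight_decay=1e-2)
# optimizer = Lars(model.parameters(), lr=1e-3, weight_decay=1e-2)

# One standard training step on synthetic data; the large-batch
# adaptation happens inside optimizer.step().
inputs = torch.randn(8, 32)
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(inputs), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```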
## 🚀 Quick Start
1. Install PyTorch.
2. Install the dependencies.
```bash
pip install -r requirements.txt
```
3. Run the training scripts with synthetic data.
```bash
# run on 4 GPUs
# run with lars
colossalai run --nproc_per_node 4 train.py --config config.py --optimizer lars
# run with lamb
colossalai run --nproc_per_node 4 train.py --config config.py --optimizer lamb
```
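
The `--nproc_per_node` flag controls how many GPUs the script is launched on (4 in the commands above); adjust it to match the number of GPUs on your machine.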