# Large Batch Training Optimization
## Table of contents
- [Large Batch Training Optimization](#large-batch-training-optimization)
  - [Table of contents](#table-of-contents)
  - [📚 Overview](#-overview)
  - [🚀 Quick Start](#-quick-start)
## 📚 Overview
This example lets you quickly try out the large batch training optimization provided by Colossal-AI. A synthetic dataset is used throughout, so you don't need to prepare any data. You can try out the `Lamb` and `Lars` optimizers from Colossal-AI with the following import.
```python
from colossalai.nn.optimizer import Lamb, Lars
```
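Both optimizers can be constructed like standard `torch.optim` optimizers. The sketch below is illustrative only: the hyperparameter values are placeholders, and the exact keyword arguments may vary across Colossal-AI versions, so check the API reference before use.

```python
import torch

from colossalai.nn.optimizer import Lamb, Lars

# A toy model stands in for your real network.
model = torch.nn.Linear(1024, 1024)

# LAMB uses layer-wise adaptive moment estimation to keep
# training stable at very large batch sizes.
lamb_optimizer = Lamb(model.parameters(), lr=1e-3, weight_decay=0.01)

# LARS rescales each layer's update by the ratio of its weight
# norm to its gradient norm (layer-wise adaptive rate scaling).
lars_optimizer = Lars(model.parameters(), lr=1e-2, weight_decay=1e-4)
```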
## 🚀 Quick Start
1. Install PyTorch. A minimal pip install is sketched below; pick the wheel matching your CUDA version from [pytorch.org](https://pytorch.org).
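```bash
# installs the default wheel; see pytorch.org for the command
# matching your CUDA setup
pip install torch
```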
2. Install the dependencies.
```bash
pip install -r requirements.txt
```
3. Run the training script with synthetic data.
```bash
# train with the LARS optimizer on 4 GPUs
colossalai run --nproc_per_node 4 train.py --config config.py --optimizer lars
# train with the LAMB optimizer on 4 GPUs
colossalai run --nproc_per_node 4 train.py --config config.py --optimizer lamb
```
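`--nproc_per_node` sets how many processes (one per GPU) are launched on the node; adjust it to match the number of GPUs on your machine.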