# Colossal-AI
[![logo](./docs/images/Colossal-AI_logo.png)](https://www.colossalai.org/)
<div align="center">
<h3> <a href="https://arxiv.org/abs/2110.14883"> Paper </a> |
<a href="https://www.colossalai.org/"> Documentation </a> |
<a href="https://github.com/hpcaitech/ColossalAI-Examples"> Examples </a> |
<a href="https://github.com/hpcaitech/ColossalAI/discussions"> Forum </a> |
<a href="https://medium.com/@hpcaitech"> Blog </a></h3>
<br/>
[![Build](https://github.com/hpcaitech/ColossalAI/actions/workflows/PR_CI.yml/badge.svg)](https://github.com/hpcaitech/ColossalAI/actions/workflows/PR_CI.yml)
[![Documentation](https://readthedocs.org/projects/colossalai/badge/?version=latest)](https://colossalai.readthedocs.io/en/latest/?badge=latest)
[![codebeat badge](https://codebeat.co/badges/bfe8f98b-5d61-4256-8ad2-ccd34d9cc156)](https://codebeat.co/projects/github-com-hpcaitech-colossalai-main)
[![slack badge](https://img.shields.io/badge/Slack-join-blueviolet?logo=slack)](https://join.slack.com/t/colossalaiworkspace/shared_invite/zt-z7b26eeb-CBp7jouvu~r0~lcFzX832w)
[![WeChat badge](https://img.shields.io/badge/微信-加入-green?logo=wechat)](./docs/images/WeChat.png)
| [English](README.md) | [中文](README-zh-Hans.md) |
</div>
An integrated large-scale model training system with efficient parallelization techniques.
## Features
Colossal-AI provides a collection of parallel training components. We aim to let you write distributed deep learning models the same way you write single-GPU models, and we provide friendly tools to kick-start distributed training in a few lines (see the config sketch after this feature list).
- Data Parallelism
- Pipeline Parallelism
- 1D, 2D, 2.5D, 3D tensor parallelism
- Sequence parallelism
- Friendly trainer and engine
- Extensible for new parallelism
- Mixed Precision Training
- Zero Redundancy Optimizer (ZeRO)
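
Most of these features are switched on through a configuration file rather than code changes. Below is a minimal sketch of such a config; the key names follow the documented conventions at the time of writing and should be checked against your installed version.
```python
# config.py -- a minimal sketch of a Colossal-AI config file.
# The key names below are assumptions based on the documented
# conventions; verify them against your installed version.
from colossalai.amp import AMP_TYPE

# 2 pipeline stages combined with 2D tensor parallelism across
# 4 GPUs, i.e. 8 GPUs per data-parallel replica
parallel = dict(
    pipeline=2,
    tensor=dict(size=4, mode='2d'),
)

# mixed precision training via PyTorch AMP
fp16 = dict(mode=AMP_TYPE.TORCH)
```
This config file is what gets passed to `colossalai.launch` in the Quick View section below.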
## Examples
### ViT
<img src="./docs/images/ViT.png" width="450" />
- 14x larger batch size and 5x faster training with a tensor parallel size of 64
### GPT-3
<img src="./docs/images/GPT3.png" width=700/>
- Save 50% of GPU resources, or gain 10.7% acceleration
### GPT-2
<img src="./docs/images/GPT2.png" width=800/>
- 11x lower GPU memory consumption, or superlinear scaling
### BERT
<img src="./docs/images/BERT.png" width=800/>
- 2x faster training, or 50% longer sequence length
Please visit our [documentation and tutorials](https://www.colossalai.org/) for more details.
## Installation
### PyPI
```bash
pip install colossalai
```
This command installs the CUDA extensions if you have CUDA, NVCC, and PyTorch installed.

If you don't want to install the CUDA extensions, add `--global-option="--no_cuda_ext"`:
```bash
pip install colossalai --global-option="--no_cuda_ext"
```
If you want to use `ZeRO`, you can run:
```bash
pip install colossalai[zero]
```
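
Once installed, ZeRO is likewise enabled through the config file. The sketch below is heavily hedged: the DeepSpeed-style key names are assumptions based on the documentation for this release and may differ in your version.
```python
# config.py -- sketch: enable ZeRO stage 3 and offload optimizer
# states to CPU. All key names here are assumptions; check the
# documentation for your installed version.
zero = dict(
    level=3,
    offload_optimizer_config=dict(device='cpu', pin_memory=True),
)
```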
### Install From Source
> Installing from source gives you the version on the main branch of this repository. Feel free to raise an issue if you encounter any problems. :)
```shell
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
# install dependencies
pip install -r requirements/requirements.txt
# install colossalai
pip install .
```
If you don't want to install and enable CUDA kernel fusion (required when using a fused optimizer):
```shell
pip install --global-option="--no_cuda_ext" .
```
## Use Docker
Run the following command to build a Docker image from the provided Dockerfile.
```bash
cd ColossalAI
docker build -t colossalai ./docker
```
Run the following command to start a Docker container in interactive mode.
```bash
docker run -ti --gpus all --rm --ipc=host colossalai bash
```
## Community
Join the Colossal-AI community on [Forum](https://github.com/hpcaitech/ColossalAI/discussions),
[Slack](https://join.slack.com/t/colossalaiworkspace/shared_invite/zt-z7b26eeb-CBp7jouvu~r0~lcFzX832w),
and [WeChat](./docs/images/WeChat.png "qrcode") to share your suggestions, advice, and questions with our engineering team.
## Contributing
If you wish to contribute to this project, please follow the guidelines in [Contributing](./CONTRIBUTING.md).
Thanks so much to all of our amazing contributors!
<a href="https://github.com/hpcaitech/ColossalAI/graphs/contributors"><img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/contributor_avatar.png" width="800px"></a>
*The order of contributor avatars is randomly shuffled.*
## Quick View
### Start Distributed Training in a Few Lines
```python
import colossalai
from colossalai.utils import get_dataloader

# my_config can be a path to a config file or a dictionary object
# 'localhost' is only for a single node; specify the node name
# when using multiple nodes
colossalai.launch(
    config=my_config,
    rank=rank,
    world_size=world_size,
    backend='nccl',
    port=29500,
    host='localhost'
)

# build your model
model = ...

# build your dataset; the dataloader will have a distributed
# data sampler by default
train_dataset = ...
train_dataloader = get_dataloader(dataset=train_dataset,
                                  shuffle=True)

# build your optimizer
optimizer = ...

# build your loss function
criterion = ...

# initialize colossalai
engine, train_dataloader, _, _ = colossalai.initialize(
    model=model,
    optimizer=optimizer,
    criterion=criterion,
    train_dataloader=train_dataloader
)

# start training
engine.train()
for epoch in range(NUM_EPOCHS):
    for data, label in train_dataloader:
        engine.zero_grad()
        output = engine(data)
        loss = engine.criterion(output, label)
        engine.backward(loss)
        engine.step()
```
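
If you prefer starting the script through PyTorch's distributed launcher, the rank, world size, host, and port can be read from the environment instead of being passed by hand. A minimal sketch, assuming the `colossalai.launch_from_torch` helper documented for this purpose:
```python
import colossalai

# rank, world size, host and port are read from the environment
# variables set by torchrun / torch.distributed.launch
colossalai.launch_from_torch(config='./config.py')
```
You would then start, for example, 4 processes on one node with `torchrun --nproc_per_node 4 train.py` (or `python -m torch.distributed.launch` on older PyTorch versions).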
### Write a Simple 2D Parallel Model
Let's say we have a huge MLP model whose very large hidden size makes it difficult to fit into a single GPU. We can distribute the model weights across GPUs in a 2D mesh while still writing the model in a familiar way.
```python
from colossalai.nn import Linear2D
import torch.nn as nn


class MLP_2D(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x
```
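
2D tensor parallelism arranges the processes in a square mesh, so the tensor parallel size must be a square number. A minimal sketch of the matching config entry, reusing the (assumed) key names from the earlier config sketch:
```python
# config.py -- sketch: run the Linear2D layers above on a 2 x 2
# GPU mesh; key names are the same assumptions as before
parallel = dict(
    tensor=dict(size=4, mode='2d'),
)
```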
## Cite Us
```
@article{bian2021colossal,
title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
journal={arXiv preprint arXiv:2110.14883},
year={2021}
}
```