
Colossal-AI


An integrated large-scale model training system with efficient parallelization techniques.

Paper | Documentation | Examples | Forum | Blog


| English | 中文 |


Why Colossal-AI

Prof. James Demmel (UC Berkeley): Colossal-AI makes distributed training efficient, easy and scalable.

(back to top)

Features

Colossal-AI provides a collection of parallel training components. We aim to let you write distributed deep learning models the same way you write models on your laptop, and we provide user-friendly tools to kick-start distributed training in a few lines.

(back to top)

Demo

ViT

  • 14x larger batch size, and 5x faster training for Tensor Parallelism = 64

GPT-3

  • Save 50% GPU resources, and 10.7% acceleration

GPT-2

  • 11x lower GPU memory consumption, and superlinear scaling efficiency with Tensor Parallelism
  • 24x larger model size on the same hardware
  • over 3x acceleration

BERT

  • 2x faster training, or 50% longer sequence length

PaLM

Please visit our documentation and tutorials for more details.

(back to top)

Installation

PyPI

pip install colossalai

This command will install the CUDA extension if you have installed CUDA, NVCC and torch.

If you don't want to install the CUDA extension, add --global-option="--no_cuda_ext", like this:

pip install colossalai --global-option="--no_cuda_ext"
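
To quickly check that the installation works, you can run a short Python snippet. This is a minimal sketch; it assumes the colossalai package exposes a __version__ attribute and that torch is already installed.

import torch
import colossalai

# print the installed Colossal-AI version
print(colossalai.__version__)
# True if PyTorch can see your CUDA devices, i.e. the CUDA extension is usable
print(torch.cuda.is_available())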

Install From Source

The version of Colossal-AI will be in line with the main branch of the repository. Feel free to create an issue if you encounter any problems. :-)

git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
# install dependency
pip install -r requirements/requirements.txt

# install colossalai
pip install .

If you don't want to install and enable CUDA kernel fusion (the installation is compulsory when using a fused optimizer):

pip install --global-option="--no_cuda_ext" .

(back to top)

Use Docker

Run the following command to build a docker image from the Dockerfile provided.

cd ColossalAI
docker build -t colossalai ./docker

Run the following command to start the docker container in interactive mode.

docker run -ti --gpus all --rm --ipc=host colossalai bash

(back to top)

Community

Join the Colossal-AI community on Forum, Slack, and WeChat to share your suggestions, feedback, and questions with our engineering team.

Contributing

If you wish to contribute to this project, please follow the guidelines in Contributing.

Thanks so much to all of our amazing contributors!

The order of contributor avatars is randomly shuffled.

(back to top)

Quick View

Start Distributed Training in a Few Lines

import colossalai
from colossalai.utils import get_dataloader


# my_config can be path to config file or a dictionary obj
# 'localhost' is only for single node, you need to specify
# the node name if using multiple nodes
colossalai.launch(
    config=my_config,
    rank=rank,
    world_size=world_size,
    backend='nccl',
    port=29500,
    host='localhost'
)

# build your model
model = ...

# build your dataset; the dataloader will use a distributed data
# sampler by default
train_dataset = ...
train_dataloader = get_dataloader(dataset=train_dataset,
                                  shuffle=True)


# build your optimizer
optimizer = ...

# build your loss function
criterion = ...

# initialize colossalai
engine, train_dataloader, _, _ = colossalai.initialize(
    model=model,
    optimizer=optimizer,
    criterion=criterion,
    train_dataloader=train_dataloader
)

# start training
engine.train()
for epoch in range(NUM_EPOCHS):
    for data, label in train_dataloader:
        engine.zero_grad()
        output = engine(data)
        loss = engine.criterion(output, label)
        engine.backward(loss)
        engine.step()
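
In practice you normally do not fill in rank and world_size by hand. Below is a minimal sketch of the same launch step, assuming colossalai.launch_from_torch, which reads the rank, world size, host and port from the environment variables set by PyTorch launchers such as torchrun.

import colossalai

# config can again be a path to a config file or a dictionary;
# an empty dict keeps the default (pure data parallel) settings
colossalai.launch_from_torch(config=dict())

# the rest (model, optimizer, criterion, colossalai.initialize, training loop)
# is identical to the example above

Each process would then be started with a command such as torchrun --nproc_per_node=8 train.py.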

Write a Simple 2D Parallel Model

Let's say we have a huge MLP model whose very large hidden size makes it difficult to fit into a single GPU. We can then distribute the model weights across GPUs in a 2D mesh while still writing the model in a familiar way.

from colossalai.nn import Linear2D
import torch.nn as nn


class MLP_2D(nn.Module):

    def __init__(self):
        super().__init__()
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x
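
Before the Linear2D layers can be used, 2D tensor parallelism has to be enabled in the launch configuration. Below is a minimal sketch of such a configuration, assuming 4 GPUs arranged as a 2 x 2 device mesh (the field names follow the parallel config convention described in the documentation).

import colossalai

# example config (assumed setup): 4 GPUs form a 2 x 2 mesh for 2D tensor parallelism
CONFIG = dict(
    parallel=dict(
        data=1,
        pipeline=1,
        tensor=dict(size=4, mode='2d'),
    )
)

colossalai.launch_from_torch(config=CONFIG)

model = MLP_2D()  # each GPU now holds one quarter of every Linear2D weight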

(back to top)

Cite Us

@article{bian2021colossal,
  title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
  author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
  journal={arXiv preprint arXiv:2110.14883},
  year={2021}
}

(back to top)