Colossal-AI

Making large AI models cheaper, faster and more accessible.

An integrated large-scale model training system with efficient parallelization techniques.

Paper: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training (arXiv:2110.14883)

Blog: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training

Installation

PyPI

pip install colossalai

Install From Source

git clone git@github.com:hpcaitech/ColossalAI.git
cd ColossalAI
# install dependencies
pip install -r requirements/requirements.txt

# install colossalai
pip install .

Install and enable CUDA kernel fusion (required when using fused optimizers):

pip install -v --no-cache-dir --global-option="--cuda_ext" .
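
Once the extension is built, the fused optimizers can be used like any other PyTorch optimizer. A minimal sketch, assuming the fused LAMB optimizer is exposed as `colossalai.nn.optimizer.FusedLAMB` (the exact import path may differ between versions):

import torch.nn as nn

# Assumed import path -- the fused optimizers may live elsewhere
# depending on the Colossal-AI version.
from colossalai.nn.optimizer import FusedLAMB

model = nn.Linear(1024, 1024).cuda()
# The kernels built with --cuda_ext perform the optimizer update in a
# single fused CUDA kernel instead of many small elementwise ops.
optimizer = FusedLAMB(model.parameters(), lr=1e-3)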

Documentation

Quick View

Start Distributed Training in a Few Lines

import colossalai
from colossalai.trainer import Trainer
from colossalai.core import global_context as gpc

# Build the engine (model, optimizer and loss wrapped together) and the
# dataloaders from the configuration supplied at launch time.
engine, train_dataloader, test_dataloader = colossalai.initialize()

trainer = Trainer(engine=engine,
                  verbose=True)
trainer.fit(
    train_dataloader=train_dataloader,
    test_dataloader=test_dataloader,
    epochs=gpc.config.num_epochs,  # read from the config file
    hooks_cfg=gpc.config.hooks,    # hook settings, also from the config
    display_progress=True,
    test_interval=5
)
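
The values read from `gpc.config` (for example `num_epochs` and `hooks`) come from a user-supplied config file. A sketch of what such a file might contain; the hook names below are illustrative assumptions, not necessarily the exact names shipped by the library:

# config.py -- illustrative sketch of a training config.
# `num_epochs` and `hooks` match the attributes the trainer example
# reads via gpc.config; the hook types are hypothetical placeholders.
num_epochs = 60

hooks = [
    dict(type='LossHook'),              # hypothetical: log training loss
    dict(type='AccuracyHook'),          # hypothetical: evaluate accuracy
    dict(type='LogMetricByEpochHook'),  # hypothetical: log metrics per epoch
]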

Write a Simple 2D Parallel Model

Let's say we have a huge MLP model whose very large hidden size makes it difficult to fit on a single GPU. We can then distribute the model weights across GPUs in a 2D mesh while still writing the model in a familiar way.

from colossalai.nn import Linear2D
import torch.nn as nn


class MLP_2D(nn.Module):

    def __init__(self):
        super().__init__()
        # Each Linear2D layer shards its weight across a 2D mesh of GPUs.
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x
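
For the 2D layers to work, the processes have to be arranged as a 2D mesh, which Colossal-AI reads from the config. A sketch of the relevant section, assuming the 2D mode is selected via a `tensor` entry with `mode='2d'` (field names follow the same config-file convention as above and may vary by version):

# config.py -- illustrative parallelism section for the 2D MLP above.
# With tensor-parallel size 4 and mode '2d', four GPUs form a 2x2 mesh
# and each Linear2D weight is split into 2x2 blocks across it.
parallel = dict(
    pipeline=1,                      # assumed: no pipeline parallelism
    tensor=dict(size=4, mode='2d'),  # assumed field names for the 2D mesh
)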

Features

Colossal-AI provides a collection of parallel training components. We aim to let you write distributed deep learning models just as you would write a single-GPU model, and we provide friendly tools to kick-start distributed training in a few lines.

Cite Us

@article{bian2021colossal,
  title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
  author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
  journal={arXiv preprint arXiv:2110.14883},
  year={2021}
}