Colossal-AI

An integrated large-scale model training system with efficient parallelization techniques, making large AI models cheaper, faster, and more accessible.

Paper: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training (arXiv:2110.14883)

Blog: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training

Installation

PyPI

pip install colossalai

Install From Source

git clone git@github.com:hpcaitech/ColossalAI.git
cd ColossalAI
# install dependencies
pip install -r requirements/requirements.txt

# install colossalai
pip install .

Install with CUDA kernel fusion enabled (required when using the fused optimizer)

pip install -v --no-cache-dir --global-option="--cuda_ext" .
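
After either installation path, a quick sanity check is to import the package. This is a minimal sketch; it only verifies that the Python package is installed, not that the CUDA extension was built.

import colossalai  # importing the top-level package verifies the installation

print("Colossal-AI imported successfully")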

Documentation

Quick View

Start Distributed Training in a Few Lines

import colossalai
from colossalai.engine import Engine
from colossalai.trainer import Trainer
from colossalai.core import global_context as gpc

# initialize() sets up the distributed environment and builds the training
# components from the configuration file
model, train_dataloader, test_dataloader, criterion, optimizer, schedule, lr_scheduler = colossalai.initialize()

# the engine wraps the model, loss, optimizer and schedule into one training unit
engine = Engine(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    lr_scheduler=lr_scheduler,
    schedule=schedule
)

# the trainer drives the training loop and runs the hooks defined in the config
trainer = Trainer(engine=engine,
                  hooks_cfg=gpc.config.hooks,
                  verbose=True)
trainer.fit(
    train_dataloader=train_dataloader,
    test_dataloader=test_dataloader,
    max_epochs=gpc.config.num_epochs,
    display_progress=True,
    test_interval=5
)
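
The trainer above reads hooks and num_epochs from the global config (gpc.config), which colossalai.initialize() loads from a configuration file. Below is a hedged sketch of what such a file might contain; apart from num_epochs and hooks, the field names and the hook type string are illustrative assumptions, so refer to the files under configs/ and examples/ for authoritative versions.

# config.py (hypothetical minimal configuration)

num_epochs = 60                      # read by the trainer as gpc.config.num_epochs

parallel = dict(                     # how the available GPUs are split (assumed fields)
    data=1,                          # data-parallel size
    tensor=dict(size=1, mode=None),  # tensor-parallel size and mode
)

hooks = [                            # read by the trainer as gpc.config.hooks
    dict(type='LogMetricByEpochHook'),  # hypothetical hook name, for illustration only
]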

Write a Simple 2D Parallel Model

Suppose we have a huge MLP model whose very large hidden size makes it too big to fit into a single GPU. We can distribute its weights across GPUs in a 2D mesh while still writing the model in a familiar way.

from colossalai.nn import Linear2D
import torch.nn as nn


class MLP_2D(nn.Module):

    def __init__(self):
        super().__init__()
        # each Linear2D layer shards its weight and bias across a 2D mesh of GPUs
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x
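
Linear2D expects the processes to form a 2D mesh, which is configured through the parallel section of the configuration file rather than in the model code. A hedged sketch follows, assuming the mode string '2d' and a tensor-parallel size that is a perfect square (for example, 4 GPUs forming a 2 x 2 mesh); check the documentation for the exact field names.

# config.py (hypothetical parallel settings for the 2D example;
# field names and the '2d' mode string are assumptions)
parallel = dict(
    pipeline=1,                      # no pipeline parallelism
    tensor=dict(size=4, mode='2d'),  # 4 GPUs arranged as a 2 x 2 mesh
)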

Features

Colossal-AI provides a collection of parallel training components. Our goal is to let you write distributed deep learning models just as you would write a single-GPU model, with friendly tools to kickstart distributed training in a few lines.

Cite Us

@article{bian2021colossal,
  title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
  author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
  journal={arXiv preprint arXiv:2110.14883},
  year={2021}
}