Colossal-AI


Colossal-AI: Making big AI models cheaper, easier, and more scalable

Paper | Documentation | Examples | Forum | Blog


| English | 中文 |


Why Colossal-AI

Prof. James Demmel (UC Berkeley): Colossal-AI makes training AI models efficient, easy, and scalable.

(back to top)

Features

Colossal-AI provides a collection of parallel components. We aim to let you write your distributed deep learning models just as you would write a model on your laptop, with user-friendly tools to kickstart distributed training and inference in a few lines of code.
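
For example, here is a minimal training-script sketch, assuming the launch_from_torch entry point and the engine returned by colossalai.initialize as in recent releases (see the documentation for the exact API of your version):

import colossalai
import torch
import torch.nn as nn

# Initialize the distributed environment from the environment variables
# set by torchrun (RANK, WORLD_SIZE, MASTER_ADDR, ...).
colossalai.launch_from_torch(config={})

model = nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# The engine wraps model/optimizer/criterion and applies the configured
# parallelism and mixed precision transparently.
engine, *_ = colossalai.initialize(model, optimizer, criterion)

engine.train()
x = torch.randn(8, 1024, device='cuda')
y = torch.randn(8, 1024, device='cuda')
engine.zero_grad()
loss = engine.criterion(engine(x), y)
engine.backward(loss)
engine.step()

Launch it with torchrun --nproc_per_node=<num_gpus> train.py.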

(back to top)

Parallel Training Demo

GPT-3

  • 50% savings in GPU resources, with 10.7% acceleration

GPT-2

  • 11x lower GPU memory consumption, and superlinear scaling efficiency with Tensor Parallelism
  • 24x larger model size on the same hardware
  • Over 3x acceleration (see the configuration sketch below)
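
The parallelism behind these numbers is driven by a plain Python configuration file. A hypothetical config.py combining pipeline and 2D tensor parallelism might look like the following (field names follow the convention in the Colossal-AI docs; check them for the options your version supports):

# config.py
parallel = dict(
    pipeline=2,                      # 2-stage pipeline parallelism
    tensor=dict(size=4, mode='2d'),  # 2D tensor parallelism across 4 GPUs
)

The config is then passed at launch time, e.g. colossalai.launch_from_torch(config='./config.py').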

BERT

  • 2x faster training, or 50% longer sequence length

PaLM

OPT

  • Open Pretrained Transformer (OPT) is a 175-billion-parameter AI language model released by Meta. Because its pretrained weights are publicly available, it enables AI programmers to perform various downstream tasks and application deployments.
  • 45% speedup when fine-tuning OPT, at low cost and in a few lines of code. [Example] [Online Serving]

Please visit our documentation and examples for more details.

ViT

  • 14x larger batch size, and 5x faster training with tensor parallelism of degree 64

Recommendation System Models

  • Cached Embedding uses a software cache to train larger embedding tables with a smaller GPU memory budget (see the toy sketch below).
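
The idea, in a toy plain-PyTorch sketch (illustrative only, not the library's CachedEmbedding API): keep the full table in host memory and hold only recently used rows on the GPU.

import torch
import torch.nn as nn

class ToyCachedEmbedding(nn.Module):
    """Full embedding table on CPU; a small FIFO cache of rows on GPU."""

    def __init__(self, num_embeddings: int, dim: int, cache_rows: int = 1024):
        super().__init__()
        self.cpu_table = torch.randn(num_embeddings, dim)  # lives in host RAM
        self.cache_rows = cache_rows
        self.gpu_cache = {}  # row id -> row tensor on GPU

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        rows = []
        for i in ids.tolist():
            if i not in self.gpu_cache:
                if len(self.gpu_cache) >= self.cache_rows:
                    # Evict the oldest cached row (simple FIFO policy).
                    self.gpu_cache.pop(next(iter(self.gpu_cache)))
                self.gpu_cache[i] = self.cpu_table[i].cuda()
            rows.append(self.gpu_cache[i])
        return torch.stack(rows)

The real implementation batches transfers and tracks gradients; this sketch only shows why a small GPU cache lets the table itself outgrow GPU memory.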

(back to top)

Single GPU Training Demo

GPT-2

  • 20x larger model size on the same hardware

  • 120x larger model size on the same hardware (RTX 3080)

PaLM

  • 34x larger model size on the same hardware (a sketch of the underlying setup follows)
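
These single-GPU gains come from heterogeneous memory management (the Gemini module), which moves parameters, gradients, and optimizer states between GPU and CPU on demand. A minimal sketch, assuming the GeminiDDP wrapper and HybridAdam optimizer exposed in recent releases (exact import paths and signatures may differ across versions; see the docs):

import torch
import torch.nn as nn
from colossalai.nn.optimizer import HybridAdam
from colossalai.nn.parallel import GeminiDDP

# A model that may not fit in GPU memory as a whole.
model = nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(24)])

# Gemini decides per-tensor placement (GPU vs. pinned CPU memory) at runtime.
model = GeminiDDP(model,
                  device=torch.device('cuda'),
                  placement_policy='auto',
                  pin_memory=True)
optimizer = HybridAdam(model.parameters(), lr=1e-3)  # CPU/GPU hybrid Adam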

(back to top)

Inference (Energon-AI) Demo

  • Energon-AI: 50% inference acceleration on the same hardware

  • OPT Serving: Try 175-billion-parameter OPT online services

  • BLOOM: Reduce hardware deployment costs of 176-billion-parameter BLOOM by more than 10 times.

(back to top)

Colossal-AI in the Real World

ChatGPT

A low-cost implementation of the ChatGPT training process. [code] [blog]

  • Up to 7.73 times faster for single server training and 1.42 times faster for single-GPU inference

  • Up to 10.3x growth in model capacity on one GPU
  • A mini demo training process requires only 1.62GB of GPU memory (any consumer-grade GPU)

  • Up to 3.7x larger model capacity for fine-tuning on a single GPU
  • While maintaining a sufficiently high running speed (a toy sketch of the PPO objective follows)
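
For context, the RLHF stage behind these numbers fine-tunes the policy with PPO against a learned reward model. A framework-agnostic toy sketch of the clipped PPO objective (illustrative only, not the applications/ChatGPT API):

import torch

def ppo_clip_loss(logprobs: torch.Tensor,
                  old_logprobs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    # Probability ratio between the updated policy and the rollout snapshot.
    ratio = torch.exp(logprobs - old_logprobs)
    # Clipped surrogate objective: take the pessimistic (minimum) bound.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()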

(back to top)

AIGC

Acceleration of AIGC (AI-Generated Content) models such as Stable Diffusion v1 and Stable Diffusion v2.

  • Training: Reduce Stable Diffusion memory consumption by up to 5.6x and hardware cost by up to 46x (from A100 to RTX3060).

  • Inference: Reduce inference GPU memory consumption by 2.5x.

(back to top)

Biomedicine

Acceleration of AlphaFold Protein Structure

  • FastFold: accelerates training and inference on GPU clusters, with faster data processing and inference on sequences containing more than 10,000 residues.

  • xTrimoMultimer: accelerates structure prediction of protein monomers and multimers by 11x.

(back to top)

Installation

Install from PyPI

You can easily install Colossal-AI with the following command. By default, we do not build PyTorch extensions during installation.

pip install colossalai

However, if you want to build the PyTorch extensions during installation, you can set CUDA_EXT=1.

CUDA_EXT=1 pip install colossalai

Otherwise, CUDA kernels will be built at runtime when you actually need them.

We also release a nightly version to PyPI on a weekly basis. It gives you access to unreleased features and bug fixes in the main branch. Installation can be made via

pip install colossalai-nightly

Download From Source

The version of Colossal-AI will be in line with the main branch of the repository. Feel free to raise an issue if you encounter any problems. :)

git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI

# install colossalai
pip install .

By default, we do not compile CUDA/C++ kernels; Colossal-AI builds them at runtime. If you want to install and enable CUDA kernel fusion (compulsory when using fused optimizers):

CUDA_EXT=1 pip install .
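
After installation, you can sanity-check which kernels are available. Assuming the colossalai CLI bundled with the package (run colossalai --help to confirm the subcommands in your version):

colossalai check -i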

(back to top)

Use Docker

Pull from DockerHub

You can directly pull the docker image from our DockerHub page. The image is automatically uploaded upon release.

Build On Your Own

Run the following command to build a docker image from Dockerfile provided.

Building Colossal-AI from scratch requires GPU support; you need to use the Nvidia Docker Runtime as the default when running docker build. More details can be found here. We recommend you install Colossal-AI from our project page directly.

cd ColossalAI
docker build -t colossalai ./docker

Run the following command to start the docker container in interactive mode.

docker run -ti --gpus all --rm --ipc=host colossalai bash

(back to top)

Community

Join the Colossal-AI community on Forum, Slack, and WeChat to share your suggestions, feedback, and questions with our engineering team.

Contributing

If you wish to contribute to this project, please follow the guideline in Contributing.

Thanks so much to all of our amazing contributors!

The order of contributor avatars is randomly shuffled.

(back to top)

CI/CD

We leverage the power of GitHub Actions to automate our development, release and deployment workflows. Please check out this documentation on how the automated workflows are operated.

Cite Us

@article{bian2021colossal,
  title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
  author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
  journal={arXiv preprint arXiv:2110.14883},
  year={2021}
}

Colossal-AI has been accepted as official tutorials at top conferences such as SC, AAAI, PPoPP, and CVPR.

(back to top)