ColossalAI/examples/tutorial
Hongxin Liu b5f9e37c70
[legacy] clean up legacy code (#4743)
2023-09-18 16:31:06 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| auto_parallel | [legacy] clean up legacy code (#4743) | 2023-09-18 16:31:06 +08:00 |
| fastfold | Automated submodule synchronization (#4217) | 2023-07-12 17:35:58 +08:00 |
| hybrid_parallel | [legacy] clean up legacy code (#4743) | 2023-09-18 16:31:06 +08:00 |
| large_batch_optimizer | [legacy] clean up legacy code (#4743) | 2023-09-18 16:31:06 +08:00 |
| new_api | [CI] fix some spelling errors (#3707) | 2023-05-10 17:12:03 +08:00 |
| opt | [legacy] clean up legacy code (#4743) | 2023-09-18 16:31:06 +08:00 |
| sequence_parallel | [legacy] clean up legacy code (#4743) | 2023-09-18 16:31:06 +08:00 |
| .gitignore | [tutorial] added missing dummy dataloader (#1944) | 2022-11-14 04:09:03 -06:00 |
| README.md | [doc] add Series A Funding and NeurIPS news (#4377) | 2023-08-04 17:42:07 +08:00 |
| download_cifar10.py | [tutorial] added data script and updated readme (#1916) | 2022-11-12 16:38:41 +08:00 |
| requirements.txt | [example] add example requirement (#2345) | 2023-01-06 09:03:29 +08:00 |

README.md

Colossal-AI Tutorial Hands-on

This directory contains an abbreviated tutorial prepared for specific events and may not be maintained in real time. For general use of Colossal-AI, please refer to the other examples and the documentation.

Introduction

Welcome to the Colossal-AI tutorial, which has been accepted as an official tutorial at top conferences including NeurIPS, SC, AAAI, PPoPP, CVPR, ISC, and NVIDIA GTC.

Colossal-AI, a unified deep learning system for the big-model era, integrates many advanced technologies such as multi-dimensional tensor parallelism, sequence parallelism, heterogeneous memory management, large-scale optimization, and adaptive task scheduling. With Colossal-AI, users can efficiently and quickly deploy large AI model training and inference, reducing training budgets and the labor cost of learning and deployment.

Colossal-AI | Paper | Documentation | Issue | Slack


Discussion

Discussion about the Colossal-AI project is always welcome! We would love to exchange ideas with the community to help this project grow. If you would like to discuss anything, you can jump into our Slack.

If you encounter any problem while running these tutorials, you may want to raise an issue in this repository.

🛠️ Setup environment

[video] We recommend using conda to create a virtual environment with Python 3.8, e.g. `conda create -n colossal python=3.8`. The installation commands below are for CUDA 11.3; if you have a different CUDA version, please download PyTorch and Colossal-AI builds that match it. You can refer to the Installation guide to set up your environment.
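The steps above can be sketched as follows. This is a minimal, hedged example assuming CUDA 11.3; the exact PyTorch version pin and wheel index URL are assumptions — check the official PyTorch and Colossal-AI installation pages for the commands matching your CUDA version.

```shell
# Create and activate a Python 3.8 virtual environment (command from the text above).
conda create -n colossal python=3.8 -y
conda activate colossal

# Install a PyTorch build for CUDA 11.3.
# NOTE: the version pin and index URL below are illustrative assumptions;
# consult pytorch.org for the command matching your CUDA version.
pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113

# Install Colossal-AI from PyPI.
pip install colossalai
```

After installation, `colossalai check -i` (described below) can be used to confirm the environment is set up correctly.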

You can run `colossalai check -i` to verify that your environment is set up correctly 🕹️.

If you encounter messages like `please install with cuda_ext`, do let us know, as it could be a problem with the distribution wheel. 😥

Then clone the Colossal-AI repository from GitHub.

```shell
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI/examples/tutorial
```