mirror of https://github.com/hpcaitech/ColossalAI
Hotfix/tutorial readme index (#1922)
* [tutorial] removed tutorial index in readme
parent 24cbee0ebe
commit d43a671ad6
@@ -1,4 +1,4 @@
-# Handson 3: Auto-Parallelism with ResNet
+# Auto-Parallelism with ResNet

 ## Prepare Dataset

@@ -1,4 +1,4 @@
-# Handson 1: Multi-dimensional Parallelism with Colossal-AI
+# Multi-dimensional Parallelism with Colossal-AI

 ## Install Titans Model Zoo

@@ -1,4 +1,4 @@
-# Handson 4: Comparison of Large Batch Training Optimization
+# Comparison of Large Batch Training Optimization

 ## Prepare Dataset

@@ -1 +1 @@
-# Handson 5: Fine-tuning and Serving for OPT from Hugging Face
+# Fine-tuning and Serving for OPT from Hugging Face

@@ -1,4 +1,4 @@
-# Handson 2: Sequence Parallelism with BERT
+# Sequence Parallelism with BERT

 In this example, we implemented BERT with sequence parallelism. Sequence parallelism splits the input tensor and intermediate
 activation along the sequence dimension. This method can achieve better memory efficiency and allows us to train with larger batch size and longer sequence length.

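The hunk above keeps the README's one-line description of sequence parallelism. As a rough illustration of that idea only, the following toy sketch splits a tensor along the sequence dimension with plain PyTorch collectives; it is not the tutorial's BERT implementation or ColossalAI's API, and the helper names are made up for this example.

```python
# Toy illustration of the splitting described above: each rank keeps only its
# slice of the sequence dimension, so activation memory scales roughly with
# seq_len / world_size. This is NOT the tutorial's BERT code.
import torch
import torch.distributed as dist


def scatter_along_sequence(x: torch.Tensor, dim: int = 1) -> torch.Tensor:
    """Keep only this rank's chunk of the sequence dimension.

    x: (batch, seq_len, hidden) tensor that is identical on every rank.
    """
    world_size = dist.get_world_size()
    rank = dist.get_rank()
    # torch.chunk may produce uneven chunks if seq_len % world_size != 0;
    # the demo below uses a divisible sequence length.
    return x.chunk(world_size, dim=dim)[rank].contiguous()


def gather_along_sequence(x_local: torch.Tensor, dim: int = 1) -> torch.Tensor:
    """Reassemble the full sequence, e.g. before a loss that needs every token."""
    world_size = dist.get_world_size()
    chunks = [torch.empty_like(x_local) for _ in range(world_size)]
    dist.all_gather(chunks, x_local)
    return torch.cat(chunks, dim=dim)


if __name__ == "__main__":
    # Assumes the script is started with `torchrun --nproc_per_node=N this_file.py`.
    dist.init_process_group(backend="gloo")
    full = torch.randn(2, 128, 64)           # (batch, seq_len, hidden)
    local = scatter_along_sequence(full)     # (batch, 128 // world_size, 64)
    restored = gather_along_sequence(local)  # (batch, 128, 64) again
    assert restored.shape == full.shape
```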
@@ -140,4 +140,3 @@ machine setting.
 launch_from_slurm` or `colossalai.launch_from_openmpi` as it is easier to use SLURM and OpenMPI
 to start multiple processes over multiple nodes. If you have your own launcher, you can fall back
 to the default `colossalai.launch` function.

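The context above mentions three launch utilities. As a hedged usage sketch, assuming the ColossalAI release these tutorials target (where the launch helpers accept a `config` plus rendezvous information), the argument names below follow that era's documentation and should be re-checked against the installed version; the CLI flags are placeholders for this example.

```python
# Sketch of the launch options referenced in the hunk above (argument names
# assumed from the tutorial-era ColossalAI docs; verify against your version).
import argparse

import colossalai


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", type=str, required=True)
    parser.add_argument("--port", type=int, default=29500)
    parser.add_argument("--launcher", choices=["slurm", "openmpi", "manual"], default="slurm")
    parser.add_argument("--rank", type=int, default=0)        # only used with --launcher manual
    parser.add_argument("--world_size", type=int, default=1)  # only used with --launcher manual
    args = parser.parse_args()

    config = dict()  # the tutorials normally load this from a separate config file

    if args.launcher == "slurm":
        # `srun` provides the rank/world-size environment variables.
        colossalai.launch_from_slurm(config=config, host=args.host, port=args.port)
    elif args.launcher == "openmpi":
        # `mpirun` provides OMPI_* environment variables with rank information.
        colossalai.launch_from_openmpi(config=config, host=args.host, port=args.port)
    else:
        # Custom launcher: fall back to the default `colossalai.launch` and pass
        # rank and world size explicitly.
        colossalai.launch(config=config, rank=args.rank, world_size=args.world_size,
                          host=args.host, port=args.port, backend="nccl")


if __name__ == "__main__":
    main()
```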