# DreamBooth training example
[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text-to-image models like Stable Diffusion given just a few (3~5) images of a subject.

The `train_dreambooth.py` script shows how to implement the training procedure and adapt it for Stable Diffusion.

## Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

```bash
pip install -r requirements_colossalai.txt
```
## Dataset for Teyvat BLIP captions
Dataset used to train the [Teyvat characters text-to-image model](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion).

BLIP-generated captions for character images from the [Genshin Impact fandom wiki](https://genshin-impact.fandom.com/wiki/Character#Playable_Characters) and the [biligame wiki for Genshin Impact](https://wiki.biligame.com/ys/%E8%A7%92%E8%89%B2).

For each row the dataset contains `image` and `text` keys. `image` is a varying-size PIL png, and `text` is the accompanying text caption. Only a train split is provided.

The `text` includes the tags `Teyvat`, `Name`, `Element`, `Weapon`, `Region`, `Model type`, and `Description`; the `Description` field is captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
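To take a quick look at the data, you can load it with the Hugging Face `datasets` library (a minimal sketch; it assumes the dataset is hosted as `Fazzie/Teyvat`, matching the image URLs below):

```python
from datasets import load_dataset

# Load the Teyvat BLIP-captions dataset (only a train split is provided).
dataset = load_dataset("Fazzie/Teyvat", split="train")

sample = dataset[0]    # each row holds a PIL image and its caption
print(sample["text"])  # e.g. "Teyvat, Name:Ganyu, Element:Cryo, ..."

# The caption is a comma-separated tag list; split it back into fields.
tags = sample["text"].split(", ")
fields = dict(part.split(":", 1) for part in tags[1:])
print(fields["Name"], fields["Description"])
```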
### Examples
<img src="https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Ganyu_001.png" title="Ganyu_001.png" style="max-width: 20%;">

> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes

<img src="https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Ganyu_002.png" title="Ganyu_002.png" style="max-width: 20%;">

> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes

<img src="https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Keqing_003.png" title="Keqing_003.png" style="max-width: 20%;">

> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:a anime girl with long white hair and blue eyes

<img src="https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Keqing_004.png" title="Keqing_004.png" style="max-width: 20%;">

> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:an anime character wearing a purple dress and cat ears
## Training
By keeping model data in both CPU and GPU memory and moving it to the computing device only when necessary, [Gemini](https://www.colossalai.org/docs/advanced_tutorials/meet_gemini), the heterogeneous memory manager of [Colossal-AI](https://github.com/hpcaitech/ColossalAI), can break through the GPU memory wall by using GPU and CPU memory (CPU DRAM or NVMe SSD) at the same time. Moreover, the model scale can be further improved by combining heterogeneous training with other parallel approaches, such as data parallelism, tensor parallelism, and pipeline parallelism.

The argument `placement` can be one of:

- `cpu`: the required GPU RAM can be minimized to 6GB, but training slows down;
- `cuda`: GPU memory can still be reduced by about half while training stays fast;
- `auto`: a more balanced trade-off between speed and memory.

**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export OUTPUT_DIR="path-to-save-model"

torchrun --nproc_per_node 2 train_dreambooth_colossalai.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400 \
  --placement="cuda"
```
### Training with prior-preservation loss
Prior-preservation is used to avoid overfitting and language drift. Refer to the paper to learn more about it. For prior-preservation, we first generate images using the model with a class prompt and then use those during training along with our data.

According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior-preservation; 200-300 works well for most cases. The `num_class_images` flag sets the number of images to generate with the class prompt. You can place existing images in `class_data_dir`, and the training script will generate any additional images so that `num_class_images` are present in `class_data_dir` at training time.
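If you already keep some class images on disk, a quick count shows how many more the script will generate (a minimal sketch; `path-to-class-images` is the same placeholder used in the command below):

```python
import os

# Images already present in class_data_dir; the training script generates
# the remainder, up to --num_class_images.
class_dir = "path-to-class-images"
print(len(os.listdir(class_dir)))
```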
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

torchrun --nproc_per_node 2 train_dreambooth_colossalai.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```
### Fine-tune text encoder with the UNet
The script also allows fine-tuning the `text_encoder` along with the `unet`. It has been observed experimentally that fine-tuning the `text_encoder` gives much better results, especially on faces. Pass the `--train_text_encoder` argument to the script to enable training the `text_encoder`.

___Note: Training the text encoder requires more memory; with this option the training won't fit on a 16GB GPU. It needs at least 24GB VRAM.___
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_text_encoder \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --use_8bit_adam \
  --gradient_checkpointing \
  --learning_rate=2e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```
## Inference
Once you have trained a model using the above command, inference can be done simply with the `StableDiffusionPipeline`. Make sure to include the `identifier` (e.g. `sks` in the above example) in your prompt.
```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "path-to-your-trained-model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]

image.save("dog-bucket.png")
```
## Dreambooth for the inpainting model
The same training recipe can be applied to the inpainting model with the `train_dreambooth_inpaint.py` script:

```bash
export MODEL_NAME="runwayml/stable-diffusion-inpainting"
export INSTANCE_DIR="path-to-instance-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_dreambooth_inpaint.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400
```
The script is also compatible with prior-preservation loss and gradient checkpointing.
### Fine-tune text encoder with the UNet
The script also allows fine-tuning the `text_encoder` along with the `unet`. It has been observed experimentally that fine-tuning the `text_encoder` gives much better results, especially on faces. Pass the `--train_text_encoder` argument to the script to enable training the `text_encoder`.

___Note: Training the text encoder requires more memory; with this option the training won't fit on a 16GB GPU. It needs at least 24GB VRAM.___
```bash
export MODEL_NAME="runwayml/stable-diffusion-inpainting"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_dreambooth_inpaint.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_text_encoder \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --use_8bit_adam \
  --gradient_checkpointing \
  --learning_rate=2e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```
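For the inpainting model, inference goes through `StableDiffusionInpaintPipeline`, which additionally takes an input image and a mask of the region to repaint (a minimal sketch; the `dog.png` and `dog_mask.png` file names are placeholders):

```python
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image
import torch

model_id = "path-to-your-trained-model"
pipe = StableDiffusionInpaintPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Input image plus a mask marking the region to repaint (white = inpaint).
init_image = Image.open("dog.png").convert("RGB").resize((512, 512))
mask_image = Image.open("dog_mask.png").convert("RGB").resize((512, 512))

prompt = "A photo of sks dog in a bucket"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, guidance_scale=7.5).images[0]

image.save("dog-bucket-inpaint.png")
```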