DreamBooth by Colossal-AI

DreamBooth is a method to personalize text-to-image models like Stable Diffusion given just a few (3–5) images of a subject. The train_dreambooth_colossalai.py script shows how to implement the training procedure and adapt it for Stable Diffusion.

By keeping model data in both CPU and GPU memory and moving it to the computing device only when necessary, Gemini, the heterogeneous memory manager of Colossal-AI, can break through the GPU memory wall by using GPU and CPU memory (CPU DRAM or NVMe SSD) together. Moreover, the model scale can be increased further by combining heterogeneous training with other parallel approaches, such as data, tensor, and pipeline parallelism.
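The idea above can be illustrated with a toy sketch (this is NOT the ColossalAI/Gemini API, just a conceptual model): parameters live in a large, slow CPU pool and are copied into a small, fast GPU pool only while they are needed, with eviction back to CPU when the GPU working set is full.

```python
class ToyMemoryManager:
    """Toy illustration of heterogeneous memory management, not real ColossalAI code."""

    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity  # max number of params resident on "GPU"
        self.cpu_pool = {}                # name -> data, the large slow pool
        self.gpu_pool = {}                # name -> data, the small fast working set

    def register(self, name, data):
        # All parameters start out resident in CPU memory.
        self.cpu_pool[name] = data

    def fetch(self, name):
        """Bring a parameter onto the 'GPU' before compute, evicting if full."""
        if name not in self.gpu_pool:
            while len(self.gpu_pool) >= self.gpu_capacity:
                evicted, data = self.gpu_pool.popitem()
                self.cpu_pool[evicted] = data  # offload back to CPU
            self.gpu_pool[name] = self.cpu_pool.pop(name)
        return self.gpu_pool[name]


mgr = ToyMemoryManager(gpu_capacity=2)
for i in range(4):
    mgr.register(f"layer{i}.weight", [0.0] * 8)
for i in range(4):  # a forward pass touches layers in order
    mgr.fetch(f"layer{i}.weight")
print(len(mgr.gpu_pool))  # the GPU working set never exceeds capacity: 2
```

The real Gemini manager works at chunk granularity with far more sophisticated policies, but the capacity-bounded working set is the core idea that lets models larger than GPU memory train.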

Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

pip install -r requirement_colossalai.txt

Install colossalai

pip install colossalai==0.2.0+torch1.12cu11.3 -f https://release.colossalai.org

From source

git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
python setup.py install

Dataset for Teyvat BLIP captions

This dataset is used to train the Teyvat characters text-to-image model.

BLIP-generated captions for character images from the genshin-impact fandom wiki and the biligame wiki for Genshin Impact.

Each row of the dataset contains image and text keys: image is a varying-size PIL PNG, and text is the accompanying caption. Only a train split is provided.

The text includes the tags Teyvat, Name, Element, Weapon, Region, Model type, and Description; the Description is generated with the pre-trained BLIP model.
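As a hedged sketch of working with such rows: each row exposes "image" and "text" keys, and assuming the tags appear as comma-separated "Key: Value" pairs (the exact caption format may differ from this illustration), the fields could be pulled apart like this:

```python
def parse_caption(text):
    """Split a comma-separated caption into tag fields (hypothetical format)."""
    fields = {}
    for part in text.split(","):
        if ":" in part:
            key, value = part.split(":", 1)
            fields[key.strip()] = value.strip()
    return fields


# Hypothetical example row; a real row's "image" key holds a PIL image.
row = {
    "text": ("Teyvat, Name: Hu Tao, Element: Pyro, Weapon: Polearm, "
             "Region: Liyue, Model type: Female, "
             "Description: a girl in a dark outfit holding a polearm"),
}
tags = parse_caption(row["text"])
print(tags["Element"])  # Pyro
```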

Training

The placement argument can be cpu, auto, or cuda. With cpu, the required GPU memory can be reduced to as little as 4 GB, at the cost of slower training; with cuda, you can still cut GPU memory usage by about half while keeping training fast; auto gives a more balanced trade-off between speed and memory.

Note: Change the resolution to 768 if you are using the stable-diffusion-2 768x768 model.

export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export OUTPUT_DIR="path-to-save-model"

torchrun --nproc_per_node 2 train_dreambooth_colossalai.py \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400 \
  --placement="cuda"

Training with prior-preservation loss

Prior-preservation is used to avoid overfitting and language-drift. Refer to the paper to learn more about it. For prior-preservation we first generate images using the model with a class prompt and then use those during training along with our data. According to the paper, it's recommended to generate num_epochs * num_samples images for prior-preservation. 200-300 works well for most cases. The num_class_images flag sets the number of images to generate with the class prompt. You can place existing images in class_data_dir, and the training script will generate any additional images so that num_class_images are present in class_data_dir during training time.
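The top-up behaviour described above can be sketched as follows (this is an illustration of the logic, not the script's actual code, and the helper name is hypothetical): the trainer counts images already present in class_data_dir and only generates the shortfall so that num_class_images images exist at training time.

```python
from pathlib import Path


def images_to_generate(class_data_dir, num_class_images):
    """Return how many class images still need generating (illustrative helper)."""
    existing = len(list(Path(class_data_dir).iterdir()))  # count pre-existing images
    return max(0, num_class_images - existing)


import tempfile

with tempfile.TemporaryDirectory() as d:
    for i in range(120):                   # pretend 120 class images already exist
        (Path(d) / f"img_{i}.png").touch()
    print(images_to_generate(d, 200))      # 80 more would be generated
```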

export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

torchrun --nproc_per_node 2 train_dreambooth_colossalai.py \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=800 \
  --placement="cuda"
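How --prior_loss_weight enters the objective can be sketched with plain floats (the real script computes both terms as MSE over noise predictions on tensors; this is only an illustration of the weighting): the loss on your own instance images is added to a weighted loss on the generated class images.

```python
def combined_loss(instance_loss, prior_loss, prior_loss_weight=1.0):
    """Weighted sum of instance and prior-preservation terms (illustrative)."""
    # With --prior_loss_weight=1.0 both terms contribute equally.
    return instance_loss + prior_loss_weight * prior_loss


print(combined_loss(0.25, 0.5, prior_loss_weight=1.0))  # 0.75
```

Setting the weight to 0 recovers plain DreamBooth training on the instance images alone.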

Inference

Once you have trained a model using the above command, inference can be done simply using the StableDiffusionPipeline. Make sure to include the identifier (e.g. sks in the above example) in your prompt.

from diffusers import StableDiffusionPipeline
import torch

model_id = "path-to-save-model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]

image.save("dog-bucket.png")