[diffusion] update readme (#2214)

HELSON 2022-12-28 16:06:48 +08:00 committed by GitHub
parent d96cc37e32
commit 78a89d9b41
1 changed file with 5 additions and 5 deletions

@@ -52,7 +52,7 @@ export PACKAGE_NAME=pytorch
pip install .
```
-### Install [Colossal-AI v0.1.10](https://colossalai.org/download/) From Our Official Website
+### Install [Colossal-AI v0.1.12](https://colossalai.org/download/) From Our Official Website
```
pip install colossalai==0.1.12+torch1.12cu11.3 -f https://release.colossalai.org
@@ -101,10 +101,10 @@ python main.py --logdir /tmp/ -t -b configs/train_colossalai.yaml
You can change the training config in the yaml file
- accelerator: accelerator type, default 'gpu'
-- devices: device number used for training, default 4
-- max_epochs: max training epochs
-- precision: usefp16 for training or not, default 16, you must use fp16 if you want to apply colossalai
+- devices: device number used for training, default 8
+- max_epochs: max training epochs, default 2
+- precision: the precision type used in training, default 16 (fp16), you must use fp16 if you want to apply colossalai
- more information about the configuration of ColossalAIStrategy can be found [here](https://pytorch-lightning.readthedocs.io/en/latest/advanced/model_parallel.html#colossal-ai)
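A minimal sketch of how the options above might appear in `configs/train_colossalai.yaml`, using the defaults from this commit. The surrounding yaml layout is an assumption (the option names follow the PyTorch Lightning trainer config), not a copy of the actual file:

```yaml
# Hedged sketch: trainer options as described in the README above.
# The "trainer:" nesting is assumed; check configs/train_colossalai.yaml
# in the repo for the actual structure.
trainer:
  accelerator: gpu   # accelerator type
  devices: 8         # number of devices used for training
  max_epochs: 2
  precision: 16      # fp16 is required when using the colossalai strategy
```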
## Finetune Example
### Training on Teyvat Datasets