mirror of https://github.com/hpcaitech/ColossalAI
parent 7d8d825681 · commit 57a3c4db6d
@@ -57,7 +57,7 @@ You can run the `examples/train_rm.sh` script to start reward model training.
You can also use the following command to start training a reward model:
```shell
torchrun --standalone --nproc_per_node=4 train_reward_model.py \
    --pretrain "/path/to/LLaMa-7B/" \
    --model 'llama' \
    --strategy colossalai_zero2 \
```
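The flags above (`--pretrain`, `--model`, `--strategy`) hint at how `train_reward_model.py` reads its arguments. A minimal `argparse` sketch follows — the flag names mirror the command shown, but the `build_parser` helper, defaults, and help strings are assumptions, not the repository's actual parser:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the CLI shown above; the real
    # train_reward_model.py may define more options than these.
    parser = argparse.ArgumentParser(description="Train a reward model")
    parser.add_argument("--pretrain", type=str, required=True,
                        help="path to HF-format pretrained weights")
    parser.add_argument("--model", type=str, default="llama",
                        help="base model architecture")
    parser.add_argument("--strategy", type=str, default="colossalai_zero2",
                        help="distributed training strategy")
    return parser


# Parse the same flags as the torchrun command above.
args = build_parser().parse_args(
    ["--pretrain", "/path/to/LLaMa-7B/",
     "--model", "llama",
     "--strategy", "colossalai_zero2"]
)
print(args.strategy)  # -> colossalai_zero2
```

Under `torchrun`, each of the four local ranks would run this parser with the same argument list.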
@@ -51,6 +51,7 @@ Please ensure you have downloaded HF-format model weights of LLaMA models.
Usage:
```python
import torch
from transformers import LlamaForCausalLM

USE_8BIT = True  # use 8-bit quantization; otherwise, use fp16
```
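In the full example, `USE_8BIT` typically decides which keyword arguments are passed to `LlamaForCausalLM.from_pretrained`. A hedged sketch of that switch — the `quantization_kwargs` helper is hypothetical, and the 8-bit branch assumes the bitsandbytes-backed `load_in_8bit` path in `transformers`:

```python
def quantization_kwargs(use_8bit: bool) -> dict:
    # Hypothetical helper: pick from_pretrained kwargs for 8-bit vs fp16.
    if use_8bit:
        # 8-bit quantization; transformers' load_in_8bit requires the
        # bitsandbytes package and places weights with device_map="auto".
        return {"load_in_8bit": True, "device_map": "auto"}
    # Plain half precision; transformers accepts "float16" as a
    # torch_dtype string.
    return {"torch_dtype": "float16"}


USE_8BIT = True
kwargs = quantization_kwargs(USE_8BIT)
# e.g. model = LlamaForCausalLM.from_pretrained("/path/to/LLaMa-7B/", **kwargs)
```

With `USE_8BIT = True` this selects 8-bit weights plus automatic device placement; otherwise the model loads in fp16.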