diff --git a/applications/Chat/examples/README.md b/applications/Chat/examples/README.md
index 49401ec30..6c02606ea 100644
--- a/applications/Chat/examples/README.md
+++ b/applications/Chat/examples/README.md
@@ -57,7 +57,7 @@ You can run the `examples/train_rm.sh` to start a reward model training.
 You can also use the following cmd to start training a reward model.
 ```
-torchrun --standalone --nproc_per_node=4 train_reward_model.py
+torchrun --standalone --nproc_per_node=4 train_reward_model.py \
     --pretrain "/path/to/LLaMa-7B/" \
     --model 'llama' \
     --strategy colossalai_zero2 \
diff --git a/applications/Chat/inference/README.md b/applications/Chat/inference/README.md
index 6c23bc73c..434677c98 100644
--- a/applications/Chat/inference/README.md
+++ b/applications/Chat/inference/README.md
@@ -51,6 +51,7 @@ Please ensure you have downloaded HF-format model weights of LLaMA models.
 Usage:
 
 ```python
+import torch
 from transformers import LlamaForCausalLM
 
 USE_8BIT = True # use 8-bit quantization; otherwise, use fp16