From 57a3c4db6d5af3f3d46dfcbc52afb55cb5415298 Mon Sep 17 00:00:00 2001
From: kingkingofall <83848390+kingkingofall@users.noreply.github.com>
Date: Thu, 6 Apr 2023 10:58:53 +0800
Subject: [PATCH] [chat]fix readme (#3429)

* fix stage 2

fix stage 2

* add torch
---
 applications/Chat/examples/README.md  | 2 +-
 applications/Chat/inference/README.md | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/applications/Chat/examples/README.md b/applications/Chat/examples/README.md
index 49401ec30..6c02606ea 100644
--- a/applications/Chat/examples/README.md
+++ b/applications/Chat/examples/README.md
@@ -57,7 +57,7 @@ You can run the `examples/train_rm.sh` to start a reward model training.
 
 You can also use the following cmd to start training a reward model.
 ```
-torchrun --standalone --nproc_per_node=4 train_reward_model.py
+torchrun --standalone --nproc_per_node=4 train_reward_model.py \
     --pretrain "/path/to/LLaMa-7B/" \
     --model 'llama' \
     --strategy colossalai_zero2 \
diff --git a/applications/Chat/inference/README.md b/applications/Chat/inference/README.md
index 6c23bc73c..434677c98 100644
--- a/applications/Chat/inference/README.md
+++ b/applications/Chat/inference/README.md
@@ -51,6 +51,7 @@ Please ensure you have downloaded HF-format model weights of LLaMA models.
 
 Usage:
 ```python
+import torch
 from transformers import LlamaForCausalLM
 
 USE_8BIT = True # use 8-bit quantization; otherwise, use fp16
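
The second hunk adds the `torch` import that the inference snippet needs: its fp16 branch references `torch.float16`, which raises a `NameError` without the import. A minimal sketch of that branching logic (the `load_kwargs` name and the `device_map` setting are illustrative additions, not part of the patched README):

```python
import torch

USE_8BIT = True  # use 8-bit quantization; otherwise, use fp16

# Choose keyword arguments for a from_pretrained-style loading call.
# The fp16 branch touches torch.float16, so `import torch` is required
# even when the snippet only shows the 8-bit path by default.
if USE_8BIT:
    load_kwargs = {"load_in_8bit": True, "device_map": "auto"}
else:
    load_kwargs = {"torch_dtype": torch.float16}

print(load_kwargs)
```

These kwargs would then be passed to `LlamaForCausalLM.from_pretrained(...)` along with the weight path.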