From 497a48287be149fa4b9c981b4660cbbae50a1d30 Mon Sep 17 00:00:00 2001
From: Shuo Zhang
Date: Thu, 6 Jul 2023 16:02:22 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 978b11e..7f0f858 100644
--- a/README.md
+++ b/README.md
@@ -77,9 +77,9 @@ InternLM 7B and InternLM 7B Chat, trained using InternLM, have been open-sourced
 ### Import from Transformers
 To load the InternLM 7B Chat model using Transformers, use the following code:
 ```python
->>> from transformers import AutoTokenizer, AutoModel
+>>> from transformers import AutoTokenizer, AutoModelForCausalLM
 >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
->>> model = AutoModel.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True, device='cuda')
+>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
 >>> model = model.eval()
 >>> response, history = model.chat(tokenizer, "hello", history=[])
 >>> print(response)