Merge pull request #4 from 00INDEX/patch-1

doc(README.md): update AutoModelForCausalLM
Xingcheng Zhang 2023-07-06 16:23:57 +08:00 committed by GitHub
commit 64864bca3c
1 changed file with 2 additions and 2 deletions

@@ -77,9 +77,9 @@ InternLM 7B and InternLM 7B Chat, trained using InternLM, have been open-sourced
 ### Import from Transformers
 To load the InternLM 7B Chat model using Transformers, use the following code:
 ```python
->>> from transformers import AutoTokenizer, AutoModel
+>>> from transformers import AutoTokenizer, AutoModelForCausalLM
 >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
->>> model = AutoModel.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True, device='cuda')
+>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
 >>> model = model.eval()
 >>> response, history = model.chat(tokenizer, "hello", history=[])
 >>> print(response)
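
After this patch, the README's loading snippet amounts to the following standalone script (a sketch assuming a CUDA-capable GPU and an installed `transformers` package; `model.chat` is provided by InternLM's remote code pulled in via `trust_remote_code=True`, not by the core Transformers API):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model; trust_remote_code=True fetches InternLM's
# custom modeling/chat code from the Hugging Face repository.
tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
model = model.eval()

# model.chat() returns the reply and the updated conversation history.
response, history = model.chat(tokenizer, "hello", history=[])
print(response)
```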