mirror of https://github.com/InternLM/InternLM
commit 64864bca3c
InternLM 7B and InternLM 7B Chat, trained using InternLM, have been open-sourced.
### Import from Transformers
To load the InternLM 7B Chat model using Transformers, use the following code:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
>>> model = model.eval()
>>> response, history = model.chat(tokenizer, "hello", history=[])
>>> print(response)
```