mirror of https://github.com/InternLM/InternLM
update readme related to internlm-chat-7b-v1.1 (#214)
parent 58108413bd
commit 075648cd70
@@ -40,6 +40,10 @@ InternLM has open-sourced a 7 billion parameter base model and a chat model tailored for practical
 
 Additionally, a lightweight training framework is offered to support model pre-training without the need for extensive dependencies. With a single codebase, it supports pre-training on large-scale clusters with thousands of GPUs, and fine-tuning on a single GPU while achieving remarkable performance optimizations. InternLM achieves nearly 90% acceleration efficiency during training on 1024 GPUs.
 
+## News
+
+InternLM-7B-Chat v1.1 is released with code interpreter and function calling capability. You can try it with [Lagent](https://github.com/InternLM/lagent).
+
 ## InternLM-7B
 
 ### Performance Evaluation
@@ -80,8 +84,8 @@ To load the InternLM 7B Chat model using Transformers
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
->>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
+>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True)
+>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True).cuda()
 >>> model = model.eval()
 >>> response, history = model.chat(tokenizer, "こんにちは", history=[])
 >>> print(response)
@@ -90,8 +90,8 @@ InternLM, i.e. the 书生·浦语 large model, contains a 7 billion parameter
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
->>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
+>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True)
+>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True).cuda()
 >>> model = model.eval()
 >>> response, history = model.chat(tokenizer, "你好", history=[])
 >>> print(response)
@@ -47,8 +47,7 @@ Additionally, a lightweight training framework is offered to support model pre-t
 
 ## News
 
-InternLM-7B-Chat v1.1 is released with code interpreter and function calling capability. You can try it with [Lagent](https://github.com/InternLM/lagent)
-
+InternLM-7B-Chat v1.1 is released with code interpreter and function calling capability. You can try it with [Lagent](https://github.com/InternLM/lagent).
 
 ## InternLM-7B
 
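The News entry points readers at Lagent for the new code-interpreter and function-calling capability. As a rough sketch of what that looks like, the snippet below wires the v1.1 model to a ReAct-style agent with a Python interpreter tool; the names `ReAct`, `HFTransformer`, `ActionExecutor`, and `PythonInterpreter` follow Lagent's early API and are assumptions here, so check the Lagent README for the current interface.

```python
# Hedged sketch, not the repository's own example. It assumes Lagent's
# early ReAct-style API; the names ReAct, HFTransformer, ActionExecutor,
# and PythonInterpreter may differ in current Lagent releases.
from lagent.agents import ReAct
from lagent.actions import ActionExecutor, PythonInterpreter
from lagent.llms import HFTransformer

# Use the v1.1 chat model (the release announced above) as the backend.
llm = HFTransformer('internlm/internlm-chat-7b-v1.1')

# Give the agent a Python interpreter so it can execute the code it writes.
chatbot = ReAct(
    llm=llm,
    action_executor=ActionExecutor(actions=[PythonInterpreter()]),
)

# The agent decides on its own when to call the interpreter while answering.
response = chatbot.chat('Use Python to solve x**2 - 5*x + 6 = 0.')
print(response.response)  # assumed AgentReturn-style result object
```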
@@ -91,8 +90,8 @@ To load the InternLM 7B Chat model using Transformers, use the following code:
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
->>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
+>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True)
+>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True).cuda()
 >>> model = model.eval()
 >>> response, history = model.chat(tokenizer, "hello", history=[])
 >>> print(response)
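Because `model.chat` returns the running conversation history alongside each reply, a follow-up turn can simply feed that history back in. A minimal continuation of the snippet above (the second prompt is only an illustration):

```python
>>> # Pass the returned history back in to keep the conversation going.
>>> response, history = model.chat(tokenizer, "please give three suggestions about time management", history=history)
>>> print(response)
```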