From 075648cd70140e56dc8fd8f9cea6b4f2d113a7a4 Mon Sep 17 00:00:00 2001
From: Kai Chen
Date: Tue, 22 Aug 2023 08:08:44 +0800
Subject: [PATCH] update readme related to internlm-chat-7b-v1.1 (#214)

---
 README-ja-JP.md   | 8 ++++++--
 README-zh-Hans.md | 4 ++--
 README.md         | 7 +++----
 3 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/README-ja-JP.md b/README-ja-JP.md
index 1a01d8f..6c3dab4 100644
--- a/README-ja-JP.md
+++ b/README-ja-JP.md
@@ -40,6 +40,10 @@ InternLM は、70 億のパラメータを持つベースモデルと、実用
 さらに、大規模な依存関係を必要とせずにモデルの事前学習をサポートする軽量な学習フレームワークが提供されます。単一のコードベースで、数千の GPU を持つ大規模クラスタでの事前学習と、単一の GPU での微調整をサポートし、顕著な性能最適化を達成します。InternLM は、1024GPU でのトレーニングにおいて 90% 近いアクセラレーション効率を達成しています。
 
+## ニュース
+
+InternLM-7B-Chat v1.1 は、コードインタプリタと関数呼び出し機能を備えてリリースされました。[Lagent](https://github.com/InternLM/lagent) で試すことができます。
+
 ## InternLM-7B
 
 ### パフォーマンス評価
 
@@ -80,8 +84,8 @@ Transformers を使用して InternLM 7B チャットモデルをロードする
 
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
->>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
+>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True)
+>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True).cuda()
 >>> model = model.eval()
 >>> response, history = model.chat(tokenizer, "こんにちは", history=[])
 >>> print(response)
diff --git a/README-zh-Hans.md b/README-zh-Hans.md
index e7c0a45..75c362b 100644
--- a/README-zh-Hans.md
+++ b/README-zh-Hans.md
@@ -90,8 +90,8 @@ InternLM ，即书生·浦语大模型，包含面向实用场景的70亿参数
 
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
->>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
+>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True)
+>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True).cuda()
 >>> model = model.eval()
 >>> response, history = model.chat(tokenizer, "你好", history=[])
 >>> print(response)
diff --git a/README.md b/README.md
index b7cd781..78116f8 100644
--- a/README.md
+++ b/README.md
@@ -47,8 +47,7 @@ Additionally, a lightweight training framework is offered to support model pre-t
 
 ## News
 
-InternLM-7B-Chat v1.1 is released with code interpreter and function calling capability. You can try it with [Lagent](https://github.com/InternLM/lagent)
--
+InternLM-7B-Chat v1.1 is released with code interpreter and function calling capability. You can try it with [Lagent](https://github.com/InternLM/lagent).
 
 ## InternLM-7B
 
@@ -91,8 +90,8 @@ To load the InternLM 7B Chat model using Transformers, use the following code:
 
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
->>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
->>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).cuda()
+>>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True)
+>>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b-v1.1", trust_remote_code=True).cuda()
 >>> model = model.eval()
 >>> response, history = model.chat(tokenizer, "hello", history=[])
 >>> print(response)
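
Reviewer note: for anyone who wants to smoke-test the renamed checkpoint end to end, here is a minimal sketch that expands the README snippet into a two-turn session. It is not part of the patch: the `chat` helper and its `(tokenizer, query, history)` signature are taken directly from the diffs above, while `torch_dtype=torch.float16` is a standard `from_pretrained` option added on the assumption that half-precision inference of the 7B weights on a single GPU is acceptable; the follow-up prompt is purely illustrative.

```python
# Minimal smoke test for the new model ID; assumes one CUDA GPU with
# enough memory for the 7B weights in fp16.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "internlm/internlm-chat-7b-v1.1"  # the ID this patch switches to

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # assumption: fp16 inference is acceptable here
    trust_remote_code=True,
).cuda()
model = model.eval()

# `chat` comes from the model's remote code, exactly as in the README
# snippets: it returns the reply and the updated history, so a second
# turn simply feeds the history back in.
response, history = model.chat(tokenizer, "hello", history=[])
print(response)

response, history = model.chat(
    tokenizer, "summarize what you can do in one sentence", history=history
)
print(response)
```

If both replies print, the rename is consistent with the `model.chat` interface shown unchanged in the surrounding context lines.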