diff --git a/README-ja-JP.md b/README-ja-JP.md
index 40a1167..5e5b7db 100644
--- a/README-ja-JP.md
+++ b/README-ja-JP.md
@@ -123,20 +123,20 @@ streamlit run web_demo.py
 
 1. First, install LMDeploy:
 
-```
- python3 -m pip install lmdeploy
+```shell
+python3 -m pip install lmdeploy
 ```
 
 2. Use the following command for quick deployment:
 
-```
- python3 -m lmdeploy.serve.turbomind.deploy InternLM-7B /path/to/internlm-7b/model hf
+```shell
+lmdeploy chat turbomind InternLM/internlm-chat-7b --model-name internlm-chat-7b
 ```
 
 3. After exporting the model, you can start a server and chat with the deployed model using the following command:
 
-```
- python3 -m lmdeploy.serve.client {server_ip_addresss}:33337
+```shell
+lmdeploy serve api_server InternLM/internlm-chat-7b --model-name internlm-chat-7b
 ```
 
 [LMDeploy](https://github.com/InternLM/LMDeploy) provides a complete workflow for deploying InternLM. For more details on deploying InternLM, please refer to the [deployment tutorial](https://github.com/InternLM/LMDeploy).
diff --git a/README-zh-Hans.md b/README-zh-Hans.md
index f78e5f1..62d58d0 100644
--- a/README-zh-Hans.md
+++ b/README-zh-Hans.md
@@ -213,23 +213,22 @@ streamlit run web_demo.py
 
 1. First, install LMDeploy:
 
-   ```
+   ```shell
    python3 -m pip install lmdeploy
    ```
 
+2. Chat with InternLM interactively via the command line, directly on your local machine:
 
-2. Use the following command for quick deployment:
-
-   ```
-   python3 -m lmdeploy.serve.turbomind.deploy InternLM-7B /path/to/internlm-7b/model hf
+   ```shell
+   lmdeploy chat turbomind InternLM/internlm-chat-7b --model-name internlm-chat-7b
    ```
 
-3. After exporting the model, you can directly start a server with the following command and chat with the deployed model
+3. Alternatively, start an inference server with the following command:
 
+   ```shell
+   lmdeploy serve api_server InternLM/internlm-chat-7b --model-name internlm-chat-7b
    ```
-   python3 -m lmdeploy.serve.client {server_ip_addresss}:33337
-   ```
+Please refer to [this guide](https://github.com/InternLM/lmdeploy/blob/main/docs/en/restful_api.md) for details of the api_server RESTful API; more deployment tutorials can be found [here](https://github.com/InternLM/LMDeploy).
 
-[LMDeploy](https://github.com/InternLM/LMDeploy) supports the complete workflow for deploying InternLM. Please refer to the [deployment tutorial](https://github.com/InternLM/LMDeploy) for more details on deploying InternLM.
 
 ## Fine-tuning & Training
diff --git a/README.md b/README.md
index 76b5987..e509963 100644
--- a/README.md
+++ b/README.md
@@ -212,23 +212,22 @@ We use [LMDeploy](https://github.com/InternLM/LMDeploy) to complete the one-click deployment of InternLM
 
 1. First, install LMDeploy:
 
-```
- python3 -m pip install lmdeploy
+```shell
+python3 -m pip install lmdeploy
 ```
 
-2. Use the following command for quick deployment:
+2. Use the following command for interactive communication with the `internlm-chat-7b` model on localhost:
 
-```
- python3 -m lmdeploy.serve.turbomind.deploy InternLM-7B /path/to/internlm-7b/model hf
+```shell
+lmdeploy chat turbomind InternLM/internlm-chat-7b --model-name internlm-chat-7b
 ```
 
-3. After exporting the model, you can start a server and have a conversation with the deployed model using the following command:
+3. Besides chatting via the command line, you can start the lmdeploy `api_server` as below:
 
+```shell
+lmdeploy serve api_server InternLM/internlm-chat-7b --model-name internlm-chat-7b
 ```
- python3 -m lmdeploy.serve.client {server_ip_addresss}:33337
-```
-
-[LMDeploy](https://github.com/InternLM/LMDeploy) provides a complete workflow for deploying InternLM. Please refer to the [deployment tutorial](https://github.com/InternLM/LMDeploy) for more details on deploying InternLM.
+For a comprehensive overview of the `api_server` RESTful API, please consult [this guide](https://github.com/InternLM/lmdeploy/blob/main/docs/en/restful_api.md). For additional deployment tutorials, see [here](https://github.com/InternLM/LMDeploy).
 
 ## Fine-tuning & Training
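The hunks above replace the old `lmdeploy.serve.turbomind.deploy` / `lmdeploy.serve.client` workflow with the `lmdeploy chat turbomind` and `lmdeploy serve api_server` commands. Once the `api_server` is running, a single HTTP request is a quick way to smoke-test it. The sketch below is illustrative and not part of the patch: the default port 23333, the OpenAI-style `/v1/chat/completions` route, and the JSON fields are assumptions that should be checked against the linked restful_api.md for the lmdeploy version you deploy.

```shell
# Hypothetical smoke test for the deployed api_server.
# Assumptions: the server runs on localhost at lmdeploy's default port 23333
# and exposes an OpenAI-style /v1/chat/completions route; adjust the host,
# port, route, and payload fields to match the version documented in
# restful_api.md.
curl http://localhost:23333/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "internlm-chat-7b",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```

If the route differs in your version, the interactive Swagger page that the api_server serves at its root lists the routes that are actually available.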