[Doc] update deployment guide to keep consistency with lmdeploy (#136)

* update deployment guide

* fix error
lvhan028 2023-07-31 14:42:07 +08:00 committed by GitHub
parent 6b6295aea3
commit fbe6ef1da5
2 changed files with 22 additions and 20 deletions


@@ -119,19 +119,20 @@ streamlit run web_demo.py
 1. First, install LMDeploy:
-   ```
+   ```bash
    python3 -m pip install lmdeploy
    ```
 2. Use the following command for quick deployment:
-   ```
-   python3 -m lmdeploy.serve.turbomind.deploy InternLM-7B /path/to/internlm-7b/model hf
+   ```bash
+   python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-7b/model
    ```
-3. After exporting the model, you can directly start a service with the following command and chat with the deployed model:
-   ```
+3. After exporting the model, you can start the server with the following commands and chat with the AI on the client:
+   ```bash
+   bash workspace/service_docker_up.sh
    python3 -m lmdeploy.serve.client {server_ip_address}:33337
    ```


@@ -125,19 +125,20 @@ We use [LMDeploy](https://github.com/InternLM/LMDeploy) to complete the one-clic
 1. First, install LMDeploy:
-   ```
+   ```bash
    python3 -m pip install lmdeploy
    ```
 2. Use the following command for quick deployment:
-   ```
-   python3 -m lmdeploy.serve.turbomind.deploy InternLM-7B /path/to/internlm-7b/model hf
+   ```bash
+   python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b/model
    ```
 3. After exporting the model, you can start a server and have a conversation with the deployed model using the following command:
-   ```
+   ```bash
+   bash workspace/service_docker_up.sh
    python3 -m lmdeploy.serve.client {server_ip_address}:33337
    ```
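The updated guide boils the deployment down to three commands: convert the weights, start the server, connect a client. The sketch below strings them together as a dry run that only prints the commands; `MODEL_PATH` and `SERVER_ADDR` are placeholder assumptions, not values from the guide, and must be replaced before a real run:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the deployment steps from the updated guide.
# MODEL_PATH and SERVER_ADDR are placeholders; replace them for a real run.
MODEL_PATH=/path/to/internlm-chat-7b/model
SERVER_ADDR=0.0.0.0

# Step 2: convert the HF weights into a TurboMind workspace.
DEPLOY_CMD="python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b ${MODEL_PATH}"
# Step 3a: bring up the inference server from the exported workspace.
SERVE_CMD="bash workspace/service_docker_up.sh"
# Step 3b: attach a chat client to the server on port 33337.
CLIENT_CMD="python3 -m lmdeploy.serve.client ${SERVER_ADDR}:33337"

# Echo instead of executing, so the sketch runs without lmdeploy installed.
printf '%s\n' "$DEPLOY_CMD" "$SERVE_CMD" "$CLIENT_CMD"
```

Running the echoed commands in order (deploy, then server, then client) reproduces the three steps the commit documents in both README versions.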