[Doc] update deployment guide to keep consistency with lmdeploy (#136)

* update deployment guide

* fix error
lvhan028 2023-07-31 14:42:07 +08:00 committed by GitHub
parent 6b6295aea3
commit fbe6ef1da5
2 changed files with 22 additions and 20 deletions


@@ -119,21 +119,22 @@ streamlit run web_demo.py
1. First, install LMDeploy:
```bash
python3 -m pip install lmdeploy
```
2. Use the following command for quick deployment:
```bash
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b/model
```
3. After exporting the model, you can start a server and chat with the deployed model from a client using the following commands:
```bash
bash workspace/service_docker_up.sh
python3 -m lmdeploy.serve.client {server_ip_address}:33337
```
[LMDeploy](https://github.com/InternLM/LMDeploy) supports the complete deployment workflow for InternLM. Please refer to the [deployment tutorial](https://github.com/InternLM/LMDeploy) for more details on deploying InternLM.


@@ -125,21 +125,22 @@ We use [LMDeploy](https://github.com/InternLM/LMDeploy) to complete the one-click deployment of InternLM.
1. First, install LMDeploy:
```bash
python3 -m pip install lmdeploy
```
2. Use the following command for quick deployment:
```bash
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b/model
```
3. After exporting the model, you can start a server and chat with the deployed model using the following commands:
```bash
bash workspace/service_docker_up.sh
python3 -m lmdeploy.serve.client {server_ip_address}:33337
```
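Taken together, the three steps above can be sketched as one script. This is a hedged sketch, not part of the guide: `MODEL_PATH` and `SERVER_IP` are placeholder variables introduced here, and the port `33337` is taken from the client command above.

```shell
#!/bin/bash
# Sketch of the full LMDeploy flow for internlm-chat-7b.
# Adjust MODEL_PATH and SERVER_IP for your environment before running.
set -e

MODEL_PATH=/path/to/internlm-chat-7b/model   # placeholder: local HF checkpoint
SERVER_IP=127.0.0.1                          # placeholder: deployment host

# 1. Install LMDeploy
python3 -m pip install lmdeploy

# 2. Convert the model into a TurboMind workspace (written to ./workspace)
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b "$MODEL_PATH"

# 3. Start the server, then connect with the command-line client
bash workspace/service_docker_up.sh
python3 -m lmdeploy.serve.client "${SERVER_IP}:33337"
```

Steps 2 and 3 must run from the same working directory, since the deploy step writes the TurboMind artifacts into `./workspace` and the startup script is read from there.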
[LMDeploy](https://github.com/InternLM/LMDeploy) provides a complete workflow for deploying InternLM. Please refer to the [deployment tutorial](https://github.com/InternLM/LMDeploy) for more details on deploying InternLM.