mirror of https://github.com/InternLM/InternLM
[Doc] update deployment guide to keep consistency with lmdeploy (#136)

* update deployment guide
* fix error

parent 6b6295aea3
commit fbe6ef1da5
@@ -119,21 +119,22 @@ streamlit run web_demo.py

 1. First, install LMDeploy:

-```
-python3 -m pip install lmdeploy
-```
+```bash
+python3 -m pip install lmdeploy
+```

 2. Use the following command for quick deployment:

-```
-python3 -m lmdeploy.serve.turbomind.deploy InternLM-7B /path/to/internlm-7b/model hf
-```
+```bash
+python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-7b/model
+```

-3. After exporting the model, you can start a server directly with the following command and chat with the deployed model
+3. After exporting the model, you can start the server with the following commands and chat with the AI from a client

-```
-python3 -m lmdeploy.serve.client {server_ip_addresss}:33337
-```
+```bash
+bash workspace/service_docker_up.sh
+python3 -m lmdeploy.serve.client {server_ip_addresss}:33337
+```

 [LMDeploy](https://github.com/InternLM/LMDeploy) supports the complete workflow for deploying InternLM. Please refer to the [deployment tutorial](https://github.com/InternLM/LMDeploy) for more details on deploying InternLM.
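Before running step 3, it can help to confirm that the export in step 2 actually produced the startup script the guide invokes. This is a minimal sketch under the assumption (from the commands above) that the deploy step writes its output to `workspace/`; `workspace_ready` is a hypothetical helper name, not part of LMDeploy.

```bash
#!/usr/bin/env bash
# Sanity check: did `lmdeploy.serve.turbomind.deploy` produce the
# `service_docker_up.sh` script that step 3 runs?
# `workspace_ready` is a hypothetical helper, not an LMDeploy command.
workspace_ready() {
  local ws="${1:-workspace}"
  [ -f "${ws}/service_docker_up.sh" ]
}
```

If the check fails, re-run the deploy command from step 2 before starting the server.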

README.md

@@ -125,21 +125,22 @@ We use [LMDeploy](https://github.com/InternLM/LMDeploy) to complete the one-click
 1. First, install LMDeploy:

-```
-python3 -m pip install lmdeploy
-```
+```bash
+python3 -m pip install lmdeploy
+```

 2. Use the following command for quick deployment:

-```
-python3 -m lmdeploy.serve.turbomind.deploy InternLM-7B /path/to/internlm-7b/model hf
-```
+```bash
+python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b/model
+```

 3. After exporting the model, you can start a server and have a conversation with the deployed model using the following command:

-```
-python3 -m lmdeploy.serve.client {server_ip_addresss}:33337
-```
+```bash
+bash workspace/service_docker_up.sh
+python3 -m lmdeploy.serve.client {server_ip_addresss}:33337
+```

 [LMDeploy](https://github.com/InternLM/LMDeploy) provides a complete workflow for deploying InternLM. Please refer to the [deployment tutorial](https://github.com/InternLM/LMDeploy) for more details on deploying InternLM.
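The three steps above can be sketched as one shell script. This is a non-authoritative sketch: `deploy_internlm` and `chat_endpoint` are hypothetical helper names introduced here, and the model path and server IP are placeholders you must replace; only the `lmdeploy` commands themselves come from the guide.

```bash
#!/usr/bin/env bash
# Sketch of the deployment flow above. `deploy_internlm` and `chat_endpoint`
# are hypothetical helpers, not part of LMDeploy.

deploy_internlm() {
  local model_path="$1"  # placeholder, e.g. /path/to/internlm-chat-7b/model
  # 1. Install LMDeploy
  python3 -m pip install lmdeploy
  # 2. Export the model into a TurboMind workspace
  python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b "${model_path}"
  # 3. Start the server using the script generated by the deploy step
  bash workspace/service_docker_up.sh
}

# The chat client connects to the server on port 33337.
chat_endpoint() {
  local server_ip="$1"
  echo "${server_ip}:33337"
}
```

The client would then attach with `python3 -m lmdeploy.serve.client "$(chat_endpoint 10.0.0.1)"`, substituting your real server address.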