doc: update requirements

pull/667/head
RangiLyu 2024-01-26 17:48:14 +08:00
parent 1cb9870cb3
commit 9e60ea0b64
8 changed files with 28 additions and 16 deletions


@@ -124,6 +124,12 @@ The release of InternLM2 series contains two model sizes: 7B and 20B. 7B models
- Based on performance results released on 2024-01-17.
## Requirements
- Python >= 3.8
- PyTorch >= 1.12.0 (2.0.0 and above are recommended)
- Transformers >= 4.34
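The version floors above can be sanity-checked at runtime. Below is a minimal sketch using plain numeric comparison of dotted versions; a real check should use `packaging.version`, which also handles pre-releases and other PEP 440 forms:

```python
import sys

def meets_minimum(installed: str, minimum: str) -> bool:
    """Compare dotted numeric versions, e.g. '2.0.0' >= '1.12.0'."""
    def parts(v: str):
        return [int(p) for p in v.split(".")]
    return parts(installed) >= parts(minimum)

# Python itself can be checked directly against the documented floor.
assert sys.version_info >= (3, 8), "InternLM2 requires Python >= 3.8"

# For packages, compare the installed version against the documented minimum.
print(meets_minimum("2.0.0", "1.12.0"))   # True: meets the PyTorch minimum
print(meets_minimum("4.33.1", "4.34"))    # False: below the Transformers minimum
```

Note that Python compares the integer lists element by element, so `"4.34"` and `"4.34.0"` order correctly without special-casing trailing zeros.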
## Usages
We briefly show the usage with [Transformers](#import-from-transformers), [ModelScope](#import-from-modelscope), and [Web demos](#dialogue).
@@ -183,7 +189,7 @@ print(response)
You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:
```bash
pip install streamlit==1.24.0
pip install streamlit
pip install "transformers>=4.34"
streamlit run ./chat/web_demo.py
```
@@ -192,7 +198,7 @@ streamlit run ./chat/web_demo.py
We use [LMDeploy](https://github.com/InternLM/LMDeploy) for fast deployment of InternLM.
With only 4 lines of code, you can perform `internlm2-chat-7b` inference after `pip install lmdeploy`.
With only 4 lines of code, you can perform `internlm2-chat-7b` inference after `pip install "lmdeploy>=0.2.1"`.
```python
from lmdeploy import pipeline


@@ -122,6 +122,12 @@ The InternLM2 series models are officially released in this repository, with the following features:
- Performance data as of 2024-01-17.
## Requirements
- Python >= 3.8
- PyTorch >= 1.12.0 (2.0.0 and above are recommended)
- Transformers >= 4.34
## Usages
Next, we demonstrate how to run inference with [Transformers](#import-from-transformers), [ModelScope](#import-from-modelscope), and the [Web demo](#dialogue).
@@ -180,7 +186,7 @@ print(response)
You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:
```bash
pip install streamlit==1.24.0
pip install streamlit
pip install "transformers>=4.34"
streamlit run ./chat/web_demo.py
```
@@ -189,7 +195,7 @@ streamlit run ./chat/web_demo.py
We use [LMDeploy](https://github.com/InternLM/LMDeploy) for one-click deployment of InternLM.
After installing LMDeploy with `pip install lmdeploy`, offline batch inference takes only 4 lines of code:
After installing LMDeploy with `pip install "lmdeploy>=0.2.1"`, offline batch inference takes only 4 lines of code:
```python
from lmdeploy import pipeline


@@ -51,8 +51,8 @@ print(response)
You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:
```bash
pip install streamlit==1.24.0
pip install transformers==4.30.2
pip install streamlit
pip install "transformers>=4.34"
streamlit run ./chat/web_demo.py
```


@@ -45,7 +45,7 @@ print(response)
You can interact with the InternLM2 Chat 7B model through a frontend interface by running the following code:
```bash
pip install streamlit==1.24.0
pip install transformers==4.30.2
pip install streamlit
pip install "transformers>=4.34"
streamlit run ./web_demo.py
```


@@ -12,7 +12,7 @@ This article primarily highlights the basic usage of LMDeploy. For a comprehensi
Install LMDeploy with pip (Python 3.8+):
```shell
pip install lmdeploy
pip install "lmdeploy>=0.2.1"
```
## Offline batch inference


@@ -12,7 +12,7 @@
Install LMDeploy with pip (Python 3.8+):
```shell
pip install lmdeploy
pip install "lmdeploy>=0.2.1"
```
## Offline batch inference


@@ -29,7 +29,7 @@ We recommend two projects to fine-tune InternLM.
- Install XTuner with DeepSpeed integration
```shell
pip install -U 'xtuner[deepspeed]'
pip install -U 'xtuner[deepspeed]>=0.1.13'
```
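Incidentally, the single quotes around `'xtuner[deepspeed]>=0.1.13'` matter: shells expand unquoted `[...]` as a glob pattern and treat `>` as redirection. The quoted string itself is a standard pip requirement specifier of the form `name[extras]>=version`; a simplified, illustrative parser (not a full PEP 508 grammar) shows the pieces:

```python
import re

# Simplified requirement-specifier pattern: package name, optional [extras],
# optional comparison operator plus version. Illustrative only.
SPEC = re.compile(
    r"^(?P<name>[A-Za-z0-9_.-]+)"
    r"(?:\[(?P<extras>[^\]]+)\])?"
    r"(?:(?P<op>>=|==|<=|~=|>|<)(?P<version>[\w.]+))?$"
)

def parse_requirement(req: str):
    m = SPEC.match(req)
    if m is None:
        raise ValueError(f"unparseable requirement: {req!r}")
    return m["name"], m["extras"], m["op"], m["version"]

print(parse_requirement("xtuner[deepspeed]>=0.1.13"))
print(parse_requirement("lmdeploy>=0.2.1"))
```

Here `[deepspeed]` asks pip to install XTuner's optional DeepSpeed dependencies along with the package itself.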
### Fine-tune


@@ -29,7 +29,7 @@
- Install XTuner with DeepSpeed integration
```shell
pip install -U 'xtuner[deepspeed]'
pip install -U 'xtuner[deepspeed]>=0.1.13'
```
### Fine-tune