[doc]: update requirements (#667)

Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
RangiLyu 2024-01-26 21:23:15 +08:00 committed by GitHub
parent 78bcb07f0e
commit 3599ddd0e4
9 changed files with 25 additions and 12 deletions

View File

@@ -126,6 +126,12 @@ The release of InternLM2 series contains two model sizes: 7B and 20B. 7B models
- Performance data as of 2024-01-17.
+## Requirements
+- Python >= 3.8
+- PyTorch >= 1.12.0 (2.0.0 and above are recommended)
+- Transformers >= 4.34
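The bumped requirements above can be checked at runtime. A minimal stdlib sketch (package names as published on PyPI; the numeric version compare is a deliberate simplification — `packaging.version` is more robust):

```python
import sys
from importlib import metadata


def installed_version(pkg):
    """Installed version string for pkg, or None if it is absent."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None


def version_key(v):
    """Naive numeric sort key: '4.34.1' -> (4, 34, 1)."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())


def meets(pkg, minimum):
    v = installed_version(pkg)
    return v is not None and version_key(v) >= version_key(minimum)


if __name__ == "__main__":
    assert sys.version_info >= (3, 8), "Python >= 3.8 required"
    for pkg, minimum in [("torch", "1.12.0"), ("transformers", "4.34")]:
        print(pkg, "ok" if meets(pkg, minimum) else f"needs >= {minimum}")
```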
## Usages
We briefly show how to use the models with [Transformers](#import-from-transformers), [ModelScope](#import-from-modelscope), and [Web demos](#dialogue).
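As a hedged sketch of the Transformers path (the `internlm/internlm2-chat-7b` hub id, the `trust_remote_code` flag, and the remote-code `model.chat` helper are assumptions from common InternLM usage, not shown in this diff; nothing heavy loads until `chat` is called):

```python
MODEL_ID = "internlm/internlm2-chat-7b"  # assumed hub id


def load_kwargs(model_id=MODEL_ID):
    # InternLM2 ships custom modeling code, hence trust_remote_code=True.
    return {"pretrained_model_name_or_path": model_id, "trust_remote_code": True}


def chat(prompt, model_id=MODEL_ID):
    # Deferred import so this sketch stays importable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(**load_kwargs(model_id))
    model = AutoModelForCausalLM.from_pretrained(**load_kwargs(model_id)).eval()
    response, _history = model.chat(tok, prompt)
    return response
```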
@@ -187,7 +193,7 @@ print(response)
You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:
```bash
-pip install streamlit==1.24.0
+pip install streamlit
pip install 'transformers>=4.34'
streamlit run ./chat/web_demo.py
```
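`./chat/web_demo.py` itself is not shown in this diff; a minimal hedged sketch of what a Streamlit chat front end looks like, using the chat widgets Streamlit has shipped since 1.24 (the model call is omitted, only the UI loop is sketched):

```python
def append_turn(history, role, content):
    """Chat history as a list of {role, content} dicts, oldest first."""
    history.append({"role": role, "content": content})
    return history


def main():
    import streamlit as st  # deferred: only needed when the demo actually runs

    st.title("InternLM2-Chat-7B demo")
    if "history" not in st.session_state:
        st.session_state.history = []
    if prompt := st.chat_input("Say something"):
        # A real demo would also append the model's reply here.
        append_turn(st.session_state.history, "user", prompt)
    for turn in st.session_state.history:
        with st.chat_message(turn["role"]):
            st.write(turn["content"])


if __name__ == "__main__":
    main()
```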
@@ -196,7 +202,7 @@ streamlit run ./chat/web_demo.py
We use [LMDeploy](https://github.com/InternLM/LMDeploy) for fast deployment of InternLM.
-With only 4 lines of code, you can perform `internlm2-chat-7b` inference after `pip install lmdeploy`.
+With only 4 lines of code, you can perform `internlm2-chat-7b` inference after `pip install 'lmdeploy>=0.2.1'`.
```python
from lmdeploy import pipeline

View File

@@ -123,6 +123,12 @@ The InternLM2 series of models is officially released in this repository, with the following features:
- Performance data as of 2024-01-17
+## Requirements
+- Python >= 3.8
+- PyTorch >= 1.12.0 (2.0.0 and above are recommended)
+- Transformers >= 4.34
## Usages
Next, we show how to run inference with [Transformers](#import-from-transformers), [ModelScope](#import-from-modelscope), and [Web demo](#dialogue).
@@ -183,7 +189,7 @@ print(response)
You can launch a frontend interface to interact with the InternLM Chat 7B model by running the following code:
```bash
-pip install streamlit==1.24.0
+pip install streamlit
pip install 'transformers>=4.34'
streamlit run ./chat/web_demo.py
```
@@ -192,7 +198,7 @@ streamlit run ./chat/web_demo.py
We use [LMDeploy](https://github.com/InternLM/LMDeploy) for one-click deployment of InternLM.
-After installing LMDeploy with `pip install lmdeploy`, offline batch inference takes only 4 lines of code:
+After installing LMDeploy with `pip install 'lmdeploy>=0.2.1'`, offline batch inference takes only 4 lines of code:
```python
from lmdeploy import pipeline

View File

@@ -51,8 +51,8 @@ print(response)
You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:
```bash
-pip install streamlit==1.24.0
-pip install transformers==4.30.2
+pip install streamlit
+pip install 'transformers>=4.34'
streamlit run ./chat/web_demo.py
```

View File

@@ -45,7 +45,7 @@ print(response)
You can launch a frontend interface to interact with the InternLM2 Chat 7B model by running the following code:
```bash
-pip install streamlit==1.24.0
-pip install transformers==4.30.2
+pip install streamlit
+pip install 'transformers>=4.34'
streamlit run ./web_demo.py
```

View File

@@ -11,7 +11,7 @@ This article primarily highlights the basic usage of LMDeploy. For a comprehensi
Install lmdeploy with pip (python 3.8+)
```shell
-pip install lmdeploy
+pip install 'lmdeploy>=0.2.1'
```
## Offline batch inference
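A hedged sketch of offline batch inference with the LMDeploy `pipeline` API (the hub id `internlm/internlm2-chat-7b`, the batch-list calling convention, and the batch size are assumptions; the deferred import keeps the helper usable without lmdeploy installed):

```python
def batched(prompts, size):
    """Split a prompt list into batches of at most `size` items."""
    return [prompts[i:i + size] for i in range(0, len(prompts), size)]


def batch_infer(prompts, model="internlm/internlm2-chat-7b", size=8):
    # Deferred import: lmdeploy (and a GPU) are only needed when actually run.
    from lmdeploy import pipeline

    pipe = pipeline(model)
    responses = []
    for batch in batched(prompts, size):
        responses.extend(pipe(batch))
    return responses
```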

View File

@@ -11,7 +11,7 @@
Install LMDeploy with pip (python 3.8+)
```shell
-pip install lmdeploy
+pip install 'lmdeploy>=0.2.1'
```
## Offline batch inference

View File

@@ -29,7 +29,7 @@ We recommend two projects to fine-tune InternLM.
- Install XTuner with DeepSpeed integration
```shell
-pip install -U 'xtuner[deepspeed]'
+pip install -U 'xtuner[deepspeed]>=0.1.13'
```
### Fine-tune

View File

@@ -29,7 +29,7 @@
- Install XTuner with DeepSpeed integration
```shell
-pip install -U 'xtuner[deepspeed]'
+pip install -U 'xtuner[deepspeed]>=0.1.13'
```
### Fine-tune

View File

@@ -1,2 +1,3 @@
sentencepiece
streamlit
+transformers>=4.34