mirror of https://github.com/InternLM/InternLM
[doc]: update requirements (#667)
Co-authored-by: Wenwei Zhang <40779233+ZwwWayne@users.noreply.github.com>
pull/674/head
parent 78bcb07f0e
commit 3599ddd0e4

README.md
@@ -126,6 +126,12 @@ The release of InternLM2 series contains two model sizes: 7B and 20B. 7B models
 
 - According to the released performance of 2024-01-17.
 
+## Requirements
+
+- Python >= 3.8
+- PyTorch >= 1.12.0 (2.0.0 and above are recommended)
+- Transformers >= 4.34
+
 ## Usages
 
 We briefly show the usages with [Transformers](#import-from-transformers), [ModelScope](#import-from-modelscope), and [Web demos](#dialogue).
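The version floors introduced by this hunk can be sanity-checked at runtime. A minimal sketch, assuming a naive two-component comparison (the `meets_floor` helper is ours, not part of the commit):

```python
import sys

def meets_floor(version: str, floor: str) -> bool:
    # Compare the first two components of dotted version strings numerically,
    # so that "2.0.0" correctly satisfies a "1.12.0" floor.
    to_pair = lambda s: [int(x) for x in s.split(".")[:2]]
    return to_pair(version) >= to_pair(floor)

# Python >= 3.8, per the requirements block above
assert meets_floor("%d.%d" % sys.version_info[:2], "3.8")
# PyTorch and Transformers floors from the same block
assert meets_floor("2.0.0", "1.12.0")   # recommended PyTorch satisfies the floor
assert not meets_floor("4.30", "4.34")  # the pre-bump Transformers pin would not
```

Real installers compare full version objects rather than two components; this is only an illustration of what the floors mean.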
@@ -187,7 +193,7 @@ print(response)
 You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:
 
 ```bash
-pip install streamlit==1.24.0
+pip install streamlit
 pip install transformers>=4.34
 streamlit run ./chat/web_demo.py
 ```
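One caveat about the commands above (our observation, not part of the commit): in a POSIX shell, the unquoted `>=` in `pip install transformers>=4.34` is parsed as output redirection, so the version constraint is silently dropped unless the specifier is quoted. A small demonstration, using `echo` as a stand-in for `pip`:

```python
import os
import subprocess
import tempfile

# Unquoted: the shell treats '>' as redirection, so the command becomes
# `echo transformers` with stdout sent to a file named '=4.34'
# (for pip this would mean an unconstrained install plus a stray file).
d = tempfile.mkdtemp()
subprocess.run("echo transformers>=4.34", shell=True, cwd=d, check=True)
assert "=4.34" in os.listdir(d)  # the stray redirection target

# Quoted: the full specifier survives as a single argument.
out = subprocess.run('echo "transformers>=4.34"', shell=True,
                     capture_output=True, text=True, check=True)
assert out.stdout.strip() == "transformers>=4.34"
```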
@@ -196,7 +202,7 @@ streamlit run ./chat/web_demo.py
 
 We use [LMDeploy](https://github.com/InternLM/LMDeploy) for fast deployment of InternLM.
 
-With only 4 lines of codes, you can perform `internlm2-chat-7b` inference after `pip install lmdeploy`.
+With only 4 lines of codes, you can perform `internlm2-chat-7b` inference after `pip install lmdeploy>=0.2.1`.
 
 ```python
 from lmdeploy import pipeline
@@ -123,6 +123,12 @@ The InternLM2 series models are officially released in this repository, with the following features:
 
 - Performance data as of 2024-01-17
 
+## Requirements
+
+- Python >= 3.8
+- PyTorch >= 1.12.0 (2.0.0 and above recommended)
+- Transformers >= 4.34
+
 ## Usage Examples
 
 Next we demonstrate inference with [Transformers](#import-from-transformers), [ModelScope](#import-from-modelscope), and [Web demo](#dialogue).
@@ -183,7 +189,7 @@ print(response)
 You can launch a frontend interface to interact with the InternLM Chat 7B model by running the following code:
 
 ```bash
-pip install streamlit==1.24.0
+pip install streamlit
 pip install transformers>=4.34
 streamlit run ./chat/web_demo.py
 ```
@@ -192,7 +198,7 @@ streamlit run ./chat/web_demo.py
 
 We use [LMDeploy](https://github.com/InternLM/LMDeploy) for one-click deployment of InternLM.
 
-After installing LMDeploy with `pip install lmdeploy`, offline batch inference takes only 4 lines of code:
+After installing LMDeploy with `pip install lmdeploy>=0.2.1`, offline batch inference takes only 4 lines of code:
 
 ```python
 from lmdeploy import pipeline
@@ -51,8 +51,8 @@ print(response)
 You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:
 
 ```bash
-pip install streamlit==1.24.0
-pip install transformers==4.30.2
+pip install streamlit
+pip install transformers>=4.34
 streamlit run ./chat/web_demo.py
 ```
@@ -45,7 +45,7 @@ print(response)
 You can launch a frontend interface to interact with the InternLM2 Chat 7B model by running the following code:
 
 ```bash
-pip install streamlit==1.24.0
-pip install transformers==4.30.2
+pip install streamlit
+pip install transformers>=4.34
 streamlit run ./web_demo.py
 ```
@@ -11,7 +11,7 @@ This article primarily highlights the basic usage of LMDeploy. For a comprehensi
 Install lmdeploy with pip (python 3.8+)
 
 ```shell
-pip install lmdeploy
+pip install lmdeploy>=0.2.1
 ```
 
 ## Offline batch inference
@@ -11,7 +11,7 @@
 Install LMDeploy with pip (python 3.8+)
 
 ```shell
-pip install lmdeploy
+pip install lmdeploy>=0.2.1
 ```
 
 ## Offline batch inference
@@ -29,7 +29,7 @@ We recommend two projects to fine-tune InternLM.
 - Install XTuner with DeepSpeed integration
 
 ```shell
-pip install -U 'xtuner[deepspeed]'
+pip install -U 'xtuner[deepspeed]>=0.1.13'
 ```
 
 ### Fine-tune
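The new pin combines an extras clause with a version floor in one PEP 508 specifier. A toy sketch of how such a specifier decomposes (the `split_requirement` helper is ours, for illustration; real tools use a full PEP 508 parser):

```python
def split_requirement(req: str):
    # Split e.g. "xtuner[deepspeed]>=0.1.13" into the distribution name,
    # the list of extras, and the version specifier.
    # Assumes an extras bracket is present.
    name, _, rest = req.partition("[")
    extras, _, spec = rest.partition("]")
    return name, extras.split(","), spec

print(split_requirement("xtuner[deepspeed]>=0.1.13"))
# → ('xtuner', ['deepspeed'], '>=0.1.13')
```

The single quotes in the command above are needed because both `[` and `>` are special characters to the shell.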
@@ -29,7 +29,7 @@
 - Install XTuner with DeepSpeed integration
 
 ```shell
-pip install -U 'xtuner[deepspeed]'
+pip install -U 'xtuner[deepspeed]>=0.1.13'
 ```
 
 ### Fine-tune
@@ -1,2 +1,3 @@
 sentencepiece
+streamlit
 transformers>=4.34
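After this hunk the requirements file holds three lines, one pinned with a version floor. A toy parser splitting each line into (name, specifier) pairs (this is only an illustration; pip's own parsing handles many more forms):

```python
import re

# The three lines of the updated requirements file, per the hunk above.
REQUIREMENTS = """\
sentencepiece
streamlit
transformers>=4.34
"""

def parse(line: str):
    # Split a simple requirements.txt line into (distribution name, specifier);
    # the name stops at the first character outside [A-Za-z0-9._-].
    m = re.match(r"([A-Za-z0-9._-]+)(.*)", line.strip())
    return m.group(1), m.group(2)

parsed = [parse(line) for line in REQUIREMENTS.splitlines()]
print(parsed)
# → [('sentencepiece', ''), ('streamlit', ''), ('transformers', '>=4.34')]
```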