diff --git a/README.md b/README.md
index bfb84d0..0c81e46 100644
--- a/README.md
+++ b/README.md
@@ -126,6 +126,12 @@ The release of InternLM2 series contains two model sizes: 7B and 20B. 7B models
 - According to the released performance of 2024-01-17.
 
+## Requirements
+
+- Python >= 3.8
+- PyTorch >= 1.12.0 (2.0.0 and above are recommended)
+- Transformers >= 4.34
+
 ## Usages
 
 We briefly show the usages with [Transformers](#import-from-transformers), [ModelScope](#import-from-modelscope), and [Web demos](#dialogue).
 
@@ -187,7 +193,7 @@ print(response)
 You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:
 
 ```bash
-pip install streamlit==1.24.0
+pip install streamlit
 pip install transformers>=4.34
 streamlit run ./chat/web_demo.py
 ```
@@ -196,7 +202,7 @@ streamlit run ./chat/web_demo.py
 
 We use [LMDeploy](https://github.com/InternLM/LMDeploy) for fast deployment of InternLM.
 
-With only 4 lines of codes, you can perform `internlm2-chat-7b` inference after `pip install lmdeploy`.
+With only 4 lines of code, you can perform `internlm2-chat-7b` inference after `pip install 'lmdeploy>=0.2.1'`.
 
 ```python
 from lmdeploy import pipeline
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 49cf811..8c2c94c 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -123,6 +123,12 @@ InternLM2 系列模型在本仓库正式发布,具有如下特性:
 - 性能数据截止2024-01-17
 
+## 依赖
+
+- Python >= 3.8
+- PyTorch >= 1.12.0 (推荐 2.0.0 和更高版本)
+- Transformers >= 4.34
+
 ## 使用案例
 
 接下来我们展示使用 [Transformers](#import-from-transformers),[ModelScope](#import-from-modelscope) 和 [Web demo](#dialogue) 进行推理。
 
@@ -183,7 +189,7 @@ print(response)
 可以通过以下代码启动一个前端的界面来与 InternLM Chat 7B 模型进行交互
 
 ```bash
-pip install streamlit==1.24.0
+pip install streamlit
 pip install transformers>=4.34
 streamlit run ./chat/web_demo.py
 ```
@@ -192,7 +198,7 @@ streamlit run ./chat/web_demo.py
 
 我们使用 [LMDeploy](https://github.com/InternLM/LMDeploy) 完成 InternLM 的一键部署。
 
-通过 `pip install lmdeploy` 安装 LMDeploy 之后,只需 4 行代码,就可以实现离线批处理:
+通过 `pip install 'lmdeploy>=0.2.1'` 安装 LMDeploy 之后,只需 4 行代码,就可以实现离线批处理:
 
 ```python
 from lmdeploy import pipeline
diff --git a/chat/README.md b/chat/README.md
index 48a953b..8ce04ea 100644
--- a/chat/README.md
+++ b/chat/README.md
@@ -51,8 +51,8 @@ print(response)
 You can interact with the InternLM Chat 7B model through a frontend interface by running the following code:
 
 ```bash
-pip install streamlit==1.24.0
-pip install transformers==4.30.2
+pip install streamlit
+pip install 'transformers>=4.34'
 streamlit run ./chat/web_demo.py
 ```
 
diff --git a/chat/README_zh-CN.md b/chat/README_zh-CN.md
index 613de60..f687ee5 100644
--- a/chat/README_zh-CN.md
+++ b/chat/README_zh-CN.md
@@ -45,7 +45,7 @@ print(response)
 可以通过以下代码启动一个前端的界面来与 InternLM2 Chat 7B 模型进行交互
 
 ```bash
-pip install streamlit==1.24.0
-pip install transformers==4.30.2
+pip install streamlit
+pip install 'transformers>=4.34'
 streamlit run ./web_demo.py
 ```
diff --git a/chat/lmdeploy.md b/chat/lmdeploy.md
index 80fe42c..be0f4fd 100644
--- a/chat/lmdeploy.md
+++ b/chat/lmdeploy.md
@@ -11,7 +11,7 @@ This article primarily highlights the basic usage of LMDeploy. For a comprehensi
 Install lmdeploy with pip (python 3.8+)
 
 ```shell
-pip install lmdeploy
+pip install 'lmdeploy>=0.2.1'
 ```
 
 ## Offline batch inference
diff --git a/chat/lmdeploy_zh_cn.md b/chat/lmdeploy_zh_cn.md
index 7b47d7d..d337399 100644
--- a/chat/lmdeploy_zh_cn.md
+++ b/chat/lmdeploy_zh_cn.md
@@ -11,7 +11,7 @@
 使用 pip(python 3.8+)安装 LMDeploy
 
 ```shell
-pip install lmdeploy
+pip install 'lmdeploy>=0.2.1'
 ```
 
 ## 离线批处理
diff --git a/finetune/README.md b/finetune/README.md
index 06df0e0..d41efe2 100644
--- a/finetune/README.md
+++ b/finetune/README.md
@@ -29,7 +29,7 @@ We recommend two projects to fine-tune InternLM.
 - Install XTuner with DeepSpeed integration
 
   ```shell
-  pip install -U 'xtuner[deepspeed]'
+  pip install -U 'xtuner[deepspeed]>=0.1.13'
   ```
 
 ### Fine-tune
diff --git a/finetune/README_zh-CN.md b/finetune/README_zh-CN.md
index 742ff05..574f8a5 100644
--- a/finetune/README_zh-CN.md
+++ b/finetune/README_zh-CN.md
@@ -29,7 +29,7 @@
 - 安装集成 DeepSpeed 版本的 XTuner
 
   ```shell
-  pip install -U 'xtuner[deepspeed]'
+  pip install -U 'xtuner[deepspeed]>=0.1.13'
   ```
 
 ### 微调
diff --git a/requirements.txt b/requirements.txt
index 5e44167..7b62e88 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,2 +1,3 @@
 sentencepiece
+streamlit
 transformers>=4.34
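
A note on the `pip install transformers>=4.34` style of command used throughout this patch: when the version specifier is unquoted, the shell parses `>` as output redirection, so `pip` receives only the bare package name and a file named `=4.34` is created. Quoting (as XTuner's `'xtuner[deepspeed]>=0.1.13'` line already does) keeps the requirement intact. A minimal sketch of the difference; `fake_pip` is a hypothetical stand-in for `pip`, used only to show what arguments the real tool would receive:

```shell
# Stand-in for pip that prints each argument it receives.
fake_pip() {
  for arg in "$@"; do
    printf 'arg: %s\n' "$arg"
  done
}

# Quoted: the full requirement string reaches the command intact.
fake_pip install 'transformers>=4.34'
# prints:
# arg: install
# arg: transformers>=4.34

# Unquoted, `fake_pip install transformers>=4.34` would instead run
# `fake_pip install transformers` with stdout redirected to a file
# named '=4.34', so the version bound is silently lost.
```

Double quotes work equally well; the only requirement is that the `>=` never reaches the shell's redirection parser unescaped.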