mirror of https://github.com/InternLM/InternLM
[CI]: Add flash_attn installation in testcase and update transformers requirement (#746)
Co-authored-by: zhulin1 <zhulin1@pjlab.org.cn>
parent b91949b918
commit aa7336172b
@@ -13,7 +13,7 @@ jobs:
     runs-on: [t_cluster]
     strategy:
       matrix:
-        transformers-version: [4.34.0, latest]
+        transformers-version: [4.38.0, latest]
     steps:
       - name: mask env
         run: |

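For reference, the two matrix legs resolve to the following installs (a sketch; the pinned value comes from the matrix above, while `latest` is handled by the load_latest_hf_model step below):

```bash
# Sketch of what the matrix values amount to at install time.
pip install transformers==4.38.0   # pinned leg of the matrix
pip install transformers           # 'latest' leg: newest release on PyPI
```
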
@@ -27,15 +27,17 @@ jobs:
           conda create -n internlm-model-latest --clone ${CONDA_BASE_ENV}
           source activate internlm-model-latest
           pip install transformers==${{ matrix.transformers-version }}
-          pip install sentencepiece auto-gptq==0.6.0 lmdeploy[all]
-          srun -p ${SLURM_PARTITION} --kill-on-bad-exit=1 --job-name=${GITHUB_RUN_ID}-${GITHUB_JOB} --gpus-per-task=2 pytest -s -v --color=yes ./tests/test_hf_model.py
-          conda deactivate
       - name: load_latest_hf_model
         if: matrix.transformers-version == 'latest'
         run: |
           conda create -n internlm-model-latest --clone ${CONDA_BASE_ENV}
           source activate internlm-model-latest
           pip install transformers
+      - name: run_test
+        run: |
+          source activate internlm-model-latest
+          pip install torch==2.2.2 torchvision==0.17.2 --index-url https://download.pytorch.org/whl/cu118
+          pip install /mnt/petrelfs/qa-caif-cicd/resource/flash_attn-2.5.8+cu118torch2.2cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
           pip install sentencepiece auto-gptq==0.6.0 lmdeploy[all]
           srun -p ${SLURM_PARTITION} --kill-on-bad-exit=1 --job-name=${GITHUB_RUN_ID}-${GITHUB_JOB} --gpus-per-task=2 pytest -s -v --color=yes ./tests/test_hf_model.py
           conda deactivate

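The flash_attn wheel above is prebuilt for one specific CUDA/PyTorch/Python combination, so a quick environment check before installing can catch mismatches early. A minimal sketch (the wheel path is the one from the workflow; the expected-output comment is an assumption read off its cu118/torch2.2/cp310 tags):

```bash
# Confirm the interpreter and torch build match the wheel's tags before installing.
python -c "import sys, torch; print(sys.version_info[:2], torch.__version__, torch.version.cuda)"
# Expected roughly: (3, 10) 2.2.2+cu118 11.8 -- then install and verify the import:
pip install /mnt/petrelfs/qa-caif-cicd/resource/flash_attn-2.5.8+cu118torch2.2cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
python -c "import flash_attn; print(flash_attn.__version__)"
```
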
@@ -138,7 +138,7 @@ The release of InternLM2 series contains two model sizes: 7B and 20B. 7B models

 - Python >= 3.8
 - PyTorch >= 1.12.0 (2.0.0 and above are recommended)
-- Transformers >= 4.34
+- Transformers >= 4.38

 ## Usages

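A minimal environment satisfying the updated requirements might look like this (the environment name and exact versions are illustrative, not prescribed by the README):

```bash
# Illustrative setup: Python >= 3.8, PyTorch >= 1.12.0, Transformers >= 4.38.
conda create -n internlm-demo python=3.10 -y
conda activate internlm-demo
pip install "torch>=2.0" "transformers>=4.38"
```
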
@@ -147,7 +147,7 @@ The chat models adopt [chatml format](./chat/chat_format.md) to support both cha
 To ensure a better usage effect, please make sure that the installed transformers library version meets the following requirements before performing inference with [Transformers](#import-from-transformers) or [ModelScope](#import-from-modelscope):

 ```
-transformers >= 4.34
+transformers >= 4.38
 ```

 ### Import from Transformers

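One way to check whether an existing environment already meets this floor, using standard pip commands (a sketch):

```bash
# Print the installed transformers version, then upgrade only if needed.
pip show transformers | grep -i '^version'
pip install --upgrade "transformers>=4.38"
```
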
@@ -202,7 +202,7 @@ You can interact with the InternLM Chat 7B model through a frontend interface by

 ```bash
 pip install streamlit
-pip install transformers>=4.34
+pip install transformers>=4.38
 streamlit run ./chat/web_demo.py
 ```

@@ -135,7 +135,7 @@ The InternLM2 series models are officially released in this repository, with the following features:

 - Python >= 3.8
 - PyTorch >= 1.12.0 (2.0.0 and above are recommended)
-- Transformers >= 4.34
+- Transformers >= 4.38

 ## Usages

@@ -144,7 +144,7 @@ The InternLM2 series models are officially released in this repository, with the following features:
 To ensure the best results, before running inference with [Transformers](#import-from-transformers) or [ModelScope](#import-from-modelscope), please make sure the installed transformers library version meets the following requirements:

 ```
-transformers >= 4.34
+transformers >= 4.38
 ```

 ### Import from Transformers

@@ -198,7 +198,7 @@ print(response)

 ```bash
 pip install streamlit
-pip install transformers>=4.34
+pip install transformers>=4.38
 streamlit run ./chat/web_demo.py
 ```

@@ -52,7 +52,7 @@ You can interact with the InternLM Chat 7B model through a frontend interface by

 ```bash
 pip install streamlit
-pip install transformers>=4.34
+pip install transformers>=4.38
 streamlit run ./chat/web_demo.py
 ```

@@ -46,6 +46,6 @@ print(response)

 ```bash
 pip install streamlit
-pip install transformers>=4.34
+pip install transformers>=4.38
 streamlit run ./web_demo.py
 ```

@@ -1,3 +1,3 @@
 sentencepiece
 streamlit
-transformers>=4.34
+transformers>=4.38
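With the pin bumped, installing from the file works as usual (assuming it is a requirements.txt consumed by pip; the filename is not shown in the diff):

```bash
# Install the demo dependencies from the updated requirements file.
pip install -r requirements.txt
```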