diff --git a/assets/compass_support.svg b/assets/compass_support.svg
index 02d2cc5..9a77df2 100644
--- a/assets/compass_support.svg
+++ b/assets/compass_support.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
diff --git a/assets/license.svg b/assets/license.svg
index 8e072ee..91f9344 100644
--- a/assets/license.svg
+++ b/assets/license.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
diff --git a/assets/logo.svg b/assets/logo.svg
index 0d74a61..a921584 100644
--- a/assets/logo.svg
+++ b/assets/logo.svg
@@ -20,4 +20,4 @@
-
\ No newline at end of file
+
diff --git a/chat/openaoe.md b/chat/openaoe.md
index 6038b44..056e59e 100644
--- a/chat/openaoe.md
+++ b/chat/openaoe.md
@@ -12,10 +12,10 @@ Currently already supported LLMs: [InternLM2-Chat-7B](https://huggingface.co/int
We provide three different ways to run OpenAOE: `run by pip`, `run by docker` and `run by source code` as well.
-### Run by pip
+### Run by pip
#### **Install**
```shell
-pip install -U openaoe
+pip install -U openaoe
```
#### **Start**
```shell
@@ -65,7 +65,7 @@ python -m main -f /path/to/your/config-template.yaml
```
> [!TIP]
-> `/path/to/your/config.yaml` is the configuration file loaded by OpenAOE at startup,
+> `/path/to/your/config.yaml` is the configuration file loaded by OpenAOE at startup,
> which contains the relevant configuration information for the LLMs,
> including: API URLs, AKSKs, Tokens, etc.
> A template configuration yaml file can be found in `openaoe/backend/config/config.yaml`.
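The tip above says the startup config carries API URLs, AKSKs, and tokens for each LLM. A purely hypothetical sketch of such a file follows (the field names below are illustrative, not the actual schema — the real template is `openaoe/backend/config/config.yaml`):

```yaml
# Hypothetical sketch only; consult openaoe/backend/config/config.yaml
# for the real schema. Field names here are invented for illustration.
models:
  internlm:
    api_base: https://example.com/v1/chat/completions  # assumed endpoint
    api_key: YOUR_TOKEN                                # AK/SK or bearer token
```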
diff --git a/chat/openaoe_zh_cn.md b/chat/openaoe_zh_cn.md
index 7a9fc83..3640838 100644
--- a/chat/openaoe_zh_cn.md
+++ b/chat/openaoe_zh_cn.md
@@ -17,7 +17,7 @@
> Requires Python >= 3.9
#### **Install**
```shell
-pip install -U openaoe
+pip install -U openaoe
```
#### **Run**
```shell
@@ -50,7 +50,7 @@ docker run -p 10099:10099 -v /path/to/your/config-template.yaml:/app/config-temp
```shell
git clone https://github.com/internlm/OpenAOE
```
-2. [_Optional_] (If the frontend code has changed) rebuild the frontend project
+2. [_Optional_] (If the frontend code has changed) rebuild the frontend project
```shell
cd open-aoe/openaoe/frontend
npm install
diff --git a/chat/web_demo.py b/chat/web_demo.py
index a5c160a..c2a18df 100644
--- a/chat/web_demo.py
+++ b/chat/web_demo.py
@@ -11,11 +11,10 @@ from dataclasses import asdict
import streamlit as st
import torch
+from tools.transformers.interface import GenerationConfig, generate_interactive
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.utils import logging
-from tools.transformers.interface import GenerationConfig, generate_interactive
-
logger = logging.get_logger(__name__)
@@ -109,9 +108,15 @@ def main():
):
# Display robot response in chat message container
message_placeholder.markdown(cur_response + "▌")
- message_placeholder.markdown(cur_response)
+ message_placeholder.markdown(cur_response) # pylint: disable=undefined-loop-variable
# Add robot response to chat history
- st.session_state.messages.append({"role": "robot", "content": cur_response, "avatar": robot_avator})
+ st.session_state.messages.append(
+ {
+ "role": "robot",
+ "content": cur_response, # pylint: disable=undefined-loop-variable
+ "avatar": robot_avator,
+ }
+ )
torch.cuda.empty_cache()
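The hunk above reformats the history append and adds `# pylint: disable=undefined-loop-variable` because `cur_response` is bound inside the streaming loop and pylint cannot see that the loop always runs. The underlying render pattern (show each partial response with a block cursor, then the final text without it) can be sketched self-containedly, with a dummy generator standing in for the project's `generate_interactive` and the Streamlit calls shown only in comments:

```python
# Minimal sketch of the streaming cursor-then-final render pattern,
# assuming some generator yields progressively longer partial responses.


def stream_with_cursor(chunks):
    """Yield each partial response suffixed with a cursor, then the final text plain."""
    cur = ""
    for cur in chunks:
        # in the demo: message_placeholder.markdown(cur_response + "▌")
        yield cur + "▌"
    # in the demo: message_placeholder.markdown(cur_response)
    yield cur


if __name__ == "__main__":
    for frame in stream_with_cursor(["He", "Hell", "Hello"]):
        print(frame)
```

The final `yield cur` after the loop is exactly the spot pylint flags in the real script: the name is only defined if the loop ran at least once, which the demo guarantees but the linter cannot prove.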
diff --git a/model_cards/internlm2_20b.md b/model_cards/internlm2_20b.md
index 693c26c..cecd717 100644
--- a/model_cards/internlm2_20b.md
+++ b/model_cards/internlm2_20b.md
@@ -38,5 +38,5 @@ We have evaluated InternLM2 on several important benchmarks using the open-sourc
| MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |
-- The evaluation results were obtained from [OpenCompass](https://github.com/open-compass/opencompass) , and evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass).
+- The evaluation results were obtained from [OpenCompass](https://github.com/open-compass/opencompass), and the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass).
- The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/open-compass/opencompass), so please refer to the latest evaluation results of [OpenCompass](https://github.com/open-compass/opencompass).
diff --git a/model_cards/internlm2_7b.md b/model_cards/internlm2_7b.md
index 89da02f..5abd4b7 100644
--- a/model_cards/internlm2_7b.md
+++ b/model_cards/internlm2_7b.md
@@ -38,5 +38,5 @@ We have evaluated InternLM2 on several important benchmarks using the open-sourc
| MBPP(Sanitized) | 51.8 | 51.4 | 63.0 | 65.8 | 78.9 | 79.0 |
-- The evaluation results were obtained from [OpenCompass](https://github.com/open-compass/opencompass) , and evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass).
+- The evaluation results were obtained from [OpenCompass](https://github.com/open-compass/opencompass), and the evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/open-compass/opencompass).
- The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/open-compass/opencompass), so please refer to the latest evaluation results of [OpenCompass](https://github.com/open-compass/opencompass).