
Update README

pull/154/head
duzx16 · 2 years ago · commit 8f29459f9a
Changed files:
1. README.md (9 changes)
2. README_en.md (17 changes)

README.md
@@ -9,7 +9,12 @@ ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese question answering and dialogue
 *Read this in [English](README_en.md).*
-## Hardware Requirements
+## Update
+**[2023/03/19]** Added streaming output interface `stream_chat`, already applied to the web and CLI demos
+## Getting Started
+### Hardware Requirements
 | **Quantization Level** | **Minimum GPU Memory** |
 | ---------------------- | ---------------------- |
@@ -17,8 +22,6 @@ ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese question answering and dialogue
 | INT8 | 10 GB |
 | INT4 | 6 GB |
-## Getting Started
 ### Environment Setup
 Install dependencies with pip: `pip install -r requirements.txt`. The recommended `transformers` version is `4.26.1`, but in theory any version no lower than `4.23.1` works.
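The commit's main feature is the streaming interface `stream_chat`, which yields partial responses as they are generated instead of returning one final string. As a minimal, hedged sketch of the consumption pattern: `stream_chat_stub` below is a hypothetical stand-in for the real model method (in the repo it is invoked roughly as `model.stream_chat(tokenizer, query, history=history)`), so the loop runs without downloading any weights.

```python
# Sketch of consuming a streaming chat interface like `stream_chat`.
# stream_chat_stub is a HYPOTHETICAL stand-in for the real model method;
# it yields (partial_response, history) pairs the way an incremental
# decoder would, one token at a time.

def stream_chat_stub(query, history=None):
    """Yield (response_so_far, updated_history) after each new token."""
    history = history or []
    partial = ""
    for token in ["Hello", ",", " world", "!"]:  # fake token stream
        partial += token
        yield partial, history + [(query, partial)]

final = ""
for response, history in stream_chat_stub("Hi"):
    final = response  # each yield carries the full response so far
print(final)  # -> Hello, world!
```

The same loop body works for the real method; a web or CLI demo simply re-renders `response` on every iteration, which is exactly what the updated demos do with the streamed output.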

README_en.md

@@ -6,16 +6,19 @@ ChatGLM-6B is an open bilingual language model based on [General Language Model
 ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese QA and dialogue. The model is trained for about 1T tokens of Chinese and English corpus, supplemented by supervised fine-tuning, feedback bootstrap, and reinforcement learning with human feedback. With only about 6.2 billion parameters, the model is able to generate answers that are in line with human preference.
-## Hardware Requirements
-| **Quantization Level** | **GPU Memory** |
-| ---------------------------- | -------------------- |
-| FP16 (no quantization) | 13 GB |
-| INT8 | 10 GB |
-| INT4 | 6 GB |
+## Update
+**[2023/03/19]** Add streaming output function `stream_chat`, already applied in web and CLI demo
 ## Getting Started
+### Hardware Requirements
+| **Quantization Level** | **GPU Memory** |
+|------------------------|----------------|
+| FP16 (no quantization) | 13 GB |
+| INT8 | 10 GB |
+| INT4 | 6 GB |
 ### Environment Setup
 Install the requirements with pip: `pip install -r requirements.txt`. The recommended `transformers` version is `4.26.1`, but theoretically any version no lower than `4.23.1` is acceptable.
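The hardware table maps each quantization level to its approximate GPU memory footprint (FP16 ≈ 13 GB, INT8 ≈ 10 GB, INT4 ≈ 6 GB). As a hedged illustration of how one might use those numbers, the helper below picks the lowest-loss level that fits in the available memory; `pick_quantization` is a hypothetical utility written for this sketch, not part of the repo, and the thresholds simply mirror the table.

```python
# HYPOTHETICAL helper: choose a quantization level for ChatGLM-6B given
# free GPU memory in GB. Thresholds mirror the README's hardware table.

def pick_quantization(free_gb):
    """Return the lowest-loss mode that fits: FP16 > INT8 > INT4."""
    if free_gb >= 13:
        return "fp16"  # no quantization, ~13 GB
    if free_gb >= 10:
        return "int8"  # ~10 GB
    if free_gb >= 6:
        return "int4"  # ~6 GB
    raise MemoryError("ChatGLM-6B needs at least ~6 GB of GPU memory")

print(pick_quantization(16))  # -> fp16
print(pick_quantization(8))   # -> int4
```

In practice the level is applied when loading the model (the repo's README shows a quantized load along the lines of `.half().quantize(4).cuda()`); the helper only encodes the memory thresholds from the table.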
