mirror of https://github.com/THUDM/ChatGLM-6B
ChatGLM-6B is an open bilingual language model based on [General Language Model (GLM)](https://github.com/THUDM/GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level).
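As a rough sanity check of the memory figures above, the weight footprint at each precision can be estimated from the parameter count alone (a back-of-envelope sketch; the 6.2B figure comes from the paragraph above, while the per-precision byte counts and the mapping to the 6 GB claim are illustrative assumptions — activations and the KV cache add overhead on top of the weights):

```python
# Back-of-envelope weight-memory estimate for a 6.2B-parameter model.
# Weights only: activations and KV cache add overhead, which is why
# the README quotes ~6 GB for INT4 rather than the ~3 GB computed here.
PARAMS = 6.2e9

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(precision: str, params: float = PARAMS) -> float:
    """Approximate memory needed just to hold the weights, in GiB."""
    return params * BYTES_PER_PARAM[precision] / 1024**3

for p in BYTES_PER_PARAM:
    print(f"{p}: {weight_memory_gb(p):.1f} GiB")  # fp16 ≈ 11.5, int8 ≈ 5.8, int4 ≈ 2.9
```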
ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese QA and dialogue. The model is trained for about 1T tokens of Chinese and English corpus, supplemented by supervised fine-tuning, feedback bootstrap, and reinforcement learning with human feedback. With only about 6.2 billion parameters, the model is able to generate answers that are in line with human preference.
Try the [online demo](https://huggingface.co/spaces/ysharma/ChatGLM-6b_Gradio_Streaming) on Huggingface Spaces.
### Environment Setup
Install the requirements with pip: `pip install -r requirements.txt`. The recommended `transformers` version is `4.26.1`, but theoretically any version no lower than `4.23.1` should work.

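The version constraint above can also be enforced at runtime with a small pure-Python check (a sketch — the helper names are made up here, and for real projects `packaging.version` is the standard tool):

```python
MIN_VERSION = "4.23.1"   # minimum acceptable transformers version
RECOMMENDED = "4.26.1"   # version the README recommends

def version_tuple(v: str) -> tuple:
    """Turn a dotted version string like '4.26.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_supported(installed: str, minimum: str = MIN_VERSION) -> bool:
    """True if the installed version is no lower than the minimum."""
    return version_tuple(installed) >= version_tuple(minimum)
```

After importing the library, something like `is_supported(transformers.__version__)` would confirm the environment before loading the model.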
### Usage
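The usage snippet is truncated in this diff view; as a hedged sketch, the upstream pattern loads the checkpoint through `transformers` with `trust_remote_code=True` and calls the repo-provided `chat` method. The exact `chat` signature lives in the model's remote code, so treat it as an assumption:

```python
def chat_demo(query: str = "Hello", model_name: str = "THUDM/chatglm-6b"):
    """Sketch of the upstream usage pattern for ChatGLM-6B.

    Imports are deferred because loading the checkpoint downloads the
    weights and requires a CUDA GPU; the `chat` signature comes from the
    model's remote code and is an assumption here.
    """
    from transformers import AutoModel, AutoTokenizer  # heavy import

    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_name, trust_remote_code=True).half().cuda()
    model = model.eval()
    # On ~6 GB GPUs, quantize before moving to the GPU instead:
    # model = AutoModel.from_pretrained(model_name, trust_remote_code=True).half().quantize(4).cuda()
    response, history = model.chat(tokenizer, query, history=[])
    return response, history
```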