mirror of https://github.com/THUDM/ChatGLM-6B
Add chatglm-6b-int4-qe

parent 955d475079 · commit 6b13f660bc

@@ -13,7 +13,7 @@ ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese QA and dialogue
*Read this in [English](README_en.md).*
## Update
**[2023/03/23]** Add API deployment (thanks to [@LemonQu-GIT](https://github.com/LemonQu-GIT)). Add embedding-quantized model [ChatGLM-6B-INT4-QE](https://huggingface.co/THUDM/chatglm-6b-int4-qe)
**[2023/03/19]** Add streaming output interface `stream_chat`, now used in the web and CLI demos. Fix Chinese punctuation in output. Add quantized model [ChatGLM-6B-INT4](https://huggingface.co/THUDM/chatglm-6b-int4)
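
The API deployment mentioned in the 2023/03/23 entry can be exercised with any HTTP client once the server is running. A minimal client sketch, assuming the endpoint listens on `http://127.0.0.1:8000` and exchanges JSON with `prompt`, `history`, and `response` fields (the port and the field names are assumptions, not confirmed by this diff):

```python
import requests

# Hypothetical client for the API deployment; adjust the URL and field
# names to match the server actually started from the repo.
resp = requests.post(
    "http://127.0.0.1:8000",
    json={"prompt": "你好", "history": []},
)
resp.raise_for_status()
print(resp.json()["response"])
```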
@@ -133,6 +133,13 @@ model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
```python
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).half().cuda()
```
**[2023/03/24]** We further provide a model with quantized embeddings; its parameters take only 4.3 GB of GPU memory:
```python
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4-qe", trust_remote_code=True).half().cuda()
```
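
For context, loading one of these checkpoints is typically paired with the repo's tokenizer and `chat` interface. A minimal end-to-end sketch (the prompt is arbitrary; a CUDA GPU is assumed):

```python
from transformers import AutoModel, AutoTokenizer

# Load the embedding-quantized checkpoint and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4-qe", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4-qe", trust_remote_code=True).half().cuda()
model = model.eval()

# Single-turn chat; `history` carries context across turns
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
```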
### CPU Deployment
If you don't have GPU hardware, you can also run inference on the CPU, though it will be slower. Usage is as follows (requires roughly 32 GB of RAM):
```python
# As in the upstream README: load the full-precision model on the CPU (~32 GB of RAM)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float()
```

@@ -9,7 +9,7 @@ ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese QA and dialogue
Try the [online demo](https://huggingface.co/spaces/ysharma/ChatGLM-6b_Gradio_Streaming) on Huggingface Spaces.
## Update
**[2023/03/23]** Add API deployment, thanks to [@LemonQu-GIT](https://github.com/LemonQu-GIT). Add embedding-quantized model [ChatGLM-6B-INT4-QE](https://huggingface.co/THUDM/chatglm-6b-int4-qe)
**[2023/03/19]** Add streaming output function `stream_chat`, already applied to the web and CLI demos. Fix Chinese punctuation in output. Add quantized model [ChatGLM-6B-INT4](https://huggingface.co/THUDM/chatglm-6b-int4).
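
The `stream_chat` interface from the 2023/03/19 entry yields the reply incrementally as it is generated. A minimal sketch, assuming `model` and `tokenizer` are loaded as in the snippets in this README and that each yielded `response` is the full reply so far (the incremental-printing logic is illustrative):

```python
# Stream the reply instead of waiting for the full response
history = []
printed = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=history):
    print(response[printed:], end="", flush=True)  # print only the new suffix
    printed = len(response)
print()
```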
@@ -129,6 +129,11 @@ Model quantization brings a certain performance decline. After testing, ChatGLM-6B
```python
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).half().cuda()
```
**[2023/03/24]** We further provide an embedding-quantized model whose parameters take only 4.3 GB of GPU memory:
```python
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4-qe", trust_remote_code=True).half().cuda()
```
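
To sanity-check the 4.3 GB figure after loading on a GPU, PyTorch's allocator counters give a rough view (a sketch; activations during generation add to this number):

```python
import torch

# GPU memory held by tensors after loading the quantized model
print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
```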
### CPU Deployment
If your computer is not equipped with a GPU, you can also run inference on the CPU, but it will be slower (and takes about 32 GB of memory); see the sketch below.
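
Mirroring the Chinese section above, a minimal sketch of the corresponding CPU load (`.float()` keeps the weights in full precision, since half precision is generally not usable on CPU):

```python
# Sketch: load the full-precision model on the CPU (~32 GB of RAM)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float()
```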