Add examples to README.md
README.md
@@ -91,6 +91,32 @@ model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).ha
Model quantization incurs some performance loss. In our tests, ChatGLM-6B can still generate fluent, natural text under 4-bit quantization. Quantization schemes such as [GPT-Q](https://arxiv.org/abs/2210.17323) can further compress the quantization precision, or improve model performance at the same precision; we look forward to pull requests from the open-source community for this project.
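To make the precision loss concrete, here is a minimal sketch of symmetric 4-bit round-to-nearest weight quantization. The helper names are hypothetical; ChatGLM-6B's built-in quantization and GPT-Q use more sophisticated, error-compensating schemes than this illustration.

```python
def quantize_4bit(weights):
    """Map floats to integers in [-8, 7] using a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7.0
    qs = [max(-8, min(7, round(w / scale))) for w in weights]
    return qs, scale

def dequantize_4bit(qs, scale):
    """Recover approximate float weights from 4-bit integers."""
    return [q * scale for q in qs]

weights = [0.31, -0.87, 0.05, 0.66, -0.12]
qs, scale = quantize_4bit(weights)
restored = dequantize_4bit(qs, scale)
# The restored values are close to, but not exactly, the originals;
# this rounding error is the source of the performance loss mentioned above.
```

Each weight is stored in 4 bits instead of 16, cutting memory roughly 4x at the cost of at most half a quantization step of error per weight.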
<details><summary><b>ChatGLM-6B Examples</b></summary>
![](examples/self-introduction.png)

![](examples/blog-outline.png)

![](examples/ad-writing.png)

![](examples/ad-writing-2.png)

![](examples/comments-writing.png)

![](examples/email-writing-1.png)

![](examples/email-writing-2.png)

![](examples/information-extraction.png)

![](examples/role-play.png)

![](examples/sport.png)

![](examples/tour-guide.png)
</details>
## License
The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the ChatGLM-6B model weights must follow the [Model License](MODEL_LICENSE).