From ca00327581e19b0fad5c58bd3ca29f75f23cfeca Mon Sep 17 00:00:00 2001
From: duzx16
- 👋 Join our Slack and WeChat
+ 👋 加入我们的 Slack 和 WeChat
 ## 介绍
diff --git a/README_en.md b/README_en.md
index 632a22a..0d4f1ac 100644
--- a/README_en.md
+++ b/README_en.md
@@ -1,5 +1,13 @@
 # ChatGLM-6B
+
+
+   🌐 Blog • 🤗 HF Repo • 🐦 Twitter • 📃 [GLM@ACL 22] [GitHub] • 📃 [GLM-130B@ICLR 23] [GitHub]
+
+    👋 Join our Slack and WeChat
+
+
 ## Introduction

 ChatGLM-6B is an open bilingual language model based on [General Language Model (GLM)](https://github.com/THUDM/GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level).
diff --git a/resources/WECHAT.md b/resources/WECHAT.md
index ffe3ec5..c9ee867 100644
--- a/resources/WECHAT.md
+++ b/resources/WECHAT.md
@@ -1,3 +1,7 @@
-
+扫码关注公众号，加入「ChatGLM交流群」
+Scan the QR code to follow the official account and join the “ChatGLM Discussion Group”
+
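The README_en.md hunk above mentions deploying on consumer GPUs via INT4 quantization (about 6GB of GPU memory). For context, here is a minimal sketch of that load-and-quantize path, assuming the Hugging Face `transformers` package and the `THUDM/chatglm-6b` checkpoint, whose bundled model code exposes `quantize()` and `chat()` when loaded with `trust_remote_code=True`:

```python
# Sketch: load ChatGLM-6B and quantize to INT4 for low-memory GPU deployment.
# Assumes transformers is installed and the THUDM/chatglm-6b checkpoint is available;
# quantize() and chat() come from the model's custom code (trust_remote_code=True).
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# Load in half precision, then quantize weights to INT4 so the model fits in roughly 6GB of GPU memory.
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().quantize(4).cuda()
model = model.eval()

# One round of chat; `history` carries previous (query, response) pairs for multi-turn dialogue.
response, history = model.chat(tokenizer, "Hello", history=[])
print(response)
```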