mirror of https://github.com/THUDM/ChatGLM2-6B
				
				
				
Merge branch 'main' of github.com:THUDM/ChatGLM2-6B
commit 5486f4f170
@@ -34,6 +34,9 @@ The ChatGLM2-6B open-source model aims to advance large model technology together with the open-source community
 * [fastllm](https://github.com/ztxz16/fastllm/): A cross-platform accelerated inference solution; batched inference on a single GPU reaches 10,000+ tokens per second, and it runs in real time on phones with as little as 3 GB of memory (about 4~5 tokens/s on a Snapdragon 865)
 * [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): A quantized CPU inference solution similar to llama.cpp, enabling real-time chat on a MacBook
 
+Example projects supporting online training of ChatGLM-6B and related applications:
+* [ChatGLM2-6B deployment and fine-tuning tutorial](https://www.heywhale.com/mw/project/64984a7b72ebe240516ae79c)
+
 ## Evaluation Results
 We selected several typical Chinese and English datasets for evaluation. Below are the results of the ChatGLM2-6B model on [MMLU](https://github.com/hendrycks/test) (English), [C-Eval](https://cevalbenchmark.com/static/leaderboard.html) (Chinese), [GSM8K](https://github.com/openai/grade-school-math) (math), and [BBH](https://github.com/suzgunmirac/BIG-Bench-Hard) (English). A script for evaluating on C-Eval is provided in [evaluation](./evaluation/README.md).
 
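The hunk above adds a pointer to a deployment and fine-tuning tutorial. For context, here is a minimal sketch of the standard transformers-based loading path that such tutorials generally start from; the hub id `THUDM/chatglm2-6b` and the availability of a CUDA GPU with roughly 13 GB of memory for the FP16 weights are assumptions, not part of the diff.

```python
# Minimal sketch, assuming the "THUDM/chatglm2-6b" Hugging Face weights and a
# CUDA GPU with enough memory for the FP16 model (roughly 13 GB).
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# chat() returns the reply together with the updated conversation history,
# which can be passed back in to continue a multi-turn dialogue.
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
response, history = model.chat(tokenizer, "What does ChatGLM2-6B improve over ChatGLM-6B?", history=history)
print(response)
```

Quantized deployments such as chatglm.cpp or fastllm trade some accuracy for much lower memory use and faster CPU or mobile inference; the FP16 path above is only the simplest starting point.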
@@ -26,6 +26,9 @@ Although the model strives to ensure the compliance and accuracy of data at each
 Open source projects that accelerate ChatGLM2:
 * [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): Real-time CPU inference on a MacBook accelerated by quantization, similar to llama.cpp.
 
+Example projects supporting online training of ChatGLM-6B and related applications:
+* [ChatGLM-6B deployment and fine-tuning tutorial](https://www.heywhale.com/mw/project/64984a7b72ebe240516ae79c)
+
 ## Evaluation
 We selected some typical Chinese and English datasets for evaluation. Below are the evaluation results of the ChatGLM2-6B model on [MMLU](https://github.com/hendrycks/test) (English), [C-Eval](https://cevalbenchmark.com/static/leaderboard.html) (Chinese), [GSM8K](https://github.com/openai/grade-school-math) (Mathematics), [BBH](https://github.com/suzgunmirac/BIG-Bench-Hard) (English).
 
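Both hunks leave the Evaluation section pointing at a C-Eval script under ./evaluation. That script is not reproduced here; the sketch below is only a hypothetical illustration of how multiple-choice benchmarks such as C-Eval are commonly scored, by comparing the model's next-token likelihood for each option letter. The example question, the "答案:" prompt suffix, and the `pick_choice` helper are made up for illustration.

```python
# Hypothetical illustration only -- not the script provided in ./evaluation.
# Scores each option letter by the model's next-token logit after the prompt.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda().eval()


def pick_choice(question, choices):
    # Build a C-Eval-style prompt: question, lettered options, then an answer cue.
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in choices.items()) + "\n答案:"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(input_ids=input_ids).logits[0, -1]  # next-token distribution
    # Rough proxy: compare the logit of the first sentencepiece token of each letter.
    scores = {
        letter: logits[tokenizer.encode(letter, add_special_tokens=False)[0]].item()
        for letter in choices
    }
    return max(scores, key=scores.get)


# Made-up placeholder item, not real C-Eval data.
print(pick_choice("1 + 1 = ?", {"A": "1", "B": "2", "C": "3", "D": "4"}))
```

A full evaluation harness would additionally handle few-shot prompting and answer extraction from generated text; those details are omitted from this sketch.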
duzx16