[evaluation] improvement on evaluation (#3862)

* fix a bug when the config file contains one category but the answer file doesn't contain that category

* fix Chinese prompt file

* support gpt-3.5-turbo and gpt-4 evaluation

* polish and update README

* resolve pr comments

---------

Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
Yuanchen 2023-05-30 11:48:41 +08:00 committed by GitHub
parent b0474878bf
commit 2506e275b8
7 changed files with 335 additions and 142 deletions


@ -4,7 +4,9 @@ In this directory, we introduce how you can evaluate your model with our pipelin
evaluation of Chinese capability and the one for English capability is under preparation.
## Installation
To start model evaluation, you need to install the required packages listed in `requirements.txt` under the `evaluate` folder.
```shell
pip install -r requirements.txt
```
@ -12,84 +14,92 @@ pip install -r requirements.txt
## Evaluation Pipeline
The whole evaluation pipeline consists of two methods:
1. `GPT Evaluation`: evaluates model predictions using GPT models.
* Compare the performance of two different models (battle).
* Rate the model according to pre-defined metrics using prompting design.
2. `Automatic Evaluation`: evaluates model predictions using automatic metrics.
### Evaluation Category
Our evaluation pipeline examines the model's capability using 10 categories of questions. The following table introduces each category:
| Evaluation Category | <center>Description</center> |
| :-----------------: | :----------------------------------------------------------- |
| Brainstorming | Models are asked to generate a range of creative and diverse ideas according to the question. The capability of creativity is required. |
| Chat | Models are asked to continue a multi-round dialogue given the roles involved. The capability of understanding, memorizing previous rounds of the dialogue and answering according to the persona provided is required. |
| Classification | Models are asked to do classification tasks. The capability of accurate classification is required. |
| Closed QA | Models are asked to answer a closed QA question. The capability of answering questions with limited scope (such as single/multiple choice question) is required. |
| Extraction | Models are asked to extract information from a given material. The capability of extracting required information is required. |
| Generation | Models are asked to generate an email, letter, article, etc. The capability of generating high-quality texts in a human-written way is required. |
| Open QA | Models are asked to answer an open QA question (without context provided). The capability of answering questions with the models' own knowledge base is required. |
| Roleplay | Models are asked to play the role provided. The capability of engaging in the scenario and effectively interacting with the user is required. |
| Rewriting | Models are asked to do rewriting tasks such as translation and grammar correction. The capability of rewriting according to different instructions is required. |
| Summarization | Models are asked to summarize the given paragraph or passage. The capability of summarization is required. |
To better understand each evaluation category, here are some example questions.
| Evaluation Category | <center>Chinese Example</center> | <center>English Example</center> |
| :-----------------: | :----------------------------------------------------------- | :----------------------------------------------------------- |
| Brainstorming | **Example 1:**<br/>请介绍一下人工智能的多个领域。<br/><br/>**Example 2:**<br/>请给出管理家庭财务的3个小技巧。<br/> | **Example 1:**<br/>How can I improve my memory? Any useful techniques you can suggest?<br/><br/>**Example 2:**<br/>What are some ways to increase productivity while working from home? |
| Chat | **Example 1:**<br/>基于以下角色信息完成一段对话。小张是一名新手爱好者,对养鸡有浓厚的兴趣。老李是一名有丰富经验的养鸡大师。<br/>小张:您好,老李,我最近开始对养鸡感兴趣了,想请教您一些问题。 <br/>老李:你好,小张,我很乐意帮助你。你想问些什么? <br/>小张:我想知道如何确定鸡的品种和性别? <br/>老李:确切的品种可以通过鸡的外貌特征来确定,而性别一般是通过鸡卵的大小和形状来判断。还有什么问题吗?<br/> 小张:<br/>**Example 2:**<br/>基于以下角色信息完成一段对话。小明是一名医生,一位老年病患者想要停药,但他对病情有所忽视并有担忧;王叔叔是老年病患者的儿子,希望能够听取医生的建议。<br/>小明:你好,王叔叔,我了解你想要让你父亲停药。<br/>王叔叔:是的,我父亲已经吃了那么久的药,我担心药物对他的身体会有副作用。<br/>小明: | **Example 1:**<br/>Complete a conversation based on the following character information. Amy is a 30-year-old chef who runs her own restaurant. Jack is a food blogger who specializes in reviewing local restaurants.<br/>Amy: Hi Jack, I heard that you're a food blogger. Nice to meet you. <br/>Jack: Hi Amy, yes I am. Your restaurant has been receiving a lot of good reviews lately. <br/>Amy: Yes, we use only fresh and quality ingredients, and every dish is carefully crafted. <br/>Jack: <br/>**Example 2:**<br/>Complete a dialogue based on the following role information. A: Elementary student B: Teacher<br/>B: Good morning, Student A. Today we're going to learn about addition and subtraction.<br/>A: Teacher, I already know this very well. Why do I need to learn it again?<br/>B: |
| Classification | **Example 1:**<br/>新闻标题:今日立夏,有一上联,立夏万物并秀,下联怎么对?<br/>请根据以上新闻标题判断新闻所属的分类,你需要从文化,娱乐,体育,财经,房产,教育,科技,旅游,游戏,军事这十类中选择一个答案。<br/><br/> **Example 2:**<br/>新闻标题:赵丽颖很久没有登上微博热搜了,但你们别急,她只是在憋大招而已。<br/>请根据新闻标题判断新闻所属的分类,你需要从文化,娱乐,体育,财经,房产,教育,科技,旅游,游戏,军事这十类中选择一个答案。 | **Example 1:**<br/>Title: Fighting for Love (2020) <br/>Description: Jasmine got obsessed with a man and now he's obsessed with her. Steamy nights, kisses and rules being broken awaits them. She turned his whole world upside down and now he's doing it to hers. In this free fall, can they survive each other's love?<br/>Based on the above information, determine which genre the work of art belongs to. You can only choose one from "sport", "horror", "drama", "history", "romance", "biography", "science fiction", "comedy", "animation", "documentary", "music" and "news".<br/><br/>**Example 2:** <br/>Title: Summer Breeze: The Isley Brothers Greatest Hits Live (2005)<br/>Description: Filmed in the US in 2005 and captured in excellent form led by Ron Isley's vocals and Ernie Isley's hard edged guitar. Virtually every track is a hit including Shout, Who's That Lady, Twist And Shout, Summer Breeze and Harvest For The World.<br/>Based on the above information, determine which genre the work of art belongs to. You can only choose one from "sport", "horror", "drama", "history", "romance", "biography", "science fiction", "comedy", "animation", "documentary", "music" and "news". |
| Closed QA | **Example 1:**<br/>请从以下选项中选择正确答案。以下哪个是世界上最高山峰? <br/>A. 长城 <br/>B. 泰山 <br/>C. 珠穆朗玛峰 <br/>D. 黄山<br/><br/>**Example 2:**<br/>请从以下选项中选择一个最佳答案回答下面的问题。问题:非洲最高的山是哪座山?<br/> 选项: <br/>A. 麦金利山 <br/>B. 喜马拉雅山 <br/>C. 乞力马扎罗山 | **Example 1:**<br/>Which of the following options is NOT a primary color?<br/>(a) yellow<br/>(b) blue<br/>(c) orange<br/>(d) red<br/><br/>**Example 2:**<br/>Choose the correct option to complete the following sentence: "Harry Potter and the Chamber of Secrets" is the ________ book in the Harry Potter series.<br/>(A) first<br/>(B) second<br/>(C) third<br/>(D) fourth |
| Extraction | **Example 1:**<br/>根据以下新闻文本提取新闻报道时间例如回答时按照格式“新闻报道时间2007年8月10日”<br/>新闻文本如下2007-4-7中新网4月7日电据中国消防在线消息4月4日晚上7时30分左右湖南长潭高速公路上发生一起6车连环相撞失火事故。长株潭三地消防部门共出动消防车21台警力100余人。经过消防官兵近2个小时奋力扑救大火被成功扑灭。据初步调查有1人在此次事故中死亡。<br/><br/>**Example 2:**<br/>根据以下新闻文本提取新闻报道时间例如回答时按照格式“新闻报道时间2007年8月10日”<br/>新闻文本如下2014年1月15日据外媒《俄罗斯报》报道称位于北半球的澳大利亚现在正处于炎热的夏季而近日也到了高温酷暑的时候当地时间1月14日晚澳大利亚南部一夜间发生至少250起火灾。受炎热天气及雷雨天气影响澳大利亚南部一夜间发生至少250起火灾灾情多集中在维多利亚州。火灾发生后救援人员立即展开救灾行动。目前大部分起火点火势已被控制。 | **Example 1:**<br/>Ernest Hemingway, an American literary giant known for his spare and direct writing style, has penned timeless works such as 'The Old Man and the Sea', 'For Whom the Bell Tolls', and 'A Farewell to Arms', which have made a profound impact on the literary world and continue to be widely read and admired today.<br/>Extract the name of the author mentioned above.<br/><br/>**Example 2:**<br/>In the epic fantasy series 'A Song of Ice and Fire', George R.R. Martin weaves a complex web of political intrigue, war, and magic across the fictional continents of Westeros and Essos. Martin's richly developed characters and intricate plotlines have captivated readers worldwide, much like his other acclaimed works such as 'A Clash of Kings' and 'A Storm of Swords'.<br/>Extract the name of the author in the above material. |
| Generation | **Example 1:**<br/>请撰写一篇文章,介绍如何通过改善生活习惯来预防疾病和延长寿命。<br/><br/>**Example 2:**<br/>请根据以下情节撰写一篇短篇小说:一名年轻人被困在一个荒岛上,他必须想办法生存下去直到被救援。但他很快发现自己并不孤单。 | **Example 1:**<br/>Write a descriptive paragraph about an island to relax and unwind, including details about the location and atmosphere.<br/><br/>**Example 2:**<br/>Can you help me write a persuasive email to my colleagues encouraging them to participate in a charitable fundraising event? |
| Open QA | **Example 1:**<br/>请问万有引力定律由谁提出的?<br/><br/>**Example 2:**<br/>哪些国家参与了第一次世界大战? | **Example 1:**<br/>What are the four basic tastes of the human palate?<br/><br/>**Example 2:**<br/>Who painted The Scream? |
| Rewriting | **Example 1:**<br/>请将以下句子改为正确的语序。 <br/>生日快乐你祝他了吗?<br/><br/>**Example 2:**<br/>将以下文本翻译成英语:<br/>“这个周末我要去海边玩” | **Example 1:**<br/>Please translate the following sentences, which are a mixture of Chinese and English, into full English. <br/>我需要买一些healthy snacks比如nuts和dried fruits作为我的office的午餐.<br/><br/>**Example 2:**<br/>Please rewrite the sentence using an inverted sentence structure.<br/>We won't begin our journey until the sun sets. |
| Roleplay | **Example 1:**<br/>我想让你担任Android开发工程师面试官。我将成为候选人您将向我询问Android开发工程师职位的面试问题。我希望你只作为面试官回答。不要一次写出所有的问题。我希望你只对我进行采访。问我问题等待我的回答。不要写解释。像面试官一样一个一个问我等我回答。我的第一句话是“面试官你好”。 <br/><br/>**Example 2:**<br/>我想让你扮演讲故事的角色。你会想出引人入胜、富有想象力和吸引观众的有趣故事。它可以是童话故事、教育故事或任何其他类型的有潜力的故事以吸引人们的注意力和想象力。根据目标受众,您可以为您的讲故事环节选择特定的主题或主题,例如,如果是儿童,那么您可以谈论动物;如果是成人,那么基于历史的故事可能会更好地吸引他们等。我的第一个请求是我需要一个关于毅力的有趣故事。 | **Example 1:**<br/>Assume the role of a marriage counselor. Develop a series of communication exercises for a couple who are experiencing difficulties in their relationship. These exercises should promote active listening, empathy, and effective expression of emotions. Your first assignment is to provide a set of three exercises that focus on resolving conflicts and rebuilding trust. <br/><br/>**Example 2:**<br/>I want you to act as a travel agent. I will tell you my desired destination, travel dates, and budget, and it will be your job to suggest the best travel itinerary for me. Your recommendations should include the best transportation options, hotel accommodations, and any popular tourist attractions nearby. My first request is "I want to plan a trip to Tokyo for a week, with a budget of $2000. I want to explore the culture and food of the city." |
| Summarization | **Example 1:**<br/>请简要总结概括以下段落材料。<br/>当地时间29日泰国卫生部通报新增143名新冠肺炎确诊病例和1名死亡病例。截止到当地时间29日上午泰国累计确诊病例1388例其中泰国籍1172例非泰国籍216例。死亡病例累计7例。原题为《泰国新增143例新冠肺炎确诊病例累计确诊1388例》<br/><br/> **Example 2:**<br/>请简要总结概括以下段落材料。<br/>近期参与京雄高铁站站房建设的中铁十二局因在施工过程中存在环境违法行为被雄安新区公开通报。通报发出后引起社会广泛关注。近日人民网记者从雄安新区相关部门及中铁十二局获悉新区有关部门已经集中约谈了中铁十二局等24个参与雄安建设的项目单位。对于约谈内容和结果中铁十二局有关宣传负责人回应“具体内容不清楚最好找雄安新区相关部门了解情况。”新区有关部门负责人表示此前涉及的环境违法行为中铁十二局已基本整改到位但约谈内容和结果暂不公开接下来将按部就班推进环境治理工作。原题为《雄安新区中铁十二局涉环境违法已基本整改到位》 | **Example 1:**<br/>The 21 year-old-woman was treated by paramedics after the kitchen fire in Botfield Road in Shifnal, Shropshire. West Mercia Police said it is treating Wednesday morning's incident as arson and are appealing for any witnesses to contact them.The 50-year-old man has been arrested on suspicion of arson with intent to endanger life. For more on this and other stories from Shropshire.<br/>Please briefly summarize the above material within 20 words.<br/><br/>**Example 2:**<br/>South Wales Police were called to a property in Heolgerrig, Merthyr Tydfil, at about 13:40 BST on Sunday. The child was airlifted to Prince Charles Hospital but died shortly afterwards. Police are investigating the circumstances surrounding the incident and have appealed for witnesses. The girl's family are being supported by specially trained officers.<br/>Please briefly summarize the above material within 20 words. |
### Evaluation Metrics
#### GPT Evaluation
GPT evaluation uses GPT models to evaluate the predictions of different models, and different pre-defined evaluation metrics are applied to different categories. The following table shows the 11 pre-defined evaluation metrics in Chinese:
| Evaluation Metric | <center>Prompt Words</center> | <center>CoT (Chain-of-Thought)</center> |
| :-------------------: | :----------------------------------------------------------- | :----------------------------------------------------------- |
| Language organization | 语言组织(1-5):答案语言是否流畅、连贯,使用正确的语法,具有一定逻辑性,使用恰当的连接词、过渡词等等。 | 1. 阅读答案,并检查是否有语法错误、用词不当或其他显著的错误。<br/> 2.检查答案是否具有逻辑性,能够按照合理的顺序传达信息并且能够自圆其说<br/> 3. 确定答案是否与问题或主题相关,并且能够传达清晰的信息。<br/> 4. 检查答案是否连贯,是否使用适当的转换和过渡来保持句子和段落之间的连贯性。<br/> 5. 检查答案是否具有明确的结构和组织方式,使得读者可以轻松理解信息的层次和结构。<br/> 6. 根据以上因素综合评估答案的语言组织并给出一个1到5的分数其中5表示语言组织非常好而1表示语言组织非常差。 |
| Relevance | 切题(1-5):答案内容是否切题,不答非所问,并且严格遵照题目要求。 | 1. 阅读题目,确定题目所问的问题是什么,以及需要回答哪些方面的问题。<br/> 2. 阅读答案,确认答案是否直接回答了题目所问的问题。<br/> 3. 检查答案是否严格遵照了题目的要求,包括答题方式、答题长度、答题格式等等。<br/> 4. 根据以上因素综合评估答案的切题程度并给出一个1到5的分数其中5表示答案非常切题而1表示答案完全没有切题。 |
| Creativity | 创意性(1-5):某些头脑风暴问题可能需要答案具有创意,提出新的思路。 | 1. 仔细阅读所提供的头脑风暴问题,确保你理解问题的要点和背景。<br/> 2. 根据你的知识和经验,判断所提供的答案是否可行。如果答案不可行,则创意性评分可能会受到影响。<br/> 3. 考虑答案中是否包含新颖的想法或独特的思路。答案可能与已知的解决方案有所重叠,但仍然可以被认为是有创意的,只要它提供了新的角度或方法来解决问题。<br/> 4. 根据答案的创意性给出一个1到5的评分。如果答案缺乏创意则应给出一个较低的评分。如果答案具有创意并提供了新的思路应给出一个较高的评分。 |
| Practicality | 实用性(1-5):某些头脑风暴问题可能需要答案提出实用的建议或解决方法。 | 1. 仔细阅读所提供的头脑风暴问题,确保你理解问题的要点和背景。<br/> 2. 根据你的知识和经验,判断所提供的答案是否可行。如果答案不可行,则实用性评分可能会受到影响。<br/> 3. 考虑答案中提出的建议或解决方法是否实用并可行。答案可能看起来很好,但如果无法实现或应用,则实用性评分可能会受到影响。<br/> 4. 根据答案的实用性给出一个1到5的评分。如果答案缺乏实用性则应给出一个较低的评分。如果答案提出了实用的建议或解决方法并且可以很好地解决问题则应给出一个较高的评分。 |
| Correctness | 正确性(1-5):答案应该符合常识、生活实际等等 | 1. 仔细阅读所提供的头脑风暴问题,确保你理解问题的要点和背景。<br/> 2. 根据你的知识和经验,判断所提供的答案是否可行。如果答案不可行,则正确性评分可能会受到影响。<br/> 3. 考虑答案中所提供的信息是否正确、符合常识、生活实际等等。如果答案中存在明显的错误或不合理之处,则正确性评分可能会受到影响。<br/> 4. 根据答案的正确性给出一个1到5的评分。如果答案存在明显的错误或不合理之处则应给出一个较低的评分。如果答案正确、符合常识、生活实际等等则应给出一个较高的评分。 |
| Naturalness | 自然(1-5):答案是否自然,并且符合问题给定的身份。 | 1. 阅读题目,确定题目提供的身份信息。<br/> 2. 检查答案内容是否符合题目给定的身份。<br/> 3. 根据以上因素对该回答的自然性进行打分分数从1到5其中1表示不自然5表示非常自然并符合问题给定的身份。 |
| Engagingness | 参与感(1-5):答案是否对前面的对话内容做出了恰当的反应,是否理解对话的语境和背景。 | 1. 阅读题目,确定对话的语境和背景。<br/> 2. 检查答案是否充分理解对话的语境和背景,能否自然地融入到对话中而不显得突兀。<br/> 3. 根据以上因素对该回答的参与感进行打分分数从1到5其中1表示没有参与感5表示非常有参与感并且恰当地理解了对话的语境和背景。 |
| Reasonableness | 合理性(1-5):答案是否能够与前面的对话内容形成逻辑上的衔接,是否符合常理,能否在这个上下文中合理存在。 | 1. 阅读题目,确定对话的主题以及问题期望的回答方向。<br/> 2. 判断答案是否能够与前面的对话内容形成逻辑上的衔接,是否符合常理,能否在这个上下文中合理存在。<br/> 3. 根据以上因素对该回答的合理性进行打分分数从1到5其中1表示不合理5表示非常合理并且能够与前面的对话内容形成逻辑上的衔接并符合常理。 |
| Diversity | 多样性(1-5):答案使用语言是否优美,具有有一定的创造性和想象力。然而,回答也应该保持合理和适度,不要过于夸张或离题。 | 1. 仔细阅读整个回答,确保完全理解回答所表达的内容和主题。<br/> 2. 在阅读回答的同时,注意语言的质量,例如措辞是否正确,语言是否生动等。<br/> 3. 检查回答的创造性和想象力,看看回答是否能够吸引人阅读下去。<br/> 4. 检查回答的合理性和适度看看回答是否夸张或离题。5. 将多样性的评分打分在1到5之间5分表示回答的质量很好能够吸引人阅读1分表示回答的内容生硬或者有离题的问题。 |
| Fidelity | 保真度(1-5):答案是否能够严格遵守角色的设定回答给定的请求。 | 1. 仔细阅读问题,了解角色在问题中的设定和表现,包括职业、背景、观点、性格等方面。<br/> 2. 阅读题目的请求,确认回答请求时需要注意的细节。<br/> 3. 对比提供的回答与该角色的设定,评估回答是否能够严格遵守角色的设定。<br/> 4. 结合以上评估结果给出保真度的评分范围从1到5分其中1分表示回答与角色设定完全不符5分表示回答完全符合角色设定且满足给定请求。 |
| Conciseness | 简明扼要(1-5):答案是否简明扼要,没有冗余内容。 | 1. 阅读题目,提取出材料的重点。<br/> 2. 阅读该总结,并注意其中的主要观点和信息。<br/> 3. 评估总结的长度。一个简明扼要的总结通常应该在几句话或几段文字内传达关键信息,而不是冗长的段落或文章。<br/> 4. 检查总结是否包含与主要观点无关的信息或冗余信息。<br/> 5. 确定总结涵盖了材料中的关键信息,并且没有忽略任何重要细节。<br/> 6. 给总结打出1-5的分数其中5表示总结简明扼要没有冗余内容而1表示总结冗长或包含不必要的信息难以理解或记忆。根据您的判断打出适当的得分。 |
GPT models evaluate the quality of model predictions based on the given prompt words and give a score between 1 and 5.
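For reference, here is a condensed sketch of one such evaluation request, mirroring the `openai.ChatCompletion` call this commit adds in `gpt_evaluate.py` (pre-v1 `openai` SDK; `rate_answer` is an illustrative wrapper, not a function in the repository):
```python
import openai  # pre-v1 SDK; requires openai.api_key to be set

# Condensed sketch of one evaluation request, mirroring gpt_evaluate.py.
# `prompt` is one entry of the evaluation prompt file described below.
def rate_answer(prompt: dict, question: str, answer: str, metric: str,
                model: str = "gpt-3.5-turbo") -> str:
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{
            "role": "user",
            "content": prompt["prompt"].format(question=question,
                                               answer=answer,
                                               metric=prompt["metrics"][metric],
                                               steps=prompt["CoT"][metric]),
        }],
        temperature=0,  # deterministic scoring
        max_tokens=2048,
    )
    # The 1-5 score is parsed from this free-text response afterwards.
    return response["choices"][0]["message"]["content"]
```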
#### Automatic Evaluation
Automated metrics evaluate the capability of a model by comparing model predictions with reference answers.
There are two ways to obtain reference answers:
* For instructions that come from human-designed problems, such as roleplay and chat, the reference answers are generated by GPT-3.5.
* For instructions related to classic NLP problems, such as classification, extraction and summarization, the reference answers are collected from open-source datasets with target answers.
There are 5 types of automatic evaluation metrics listed in the table below:
| Automatic Evaluation Metric | <center>Description</center> |
| :---------------------------------: | :----------------------------------------------------------- |
| BLEU-n | Measure the accuracy between prediction and reference.<br/> BLEU-1 (unigram) evaluates accuracy at the word level.<br/> BLEU-n (n-gram) evaluates fluency at the sentence level. |
| ROUGE | ROUGE-N measures the number of matching n-grams between prediction and reference. <br/> ROUGE-L measures the longest common subsequence (LCS) between prediction and reference. |
| Distinct | Measure the diversity of generation text by counting the unique n-grams. |
| BERTScore | Measure the semantic similarity between tokens of predictions and references with BERT. |
| Precision<br/> Recall<br/> F1 Score | Measure the overlap between prediction and reference (designed for the classification and extraction categories). |
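As a concrete illustration of the diversity metric above, here is a minimal, self-contained sketch of the Distinct-n idea; the whitespace tokenization and the `distinct_n` helper are simplifications for illustration, not the pipeline's own implementation:
```python
from typing import List

def distinct_n(texts: List[str], n: int = 2) -> float:
    """Ratio of unique n-grams to all n-grams in the generated texts."""
    ngrams = []
    for text in texts:
        tokens = text.split()  # naive whitespace tokenization
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# Repetitive generations score lower than diverse ones.
print(distinct_n(["the cat sat on the mat", "the cat sat on the rug"]))  # 0.6
```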
## Evaluation Process
### Data Format
#### Target Answers / Predictions
A JSON file contains one list. Each element in the list is a target answer / prediction record for one instruction / question.
An element should have the following fields:
@ -103,7 +113,8 @@ An element should have the following fields:
If the `input` has a target answer, the `output` can be empty. Otherwise, we generate answers from GPT-3.5 as the `output`, and the `target` field is empty.
Example:
```json
[
    {
        "category": "brainstorming",
@ -138,7 +149,8 @@ An element should have the following fields:
* `id` (int, compulsory): The ID of the instruction / question.
Example:
```json
[
    {
        "category": "brainstorming",
@ -159,34 +171,79 @@ Example:
]
```
### Prompt
#### Battle Prompt
The following is the Chinese battle prompt. In the battle prompt, the question and answers from two different models are fed into the prompt template. You can find an example battle prompt file in `prompt/battle_prompt`.
```json
{
    "id": 1,
    "system_prompt": "你是一个检查回答质量的好助手。",
    "prompt_template": "[问题]\n{question}\n\n[1号AI助手的答案]\n{answer_1}\n\n[1号AI助手答案终止]\n\n[2号AI助手的答案]\n{answer_2}\n\n[2号AI助手答案终止]\n\n[要求]\n{prompt}\n\n",
    "prompt": "我们需要你评价这两个AI助手回答的性能。\n请对他们的回答的有用性、相关性、准确性、详细程度进行评分。每个AI助手都会得到一个1到10分的总分分数越高表示整体表现越好。\n请首先输出一行该行只包含两个数值分别表示1号和2号AI助手的分数。这两个分数之间要有一个空格。在随后的一行中请对你的评价作出全面的解释避免任何潜在的偏见并确保AI助手回答的顺序不会影响您的判断。"
}
```
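To make the template concrete, the following is a hedged sketch of how such a battle prompt could be assembled into chat messages; the field names match the JSON above, while `build_battle_messages` itself is illustrative and not part of the repository:
```python
from typing import Dict, List

def build_battle_messages(battle_prompt: Dict[str, str], question: str,
                          answer_1: str, answer_2: str) -> List[Dict[str, str]]:
    # Fill the template with the question and the two models' answers.
    user_prompt = battle_prompt["prompt_template"].format(
        question=question,
        answer_1=answer_1,
        answer_2=answer_2,
        prompt=battle_prompt["prompt"],
    )
    # The system prompt and the filled template form one chat request.
    return [
        {"role": "system", "content": battle_prompt["system_prompt"]},
        {"role": "user", "content": user_prompt},
    ]
```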
#### Evaluation Prompt
The following is an example of a Chinese GPT evaluation prompt. In an evaluation prompt, you should define your metrics in `metrics` and provide the CoT (Chain-of-Thought) steps in `CoT`. You can find an example evaluation prompt file in `prompt/evaluation_prompt`.
```json
{
    "brainstorming": {
        "id": 1,
        "category": "brainstorming",
        "metrics": {
            "language organization": "语言组织(1-5):答案语言是否流畅、连贯,使用正确的语法,具有一定逻辑性,使用恰当的连接词、过渡词等等。"
        },
        "CoT": {
            "language organization": "1. 阅读答案,并检查是否有语法错误、用词不当或其他显著的错误。\n2. 检查答案是否具有逻辑性,能够按照合理的顺序传达信息并且能够自圆其说。\n3. 确定答案是否与问题或主题相关,并且能够传达清晰的信息。\n4. 检查答案是否连贯,是否使用适当的转换和过渡来保持句子和段落之间的连贯性。\n5. 检查答案是否具有明确的结构和组织方式,使得读者可以轻松理解信息的层次和结构。\n6. 根据以上因素综合评估答案的语言组织并给出一个1到5的分数其中5表示语言组织非常好而1表示语言组织非常差。\n\n语言组织"
        },
        "prompt": "你是一个好助手。请你为下面“头脑风暴”问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    }
}
```
`"metrics"`: the metrics that can be used in GPT evaluation. This field determines which metrics can be added to your config file.
`"CoT"`: evaluation steps you prompt to GPT models for each metric defined in `"metrics"`.
### Evaluation
#### Configuration
The following is an example of a Chinese config file. The configuration file controls how the pipeline evaluates the model. You need to specify the GPT evaluation metrics and the automatic metrics in the `GPT` and `Metrics` keys. You can find an example Chinese config file in `config`.
```json
{
    "language": "cn",
    "category": {
        "brainstorming": {
            "GPT": ["relevance", "creativity", "practicality", "correctness"],
            "Metrics": ["Distinct"]
        },
        "chat": {
            "GPT": ["relevance", "naturalness", "engagingness", "reasonableness"],
            "Metrics": ["Distinct"]
        }
    }
}
```
`"language"`: evaluate the model capability in which language, we only support Chinese `"cn"` for now.
`"category"`: evaluate the model capability in which category/categories.
`"GPT-3.5"`: config metrics for GPT-3.5 evaluation.
`"Metrics"`: config metrics for automatic metrics evaluation.
`"language"`: the language used to evaluate the model capability. We only support Chinese `"cn"` for now.
`"category"`: the category/categories needed to evaluate the model capability.
`"GPT"`: the metrics you want to use for GPT evaluation.
`"Metrics"`: the metrics you want to use for automatic metrics evaluation.
You can create your config file based on the available settings listed in the following table.
| "category" | "GPT-3.5" | "Metrics" |
|:----------------:|:-----------------------:|:-----------:|
| "category" | "GPT" | "Metrics" |
| :--------------: | :---------------------: | :---------: |
| "brainstorming" | "language organization" | "BLEU" |
| "chat" | "relevance" | "ROUGE" |
| "classification" | "creativity" | "Distinct" |
@ -194,16 +251,19 @@ You can create your config file based on available settings listed in following
| "extraction" | "correctness" | "Precision" |
| "generation" | "naturalness" | "Recall" |
| "open_qa" | "engagingness" | "F1 score" |
| "rewriting" | "reasonableness" |
| "roleplay" | "diversity" |
| "summarization" | "fidelity" |
| | "conciseness" |
| "rewriting" | "reasonableness" | |
| "roleplay" | "diversity" | |
| "summarization" | "fidelity" | |
| | "conciseness" | |
> **NOTE:** For categories which don't have standard answers such as `brainstorming`, you should avoid using automatic metrics such as `BLEU` and `ROUGE` which are based on similarity measures and you should use `Distinct` instead in your config file.
#### Evaluate
After setting the configuration file, you can evaluate the model using `eval.py`. If you want to make comparisons between answers of two different models, you should specify two answer files in the argument `answer_file_list` and two model names in the argument `model_name_list`. If you want to evaluate one answer file, the length of both `answer_file_list` and `model_name_list` should be 1 and the program will perform evaluation using automatic metrics and GPT models.
An example script is provided as follows:
```shell
python eval.py \
--config_file "path to the config file" \
@ -212,14 +272,40 @@ python eval.py \
--target_file "path to the target answer file" \
--answer_file_list "path to the answer files of at most 2 models" \
--model_name_list "the names of at most 2 models" \
--gpt_model "which GPT model to use for evaluation" \
--save_path "path to save results" \
--openai_key "your openai key" \
```
## FAQ
<details><summary><b>How can I add a new GPT evaluation metric?</b></summary>
For example, if you want to add a new metric `persuasiveness` into the category `brainstorming`, you should add the metric definition and its corresponding CoT (Chain-of-Thought) to the evaluation prompt file in `prompt/evaluation_prompt`. The CoT can be generated using ChatGPT: you can prompt ChatGPT to generate evaluation steps for the new metric.
```json
{
    "brainstorming": {
        "id": 1,
        "category": "brainstorming",
        "metrics": {
            "persuasiveness": "说服力(1-5)XXX"
        },
        "CoT": {
            "persuasiveness": "XXX\n\n说服力"
        },
        "prompt": "你是一个好助手。请你为下面“头脑风暴”问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    }
}
```
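Since the metrics defined in the prompt file determine which metrics can appear in your config file, remember to also add `persuasiveness` to the `"GPT"` list of `brainstorming` in your config file so that the pipeline actually applies the new metric.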
</details>
## To Do
- [ ] Add evaluation for English capability
- [ ] Support UniEval
- [x] Support GPT-4 evaluation
## Citations
@ -232,15 +318,6 @@ python eval.py \
year = {2023}
}
@misc{liu2023geval,
title={G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment},
author={Yang Liu and Dan Iter and Yichong Xu and Shuohang Wang and Ruochen Xu and Chenguang Zhu},


@ -2,7 +2,7 @@
"language": "cn",
"category": {
"brainstorming": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"creativity",
@ -14,7 +14,7 @@
]
},
"chat": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"naturalness",
@ -26,7 +26,7 @@
]
},
"classification": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"correctness"
@ -38,7 +38,7 @@
]
},
"closed_qa": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"correctness"
@ -50,7 +50,7 @@
]
},
"extraction": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"correctness"
@ -62,7 +62,7 @@
]
},
"generation": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"diversity"
@ -74,7 +74,7 @@
]
},
"open_qa": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"correctness"
@ -84,7 +84,7 @@
]
},
"rewriting": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"correctness"
@ -96,7 +96,7 @@
]
},
"roleplay": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"fidelity",
@ -107,7 +107,7 @@
]
},
"summarization": {
"GPT-3.5": [
"GPT": [
"language organization",
"relevance",
"correctness",


@ -39,7 +39,8 @@ def main(args):
"No prompt file for gpt evaluation provided. Please specify the prompt file for gpt evaluation!")
# initialize evaluator
evaluator = Evaluator(metrics_per_category, battle_prompt, gpt_evaluation_prompt)
evaluator = Evaluator(metrics_per_category, battle_prompt, gpt_evaluation_prompt, args.gpt_model,
config["language"])
if len(args.model_name_list) == 2:
answers1 = jload(args.answer_file_list[0])
answers2 = jload(args.answer_file_list[1])
@ -87,6 +88,10 @@ if __name__ == '__main__':
                        default=[],
                        required=True,
                        help='the names of at most 2 models')
    parser.add_argument('--gpt_model',
                        default="gpt-3.5-turbo",
                        choices=["text-davinci-003", "gpt-3.5-turbo", "gpt-4"],
                        help='which GPT model to use for evaluation')
    parser.add_argument('--save_path', type=str, default="results", help='path to save evaluation results')
    parser.add_argument('--openai_key', type=str, default=None, required=True, help='Your openai key')
    args = parser.parse_args()


@ -14,13 +14,15 @@ class Evaluator(object):
"""
def __init__(self, params: Dict[str, Any], battle_prompt: Dict[str, Any], gpt_evaluation_prompt: Dict[str,
Any]) -> None:
def __init__(self, params: Dict[str, Any], battle_prompt: Dict[str, Any], gpt_evaluation_prompt: Dict[str, Any],
gpt_model: str, language: str) -> None:
self.params = params
self.battle_prompt = battle_prompt
self.gpt_evaluation_prompt = gpt_evaluation_prompt
self.gpt_model = gpt_model
self.language = language
self.automatic_metric_stats = dict()
self.gpt35_evaluation_results = dict()
self.gpt_evaluation_results = dict()
self.battle_results = []
def battle(self, answers1: List[Dict], answers2: List[Dict]) -> None:
@ -63,6 +65,10 @@ class Evaluator(object):
        # automatic evaluation
        for category in self.params:
            if len(answers_per_category[category]) == 0:
                print(f"Category {category} specified in your config doesn't have corresponding answers!")
                continue

            category_metrics = self.params[category]["Metrics"]
            self.automatic_metric_stats[category] = {}
@ -74,17 +80,21 @@ class Evaluator(object):
            for metric in category_metrics:
                self.automatic_metric_stats[category].update(switch(metric=metric))
        # gpt evaluation
        for category in self.params:
            if len(answers_per_category[category]) == 0:
                print(f"Category {category} specified in your config doesn't have corresponding answers!")
                continue

            category_metrics = self.params[category]["GPT"]

            prompt = self.gpt_evaluation_prompt.get(category, None)
            if prompt is None:
                print(f"No prompt for category {category}! Use prompt for category general now.")
                prompt = self.gpt_evaluation_prompt["general"]

            self.gpt_evaluation_results[category] = gpt_evaluate.evaluate(answers_per_category[category], prompt,
                                                                          category_metrics, category, self.gpt_model)
    def save(self, path: str, model_name_list: List[str]) -> None:
        """
@ -106,10 +116,10 @@ class Evaluator(object):
        # Save evaluation results for GPT-3.5 evaluation metrics.
        all_evaluations = []
        base_save_path = os.path.join(path, "gpt_evaluate", "gpt_evaluate_results")
        evaluation_results_save_path = os.path.join(base_save_path, "evaluation_results")

        for category, evaluations in self.gpt_evaluation_results.items():
            jdump(
                evaluations,
                os.path.join(evaluation_results_save_path, model_name_list[0],
@ -121,10 +131,10 @@ class Evaluator(object):
        # Start to calculate scores and save statistics.
        evaluation_statistics_save_path = os.path.join(base_save_path, "evaluation_statistics")
        gpt_evaluate.save_gpt_evaluation_statistics(model_name_list[0], all_evaluations,
                                                    evaluation_statistics_save_path)
        # Save charts and csv.
        evaluation_analyses_save_path = os.path.join(base_save_path, "evaluation_analyses")
        gpt_evaluate.analyze_gpt_evaluation_statistics(evaluation_statistics_save_path,
                                                       evaluation_analyses_save_path)


@ -16,7 +16,7 @@ from utils import jdump, jload
def get_battle_result(sys_prompt: str, user_prompt: str, id: int, max_tokens: int = 2048) -> Dict[str, Any]:
"""
Get evaluation from GPT-4.
Get battle evaluation from GPT-4.
Args:
sys_prompt: prompt for the system.
@ -51,7 +51,7 @@ def get_battle_result(sys_prompt: str, user_prompt: str, id: int, max_tokens: in
        except Exception as e:
            print(e)
            time.sleep(1)

    print(f"Evaluation {id} failed after {MAX_API_RETRY} retries.")
    return {"evaluation": "", "id": id}
@ -233,12 +233,77 @@ def save_battle_results(evaluations: List[Dict], name1: str, name2: str, save_pa
print(f"Model {name2} average score: {ans2_score/(len(evaluations)-invalid_count):.2f}")
def get_gpt_evaluation_without_logprobs(prompt: Dict[str, Any],
                                        inst: Dict[str, Any],
                                        metrics: List[str],
                                        model: str = "gpt-3.5-turbo",
                                        max_tokens: int = 2048) -> Dict[str, Any]:
    """
    Use chat models (gpt-3.5-turbo or gpt-4) to evaluate one model answer.

    Args:
        prompt: a dictionary including prompt template, CoT and metrics.
        inst: the instruction that is needed to be evaluated.
        metrics: the metrics for evaluation.
        model: the model used to evaluate answers.
        max_tokens: the maximum number of tokens to generate in the chat completion.

    Returns:
        An evaluation of one answer.
    """
    MAX_API_RETRY = 3

    question = (inst["instruction"] if inst["input"] == "" else inst["instruction"] + " " + inst["input"])
    answer = inst["output"]
    inst["evaluation"] = {}

    for metric in metrics:
        if prompt["metrics"].get(metric, None) is None:
            raise Exception(
                f"Unsupported metric {metric} for category {inst['category']}! You should add this metric in the prompt file!"
            )

        for i in range(MAX_API_RETRY):
            try:
                response = openai.ChatCompletion.create(
                    model=model,
                    messages=[
                        {
                            "role":
                                "user",
                            "content":
                                prompt["prompt"].format(
                                    question=question,
                                    answer=answer,
                                    metric=prompt["metrics"][metric],
                                    steps=prompt["CoT"][metric],
                                ),
                        },
                    ],
                    temperature=0,
                    max_tokens=max_tokens,
                )
                inst["evaluation"][metric] = {
                    "response": response["choices"][0]["message"]["content"],
                    "logprobs": None,
                }
                break
            except Exception as e:
                print(e)
                time.sleep(1)

        if metric not in inst["evaluation"]:
            print(f"Evaluation {inst['id']} for metric {metric} failed after {MAX_API_RETRY} retries.")
            inst["evaluation"][metric] = {}

    return inst
def get_gpt_evaluation_with_logprobs(prompt: Dict[str, Any],
                                     inst: Dict[str, Any],
                                     metrics: List[str],
                                     max_tokens: int = 2048) -> Dict[str, Any]:
    """
    Use completion models (text-davinci-003) to evaluate one model answer.
    Only completion models can return log probabilities.

    Args:
        prompt: a dictionary including prompt template, CoT and metrics.
@ -283,23 +348,22 @@ def get_gpt35_evaluation(prompt: Dict[str, Any],
            except Exception as e:
                print(e)
                time.sleep(1)

        if metric not in inst["evaluation"]:
            print(f"Evaluation {inst['id']} for metric {metric} failed after {MAX_API_RETRY} retries.")
            inst["evaluation"][metric] = {}

    return inst
def evaluate(answers: List[Dict], prompt: Dict[str, Any], metrics: List[str], category: str, model: str) -> List[Dict]:
"""
Use GPT-3.5 to evaluate model answers and save evaluation results.
Use GPT models to evaluate model answers and save evaluation results.
Args:
answers: model answers.
prompt: prompt for GPT-3.5 evaluation.
metrics: metrics for GPT-3.5 evaluation.
prompt: prompt for GPT evaluation.
metrics: metrics for GPT evaluation.
category: the category of the model answers for evaluation.
model: the specific GPT model used to evaluate answers.
Returns:
Evaluations of the given answers.
@ -315,7 +379,12 @@ def gpt35_evaluate(
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
        futures = []
        for inst in answers:
            # Completion models can return log probabilities.
            if model == "text-davinci-003":
                future = executor.submit(get_gpt_evaluation_with_logprobs, prompt, inst, metrics, 1)
            else:
                future = executor.submit(get_gpt_evaluation_without_logprobs, prompt, inst, metrics, model, 1)
            futures.append(future)
        for future in tqdm.tqdm(
@ -334,20 +403,19 @@ def gpt35_evaluate(
def calculate_scores_form_logprobs(logprobs: Dict[str, Any]) -> float:
"""
Calculate score from log probabilities returned by text-davinci-003.
Only openai.Completion can return logprobs.
Calculate the score according to log probabilities returned by text-davinci-003.
Calculation formula:
score = sum(score_i * exp(value)) where score_i is the score which corresponds to the key(predicted token) and value is its log probability.
Ref: https://arxiv.org/abs/2303.16634
This paper proposes NLG evaluation methods using GPT-3.5(logprobs returned by openai api) and GPT-4(logprobs obtained by sampling).
This paper proposes NLG evaluation methods using text-davinci-003(log probabilities returned by completion models) and GPT-4(probabilities obtained by sampling).
Args:
logprobs: logprobs returned by openai.Completion.
Returns:
Score of one answer.
The score of one answer.
"""
# GPT-3.5 only returns score of 1 to 5.
@ -369,7 +437,31 @@ def calculate_scores_form_logprobs(logprobs: Dict[str, Any]) -> float:
    return score
def save_gpt35_evaluation_statistics(model_name: str, evaluations: List[Dict], save_path: str) -> None:
def calculate_scores_form_response(response: str, evaluation: Dict[str, Any]) -> int:
"""
Calculate the score from the response returned by gpt-3.5-turbo or gpt-4.
Different from text-davinci-003, this fuction directly calculates the score according to the plain response returned by gpt-3.5-turbo or gpt-4.
Although text-davinci-003 can return log probabilities, it costs ten times as much as gpt-3.5-turbo.
Args:
response: logprobs returned by openai.Completion.
evaluation: the evaluation corresponds to the question.
Returns:
The score of one answer.
"""
try:
results = re.findall(r"\d", response)
if len(results) == 1:
return int(results[0])
else:
raise Exception(f"Invalid score pair. Got {evaluation}.")
except Exception as e:
return 0
def save_gpt_evaluation_statistics(model_name: str, evaluations: List[Dict], save_path: str) -> None:
"""
Generate statistics for one model.
@ -396,7 +488,15 @@ def save_gpt35_evaluation_statistics(model_name: str, evaluations: List[Dict], s
        scores = {metric: [] for metric in metrics}

        for evaluation in data:
            for metric in metrics:
                if evaluation["evaluation"][metric] == {}:
                    # This means after 3 retries, the server still returns an error and we set the score to 0.
                    scores[metric].append(0)
                elif evaluation["evaluation"][metric]["logprobs"] is not None:
                    scores[metric].append(
                        calculate_scores_form_logprobs(evaluation["evaluation"][metric]["logprobs"][0]))
                else:
                    scores[metric].append(
                        calculate_scores_form_response(evaluation["evaluation"][metric]["response"], evaluation))

        statistics = {}
        for metric in metrics:
@ -414,7 +514,7 @@ def save_gpt35_evaluation_statistics(model_name: str, evaluations: List[Dict], s
)
def analyze_gpt_evaluation_statistics(statistics_path: str, save_path: str) -> None:
"""
Analyze and visualize all GPT-3.5 evaluation statistics in the given directory.
@ -474,7 +574,7 @@ def analyze_gpt35_evaluation_statistics(statistics_path: str, save_path: str) ->
        os.makedirs(save_path)

    frame_all = pd.DataFrame(frame_all)
    frame_all.to_csv(os.path.join(save_path, "gpt_evaluation_statistics.csv"))
    for category in tqdm.tqdm(
            frame_per_category.keys(),


@ -1,5 +1,5 @@
{
    "brainstorming": {
        "id": 1,
        "category": "brainstorming",
        "metrics": {
@ -18,7 +18,7 @@
        },
        "prompt": "你是一个好助手。请你为下面“头脑风暴”问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "chat": {
        "id": 2,
        "category": "chat",
        "metrics": {
@ -37,7 +37,7 @@
        },
        "prompt": "你是一个好助手。请你为下面的“补全对话”问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "classification": {
        "id": 3,
        "category": "classification",
        "metrics": {
@ -52,7 +52,7 @@
        },
        "prompt": "你是一个好助手。请你为下面的“分类”问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "closed_qa": {
        "id": 4,
        "category": "closed_qa",
        "metrics": {
@ -67,7 +67,7 @@
        },
        "prompt": "你是一个好助手。请你为下面问题的答案打分。\n\n问题如下\n\n{question}\n\n需要你评分的答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "extraction": {
        "id": 5,
        "category": "extraction",
        "metrics": {
@ -82,7 +82,7 @@
        },
        "prompt": "你是一个好助手。请你为下面的“提取”问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "generation": {
        "id": 6,
        "category": "generation",
        "metrics": {
@ -97,7 +97,7 @@
        },
        "prompt": "你是一个好助手。请你为下面的“生成”问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "open_qa": {
        "id": 7,
        "category": "open_qa",
        "metrics": {
@ -112,7 +112,7 @@
        },
        "prompt": "你是一个好助手。请你为下面的问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "rewriting": {
        "id": 8,
        "category": "rewriting",
        "metrics": {
@ -127,7 +127,7 @@
        },
        "prompt": "你是一个好助手。请你为下面的问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "roleplay": {
        "id": 9,
        "category": "roleplay",
        "metrics": {
@ -144,7 +144,7 @@
        },
        "prompt": "你是一个好助手。请你为下面的“角色扮演”问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "summarization": {
        "id": 10,
        "category": "summarization",
        "metrics": {
@ -161,7 +161,7 @@
        },
        "prompt": "你是一个好助手。请你为下面的“总结”问题的答案打分。\n\n问题如下\n\n{question}\n\n答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    },
    "general": {
        "id": 11,
        "category": "general",
        "metrics": {
@ -176,4 +176,4 @@
        },
        "prompt": "你是一个好助手。请你为下面问题的答案打分。\n\n问题如下\n\n{question}\n\n需要你评分的答案如下\n\n{answer}\n\n评分的指标如下\n\n{metric}\n\n请你遵照以下的评分步骤\n\n{steps}"
    }
}


@ -57,6 +57,7 @@ def get_data_per_category(data, categories):
    data_per_category = {category: [] for category in categories}

    for item in data:
        category = item["category"]
        if category in categories:
            data_per_category[category].append(item)

    return data_per_category