From 94f000515b3f5700934072c39890f45cf419eebc Mon Sep 17 00:00:00 2001
From: binmakeswell
Date: Tue, 14 Feb 2023 23:07:30 +0800
Subject: [PATCH] [doc] add Quick Preview (#2706)

---
 applications/ChatGPT/README.md | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/applications/ChatGPT/README.md b/applications/ChatGPT/README.md
index dce59ad4b..43085f3ab 100644
--- a/applications/ChatGPT/README.md
+++ b/applications/ChatGPT/README.md
@@ -1,6 +1,6 @@
-# RLHF - ColossalAI
+# RLHF - Colossal-AI
 
-Implementation of RLHF (Reinforcement Learning with Human Feedback) powered by ColossalAI. It supports distributed training and offloading, which can fit extremly large models.
+Implementation of RLHF (Reinforcement Learning with Human Feedback) powered by Colossal-AI. It supports distributed training and offloading, which can fit extremely large models.
 More details can be found in the [blog](https://www.hpc-ai.tech/blog/colossal-ai-chatgpt).

@@ -60,6 +60,27 @@ We also support training reward model with true-world data. See `examples/train_
 - [ ] integrate with Ray
 - [ ] support more RL paradigms, like Implicit Language Q-Learning (ILQL)
 
+## Quick Preview
+
+- Up to 7.73 times faster for single-server training and 1.42 times faster for single-GPU inference
+
+- Up to 10.3x growth in model capacity on one GPU
+- A mini demo training process requires only 1.62 GB of GPU memory (any consumer-grade GPU)
+
+- Increase the capacity of the fine-tuning model by up to 3.7 times on a single GPU
+- Maintain a sufficiently high running speed
+
 ## Citations
 
 ```bibtex