diff --git a/README-zh-Hans.md b/README-zh-Hans.md
index ae9c273b5..a3a5a81ac 100644
--- a/README-zh-Hans.md
+++ b/README-zh-Hans.md
@@ -84,7 +84,11 @@ Colossal-AI为您提供了一系列并行训练组件。我们的目标是让您
### GPT-2
-- 降低11倍GPU显存占用,或超线性扩展
+- 降低11倍GPU显存占用,或超线性扩展(张量并行)
+
+
+
+- 能训练接近11倍大小的模型(ZeRO)
### BERT
diff --git a/README.md b/README.md
index fa0dc30b3..cd32412d1 100644
--- a/README.md
+++ b/README.md
@@ -85,8 +85,11 @@ distributed training in a few lines.
### GPT-2
-- 11x lower GPU RAM, or superlinear scaling
+- 11x lower GPU memory consumption, or superlinear scaling with Tensor Parallelism
+
+
+- 10.7x larger model size with ZeRO
### BERT