
[example] llama3 (#5631)

* release llama3

* [release] llama3
pull/5638/head
binmakeswell authored 7 months ago, committed by GitHub
parent commit f4c5aafe29
GPG Key ID: B5690EEEBB952194 (no known key found for this signature in database)
Changed files (lines changed):

1. README.md (14)
2. docs/README-zh-Hans.md (10)
3. examples/language/llama/README.md (6)
4. examples/language/llama/benchmark.py (0, renamed)
5. examples/language/llama/requirements.txt (0, renamed)
6. examples/language/llama/scripts/benchmark_70B/3d.sh (0, renamed)
7. examples/language/llama/scripts/benchmark_70B/gemini.sh (0, renamed)
8. examples/language/llama/scripts/benchmark_70B/gemini_auto.sh (0, renamed)
9. examples/language/llama/scripts/benchmark_7B/gemini.sh (0, renamed)
10. examples/language/llama/scripts/benchmark_7B/gemini_auto.sh (0, renamed)
11. examples/language/llama/test_ci.sh (0, renamed)

README.md (14 lines changed)

@@ -52,7 +52,7 @@
 <li>
 <a href="#Parallel-Training-Demo">Parallel Training Demo</a>
 <ul>
-<li><a href="#LLaMA2">LLaMA 1/2</a></li>
+<li><a href="#LLaMA3">LLaMA 1/2/3</a></li>
 <li><a href="#MoE">MoE</a></li>
 <li><a href="#GPT-3">GPT-3</a></li>
 <li><a href="#GPT-2">GPT-2</a></li>
@@ -270,13 +270,21 @@ Acceleration of [AlphaFold Protein Structure](https://alphafold.ebi.ac.uk/)
 <p align="right">(<a href="#top">back to top</a>)</p>
 ## Parallel Training Demo
+### LLaMA3
+<p align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/examples/images/LLaMA3-70B-H100.png" width=600/>
+</p>
+- 70 billion parameter LLaMA3 model training accelerated by 18%
+[[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/llama)
 ### LLaMA2
 <p align="center">
 <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/llama2_pretraining.png" width=600/>
 </p>
 - 70 billion parameter LLaMA2 model training accelerated by 195%
-[[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/llama2)
+[[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/llama)
 [[blog]](https://www.hpc-ai.tech/blog/70b-llama2-training)
 ### LLaMA1
@@ -285,7 +293,7 @@ Acceleration of [AlphaFold Protein Structure](https://alphafold.ebi.ac.uk/)
 </p>
 - 65-billion-parameter large model pretraining accelerated by 38%
-[[code]](https://github.com/hpcaitech/ColossalAI/tree/example/llama/examples/language/llama)
+[[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/llama)
 [[blog]](https://www.hpc-ai.tech/blog/large-model-pretraining)
 ### MoE
### MoE

docs/README-zh-Hans.md (10 lines changed)

@@ -51,7 +51,7 @@
 <li>
 <a href="#并行训练样例展示">并行训练样例展示</a>
 <ul>
-<li><a href="#LLaMA2">LLaMA 1/2</a></li>
+<li><a href="#LLaMA3">LLaMA 1/2/3</a></li>
 <li><a href="#MoE">MoE</a></li>
 <li><a href="#GPT-3">GPT-3</a></li>
 <li><a href="#GPT-2">GPT-2</a></li>
@@ -261,6 +261,14 @@ Colossal-AI 为您提供了一系列并行组件。我们的目标是让您的
 <p align="right">(<a href="#top">返回顶端</a>)</p>
 ## 并行训练样例展示
+### LLaMA3
+<p align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/examples/images/LLaMA3-70B-H100.png" width=600/>
+</p>
+- 700亿参数LLaMA3训练加速18%
+[[code]](https://github.com/hpcaitech/ColossalAI/tree/main/examples/language/llama)
 ### LLaMA2
 <p align="center">
 <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/llama2_pretraining.png" width=600/>

examples/language/llama2/README.md → examples/language/llama/README.md (6 lines changed)

@@ -1,4 +1,10 @@
-# Pretraining LLaMA-1/2: best practices for building LLaMA-1/2-like base models
+# Pretraining LLaMA-1/2/3: best practices for building LLaMA-1/2/3-like base models
+### LLaMA3
+<p align="center">
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/examples/images/LLaMA3-70B-H100.png" width=600/>
+</p>
+- 70 billion parameter LLaMA3 model training accelerated by 18%
 ### LLaMA2
 <p align="center">

Renamed with no content changes (0 lines changed):

examples/language/llama2/benchmark.py → examples/language/llama/benchmark.py
examples/language/llama2/requirements.txt → examples/language/llama/requirements.txt
examples/language/llama2/scripts/benchmark_70B/3d.sh → examples/language/llama/scripts/benchmark_70B/3d.sh
examples/language/llama2/scripts/benchmark_70B/gemini.sh → examples/language/llama/scripts/benchmark_70B/gemini.sh
examples/language/llama2/scripts/benchmark_70B/gemini_auto.sh → examples/language/llama/scripts/benchmark_70B/gemini_auto.sh
examples/language/llama2/scripts/benchmark_7B/gemini.sh → examples/language/llama/scripts/benchmark_7B/gemini.sh
examples/language/llama2/scripts/benchmark_7B/gemini_auto.sh → examples/language/llama/scripts/benchmark_7B/gemini_auto.sh
examples/language/llama2/test_ci.sh → examples/language/llama/test_ci.sh
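Because this commit moves the example from `examples/language/llama2` to `examples/language/llama`, links and scripts outside this repository that reference the old path will go stale. A minimal sketch of a bulk path rewrite with `grep` and `sed` (the old and new paths come from the renames above; the `docs` directory and `notes.md` file are hypothetical stand-ins for your own files):

```shell
# Hypothetical local file that still references the pre-rename path.
mkdir -p docs
printf 'see examples/language/llama2/benchmark.py\n' > docs/notes.md

# Find files containing the old path and rewrite it in place.
# '#' is used as the sed delimiter so the '/' in the paths needs no escaping.
grep -rl 'examples/language/llama2' docs | \
  xargs sed -i 's#examples/language/llama2#examples/language/llama#g'

cat docs/notes.md  # now references examples/language/llama/benchmark.py
```

Review the `grep -rl` hit list before running this against a real tree, since any unrelated string containing the old path would be rewritten as well.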
