# Pretraining LLaMA: best practices for building LLaMA-like base models

- 65-billion-parameter large model pretraining accelerated by 38% [[code]](https://github.com/hpcaitech/ColossalAI/tree/example/llama/examples/language/llama) [[blog]](https://www.hpc-ai.tech/blog/large-model-pretraining)
  > Since the main branch is under active development, this example is temporarily maintained on an [independent branch](https://github.com/hpcaitech/ColossalAI/tree/example/llama/examples/language/llama) to keep the code stable.
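
For orientation, below is a minimal, hedged sketch of initializing a LLaMA-like base model from scratch with Hugging Face `transformers`. It is not the ColossalAI pretraining example itself (see the linked branch for that), and the scaled-down hyperparameters are purely illustrative.

```python
# Minimal sketch: randomly initialize a small LLaMA-like base model from scratch
# with Hugging Face transformers. This is NOT the ColossalAI training example;
# refer to the linked repository branch for the actual accelerated pretraining code.
from transformers import LlamaConfig, LlamaForCausalLM

# Illustrative, scaled-down hyperparameters. For reference, the LLaMA-65B
# configuration uses hidden_size=8192, num_hidden_layers=80, and
# num_attention_heads=64.
config = LlamaConfig(
    vocab_size=32000,
    hidden_size=512,
    intermediate_size=1376,
    num_hidden_layers=8,
    num_attention_heads=8,
    max_position_embeddings=2048,
)

model = LlamaForCausalLM(config)
print(f"Parameters: {sum(p.numel() for p in model.parameters()):,}")
```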