[doc] add potential solution for OOM in llama2 example (#4699)

pull/4704/head
Baizhou Zhang 1 year ago committed by GitHub
parent 9c2feb2f0b
commit 068372a738

@@ -149,6 +149,9 @@ Finally, run the following command to start training:
```bash
bash gemini.sh
```
If you encounter an out-of-memory (OOM) error while training with the script `gemini.sh`, switching to the script `gemini_auto.sh` may solve the problem, since gemini_auto sets an upper limit on GPU memory usage by offloading part of the model parameters and optimizer states to CPU memory. The trade-off is that `gemini_auto.sh` runs somewhat slower, because more data is transferred between CPU and GPU.
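The two scripts differ mainly in the Gemini placement policy they configure. As a rough sketch only (not the example's actual training script, and assuming ColossalAI's `GeminiPlugin`/`Booster` API with a `placement_policy` argument), the "auto" policy can be enabled like this:
```python
# Hypothetical sketch: enabling Gemini's "auto" placement policy, which caps
# GPU memory usage by offloading part of the parameters and optimizer states
# to CPU, at the cost of extra CPU<->GPU transfers.
import torch.nn as nn

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from colossalai.nn.optimizer import HybridAdam

# Initialize the distributed environment (launch this script with torchrun).
colossalai.launch_from_torch(config={})

# placement_policy="auto" trades some speed for a bounded GPU memory footprint.
plugin = GeminiPlugin(placement_policy="auto")
booster = Booster(plugin=plugin)

model = nn.Linear(1024, 1024)          # stand-in for the real LLaMA-2 model
optimizer = HybridAdam(model.parameters(), lr=1e-4)
model, optimizer, *_ = booster.boost(model, optimizer)
```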
#### c. Results
If you run the above command successfully, you will get the following results:
`max memory usage: 55491.10 MB, throughput: 24.26 samples/s, TFLOPS/GPU: 167.43`.
