# Basic MNIST Example with Optional FP8 via TransformerEngine

TransformerEngine is a library for accelerating Transformer models on NVIDIA GPUs. It supports 8-bit floating point (FP8) precision on Hopper GPUs, providing better performance with lower memory utilization in both training and inference.
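As a quick illustration (not taken from this example's code), a TransformerEngine linear layer can be run under FP8 roughly as follows. The layer sizes and the `DelayedScaling` recipe settings are illustrative assumptions; an FP8-capable GPU (e.g. Hopper) is required:

```python
# Minimal sketch: run a TransformerEngine Linear layer under FP8 autocast.
# Shapes are illustrative (FP8 GEMM dimensions must be divisible by 16).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

fp8_recipe = DelayedScaling(fp8_format=Format.HYBRID)  # E4M3 fwd, E5M2 bwd
layer = te.Linear(784, 128, bias=True).cuda()
x = torch.randn(32, 784, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
```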

Thanks to NVIDIA for contributing this tutorial.

```bash
python main.py
python main.py --use-te   # Linear layers from TransformerEngine
python main.py --use-fp8  # FP8 + TransformerEngine for Linear layers
```
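The sketch below shows one way such flags can be wired into the model and training step. The names (`Net`, `use_te`, `use_fp8`, `train_step`) and layer sizes are hypothetical, not necessarily how `main.py` is written:

```python
# Hypothetical wiring of --use-te / --use-fp8 style flags; not the example's exact code.
import torch.nn as nn
import torch.nn.functional as F
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling

class Net(nn.Module):
    def __init__(self, use_te: bool = False):
        super().__init__()
        # Swap in TransformerEngine's Linear when --use-te is given.
        Linear = te.Linear if use_te else nn.Linear
        self.fc1 = Linear(784, 128)
        self.fc2 = Linear(128, 16)  # 16 outputs: FP8 GEMM dims must be divisible by 16

    def forward(self, x):
        x = F.relu(self.fc1(x.flatten(1)))
        return self.fc2(x)

def train_step(model, data, target, optimizer, use_fp8: bool = False):
    optimizer.zero_grad()
    # fp8_autocast only affects TransformerEngine modules; plain nn.Linear is untouched.
    with te.fp8_autocast(enabled=use_fp8, fp8_recipe=DelayedScaling()):
        output = model(data)
    loss = F.cross_entropy(output, target)
    loss.backward()
    optimizer.step()
    return loss.item()
```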

We are working on integrating TransformerEngine with Colossal-AI and will finish the integration soon.