ColossalAI/colossalai/amp/naive_amp
Last commit: ae02d4e4f7 by Hongxin Liu, 2023-06-05 15:58:31 +08:00
[bf16] add bf16 support (#3882)
* [bf16] add bf16 support for fused adam (#3844)
  * [bf16] fused adam kernel support bf16
  * [test] update fused adam kernel test
  * [test] update fused adam test
* [bf16] cpu adam and hybrid adam optimizers support bf16 (#3860)
* [bf16] implement mixed precision mixin and add bf16 support for low level zero (#3869) (see the mixin sketch after this list)
  * [bf16] add mixed precision mixin
  * [bf16] low level zero optim support bf16
  * [test] update low level zero test
  * [test] fix low level zero grad acc test
* [bf16] add bf16 support for gemini (#3872)
  * [bf16] gemini support bf16
  * [test] update gemini bf16 test
  * [doc] update gemini docstring
* [bf16] add bf16 support for plugins (#3877) (usage sketch below)
* [bf16] add bf16 support for legacy zero (#3879)
  * [zero] init context support bf16
  * [zero] legacy zero support bf16
  * [test] add zero bf16 test
  * [doc] add bf16 related docstring for legacy zero
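The mixed precision mixin from #3869 factors dtype-specific behavior (loss scaling, overflow handling) out of the wrapped optimizers, so the bf16 path can skip loss scaling entirely while fp16 keeps it. A minimal sketch of the pattern; the method names here are illustrative assumptions, not the exact interface under mixed_precision_mixin/:

```python
from abc import ABC, abstractmethod

import torch


class MixedPrecisionMixin(ABC):
    """Shared hooks for fp16/bf16 handling inside a wrapped optimizer.
    Illustrative sketch only; the real classes live in
    colossalai/amp/naive_amp/mixed_precision_mixin/."""

    dtype: torch.dtype

    @abstractmethod
    def pre_backward(self, loss: torch.Tensor) -> torch.Tensor:
        """Transform the loss before backward (e.g. apply loss scaling)."""

    @abstractmethod
    def should_skip_step(self) -> bool:
        """Whether this step must be skipped (e.g. inf/nan grads under fp16)."""


class BF16MixedPrecisionMixin(MixedPrecisionMixin):
    dtype = torch.bfloat16

    def pre_backward(self, loss: torch.Tensor) -> torch.Tensor:
        # bf16 keeps fp32's exponent range, so no loss scaling is needed.
        return loss

    def should_skip_step(self) -> bool:
        # No scale-induced overflow to detect; never skip on that account.
        return False
```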
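For the plugin-level switch from #3877, bf16 is requested when constructing a booster plugin. A hedged usage sketch, assuming a `precision='bf16'` keyword on `LowLevelZeroPlugin` and the bf16-capable `HybridAdam` from #3860; exact argument names may differ across ColossalAI versions:

```python
import torch

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import LowLevelZeroPlugin
from colossalai.nn.optimizer import HybridAdam

colossalai.launch_from_torch(config={})

# precision='bf16' (assumed keyword) selects the bf16 path from #3882;
# 'fp16' keeps the dynamically loss-scaled path.
plugin = LowLevelZeroPlugin(stage=2, precision='bf16')
booster = Booster(plugin=plugin)

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = HybridAdam(model.parameters(), lr=1e-3)  # bf16-capable adam (#3844/#3860)
model, optimizer, *_ = booster.boost(model, optimizer)
```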
| Entry | Last commit | Date |
| --- | --- | --- |
| grad_scaler/ | [zero] fix gradient clipping in hybrid parallelism (#2521) | 2023-01-29 15:09:57 +08:00 |
| mixed_precision_mixin/ | [bf16] add bf16 support (#3882) | 2023-06-05 15:58:31 +08:00 |
| __init__.py | [NFC] polish colossalai/amp/naive_amp/__init__.py code style (#1905) | 2022-11-11 17:49:18 +08:00 |
| _fp16_optimizer.py | [setup] support pre-build and jit-build of cuda kernels (#2374) | 2023-01-06 20:50:26 +08:00 |
| _utils.py | [NFC] polish colossalai/amp/naive_amp/_utils.py code style (#1816) | 2022-11-09 12:08:47 +08:00 |
| naive_amp.py | [amp] add gradient clipping for unit tests (#2283) | 2023-01-04 11:59:56 +08:00 |
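On the `grad_scaler/` entry above: dynamic loss scaling is what fp16 needs (and what bf16 avoids), because fp16's narrow exponent range overflows easily during backward. A sketch of the idea in plain Python, not the actual API of the scalers in that directory:

```python
class SimpleDynamicScaler:
    """Grow the loss scale while steps succeed; back off on inf/nan.
    Illustrative only; see colossalai/amp/naive_amp/grad_scaler for the real one."""

    def __init__(self, init_scale=2.0**16, growth=2.0, backoff=0.5, interval=1000):
        self.scale = init_scale
        self.growth, self.backoff, self.interval = growth, backoff, interval
        self._good_steps = 0

    def update(self, found_inf: bool) -> None:
        if found_inf:
            # Overflow detected: shrink the scale and restart the streak.
            self.scale *= self.backoff
            self._good_steps = 0
        else:
            # Stable step: after `interval` clean steps, grow the scale.
            self._good_steps += 1
            if self._good_steps % self.interval == 0:
                self.scale *= self.growth
```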