ColossalAI/tests/test_fp8

Latest commit 8241c0c054 by Hongxin Liu (2024-08-09 14:09:48 +08:00):
[fp8] support gemini plugin (#5978)

* [fp8] refactor hook
* [fp8] support gemini plugin
* [example] add fp8 option for llama benchmark
File                            Last commit                                                          Date
test_all_to_all_single.py       [fp8]support all2all fp8 (#5953)                                     2024-08-06 16:58:23 +08:00
test_fp8_all_to_all.py          [Feature] llama shardformer fp8 support (#5938)                      2024-08-05 10:05:47 +08:00
test_fp8_all_to_all_single.py   [Feature] llama shardformer fp8 support (#5938)                      2024-08-05 10:05:47 +08:00
test_fp8_allgather_flat.py      [Feature] llama shardformer fp8 support (#5938)                      2024-08-05 10:05:47 +08:00
test_fp8_allreduce.py           [Feature] llama shardformer fp8 support (#5938)                      2024-08-05 10:05:47 +08:00
test_fp8_cast.py                [Feature]: support FP8 communication in DDP, FSDP, Gemini (#5928)    2024-08-08 15:55:01 +08:00
test_fp8_ddp_comm_hook.py       [Feature]: support FP8 communication in DDP, FSDP, Gemini (#5928)    2024-08-08 15:55:01 +08:00
test_fp8_fsdp_comm_hook.py      [Feature]: support FP8 communication in DDP, FSDP, Gemini (#5928)    2024-08-08 15:55:01 +08:00
test_fp8_gather.py              [Feature] llama shardformer fp8 support (#5938)                      2024-08-05 10:05:47 +08:00
test_fp8_hook.py                [fp8] support gemini plugin (#5978)                                  2024-08-09 14:09:48 +08:00
test_fp8_linear.py              [fp8] add fp8 linear (#5967)                                         2024-08-07 15:41:49 +08:00
test_fp8_reduce_scatter.py      [Feature] llama shardformer fp8 support (#5938)                      2024-08-05 10:05:47 +08:00
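
These tests cover ColossalAI's FP8 communication and compute paths. test_fp8_cast.py exercises the most basic building block: casting a tensor to FP8 and back. The snippet below is a minimal sketch of the scaled round-trip idea using PyTorch's native float8 dtypes (available in PyTorch >= 2.1), not ColossalAI's own cast helpers:

```python
import torch

x = torch.randn(4, 4, dtype=torch.float32)

# Per-tensor scale so values fit FP8's narrow dynamic range
# (e4m3 saturates at 448).
fp8_max = torch.finfo(torch.float8_e4m3fn).max
scale = x.abs().max().clamp(min=1e-12) / fp8_max

x_fp8 = (x / scale).to(torch.float8_e4m3fn)  # quantize
x_back = x_fp8.to(torch.float32) * scale     # dequantize

print((x - x_back).abs().max())  # small quantization error
```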
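The collective tests (test_fp8_gather.py, test_all_to_all_single.py, test_fp8_allgather_flat.py, test_fp8_reduce_scatter.py, and friends) follow a common pattern: quantize locally, move the raw FP8 bytes with a standard collective, then dequantize with each sender's scale. A sketch of that pattern with an all-gather, using illustrative shapes and names rather than ColossalAI's implementation (assumes the default process group is already initialized):

```python
import torch
import torch.distributed as dist

x = torch.randn(1024, device="cuda")
fp8_max = torch.finfo(torch.float8_e4m3fn).max
scale = (x.abs().max().clamp(min=1e-12) / fp8_max).reshape(1)
x_fp8 = (x / scale).to(torch.float8_e4m3fn)

world = dist.get_world_size()
payload = x_fp8.view(torch.uint8)  # ship FP8 as raw bytes (NCCL-safe)
chunks = [torch.empty_like(payload) for _ in range(world)]
scales = [torch.empty_like(scale) for _ in range(world)]
dist.all_gather(chunks, payload)
dist.all_gather(scales, scale)

# Dequantize each rank's chunk with that rank's scale.
gathered = torch.cat(
    [c.view(torch.float8_e4m3fn).to(torch.float32) * s
     for c, s in zip(chunks, scales)]
)
```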
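test_fp8_ddp_comm_hook.py and test_fp8_fsdp_comm_hook.py check gradient compression during data-parallel training. The sketch below shows the general mechanism through PyTorch's public DDP comm-hook API, with the built-in FP16 hook standing in for the FP8 hook under test (whose import path is internal to ColossalAI):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

# Single-node sketch: launch with torchrun so LOCAL_RANK is set.
dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = DDP(torch.nn.Linear(8, 8).cuda(), device_ids=[local_rank])

# A comm hook intercepts each gradient bucket before all-reduce; a
# compression hook casts the bucket down, reduces, and casts back up.
# An FP8 hook plugs into this same register_comm_hook slot.
model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
```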