ColossalAI/tests/test_shardformer
Latest commit: c54c4fcd15 by botbw (2024-09-10 17:30:53 +08:00)
[hotfix] moe hybrid parallelism benchmark & follow-up fix (#6048)
* [example] pass use_fp8_comm flag to all plugins
* [example] add mixtral benchmark
* [moe] refine assertion and check
* [moe] fix mixtral & add more tests
* [moe] consider checking dp * sp group and moe_dp_group
* [mixtral] remove gate tp & add more tests
* [deepseek] fix tp & sp for deepseek
* [mixtral] minor fix
* [deepseek] add deepseek benchmark
Name                                  Last commit message                                                  Last commit date
test_hybrid_parallel_grad_clip_norm   [MoE/ZeRO] Moe refactor with zero refactor (#5821)                   2024-06-28 14:00:08 +08:00
test_layer                            [Feature] Zigzag Ring attention (#5905)                              2024-08-16 13:56:38 +08:00
test_model                            [hotfix] moe hybrid parallelism benchmark & follow-up fix (#6048)    2024-09-10 17:30:53 +08:00
__init__.py                           [shardformer] adapted T5 and LLaMa test to use kit (#4049)           2023-07-04 16:05:01 +08:00
test_flash_attention.py               [Feature] Zigzag Ring attention (#5905)                              2024-08-16 13:56:38 +08:00
test_shard_utils.py                   [misc] update pre-commit and run all files (#4752)                   2023-09-19 14:20:26 +08:00
test_with_torch_ddp.py                [misc] refactor launch API and tensor constructor (#5666)            2024-04-29 10:40:11 +08:00