ColossalAI/colossalai/booster/plugin
botbw 8e718a1421
[gemini] fixes for benchmarking (#5847)
* [gemini] fix missing return

* [gemini] fix missing arg pass

* [gemini] use gather tensor instead of list

* [test] enable flash attention for benchmark by default

---------

Co-authored-by: genghaozhe <939857490@qq.com>
2024-06-26 15:52:09 +08:00
__init__.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
dp_plugin_base.py [llama] fix dataloader for hybrid parallel (#5358) 2024-02-05 15:14:56 +08:00
gemini_plugin.py [gemini] fixes for benchmarking (#5847) 2024-06-26 15:52:09 +08:00
hybrid_parallel_plugin.py [Feature] optimize PP overlap (#5735) 2024-06-26 14:48:02 +08:00
low_level_zero_plugin.py [Feature] auto-cast optimizers to distributed version (#5746) 2024-05-24 17:24:16 +08:00
moe_hybrid_parallel_plugin.py [shardformer] Sequence Parallelism Optimization (#5533) 2024-04-03 17:15:47 +08:00
plugin_base.py [lora] add lora APIs for booster, support lora for TorchDDP (#4981) 2024-04-28 10:51:27 +08:00
pp_plugin_base.py [misc] update pre-commit and run all files (#4752) 2023-09-19 14:20:26 +08:00
torch_ddp_plugin.py [misc] refactor launch API and tensor constructor (#5666) 2024-04-29 10:40:11 +08:00
torch_fsdp_plugin.py [lora] add lora APIs for booster, support lora for TorchDDP (#4981) 2024-04-28 10:51:27 +08:00