mirror of https://github.com/hpcaitech/ColossalAI
Latest commit:

* [moe] add mixtral block for single expert
* [moe] mixtral block fwd support uneven ep
* [moe] mixtral block bwd support uneven ep
* [moe] add mixtral moe layer
* [moe] simplify replace
* [moe] support save sharded mixtral
* [moe] support load sharded mixtral
* [moe] support save sharded optim
* [moe] integrate moe manager into plugin
* [moe] fix optimizer load
* [moe] fix mixtral layer
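The commits above track Mixtral support landing in the MoE hybrid parallel plugin: forward and backward passes for the Mixtral block under uneven expert parallelism (EP), plus sharded save/load for both model and optimizer states. The sketch below illustrates how such a plugin is typically driven through the Booster checkpoint API; it is not the code from these commits, and the constructor arguments (`ep_size`, `zero_stage`), checkpoint paths, and the stand-in model are assumptions for illustration only.

```python
# Sketch only: plugin arguments, paths, and the toy model are illustrative assumptions.
import torch
import torch.nn as nn
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin.moe_hybrid_parallel_plugin import MoeHybridParallelPlugin

colossalai.launch_from_torch()  # older releases may require launch_from_torch(config={})

# Stand-in module; a real run would pass an actual Mixtral-style MoE model.
model = nn.Linear(256, 256)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# ep_size is the expert-parallel group size; the commits above add support for the
# case where experts do not split evenly across that group ("uneven ep").
plugin = MoeHybridParallelPlugin(tp_size=1, pp_size=1, ep_size=2, zero_stage=1)
booster = Booster(plugin=plugin)
model, optimizer, *_ = booster.boost(model, optimizer)

# Sharded checkpointing of model and optimizer state
# ("support save/load sharded mixtral", "support save sharded optim").
booster.save_model(model, "ckpt/mixtral", shard=True)
booster.save_optimizer(optimizer, "ckpt/mixtral_optim", shard=True)
booster.load_model(model, "ckpt/mixtral")
booster.load_optimizer(optimizer, "ckpt/mixtral_optim")
```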
Files in this directory:

* `__init__.py`
* `dp_plugin_base.py`
* `gemini_plugin.py`
* `hybrid_parallel_plugin.py`
* `low_level_zero_plugin.py`
* `moe_hybrid_parallel_plugin.py`
* `plugin_base.py`
* `pp_plugin_base.py`
* `torch_ddp_plugin.py`
* `torch_fsdp_plugin.py`
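Each `*_plugin.py` module above implements one execution strategy (DDP, FSDP, Gemini, low-level ZeRO, hybrid parallelism, MoE hybrid parallelism) behind the same Booster interface, so switching strategies mostly means swapping the plugin object. Below is a minimal sketch of that pattern, assuming one GPU per process launched via torchrun; the toy model, data, and launch arguments are placeholders and may differ between ColossalAI releases.

```python
# Sketch only: model, data, and launch arguments are placeholders.
import torch
import torch.nn as nn
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin  # or GeminiPlugin, LowLevelZeroPlugin, ...

colossalai.launch_from_torch()  # reads rank/world size from the torchrun environment

model = nn.Linear(128, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

plugin = TorchDDPPlugin()        # any plugin class from the modules listed above fits here
booster = Booster(plugin=plugin)
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)

# One training step; gradients are routed through the plugin via booster.backward.
device = torch.cuda.current_device()
x = torch.randn(8, 128, device=device)
y = torch.randint(0, 10, (8,), device=device)
loss = criterion(model(x), y)
booster.backward(loss, optimizer)
optimizer.step()
optimizer.zero_grad()
```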