ColossalAI/colossalai/booster/plugin

Latest commit: dc2cdaf3e8 by Hongxin Liu, 2024-10-11 13:44:40 +08:00
[shardformer] optimize seq parallelism (#6086)

* [shardformer] optimize seq parallelism
* [shardformer] fix gpt2 fused linear col
* [plugin] update gemini plugin
* [plugin] update moe hybrid plugin
* [test] update gpt2 fused linear test
* [shardformer] fix gpt2 fused linear reduce
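For context, the sequence-parallelism behavior this commit touches is configured through the plugin constructors listed below. A minimal sketch, assuming HybridParallelPlugin's enable_sequence_parallelism and sequence_parallelism_mode keyword arguments (the exact names and mode strings are assumptions based on recent releases; verify against hybrid_parallel_plugin.py):

```python
# Configuration sketch: enabling sequence parallelism on HybridParallelPlugin.
# Keyword names and mode strings are assumptions; check the plugin docstring in this directory.
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import HybridParallelPlugin

colossalai.launch_from_torch()  # initialize the distributed environment (run with torchrun)

plugin = HybridParallelPlugin(
    tp_size=2,                                  # sequence parallelism reuses the tensor-parallel group
    pp_size=1,
    enable_sequence_parallelism=True,           # split activations along the sequence dimension
    sequence_parallelism_mode="split_gather",   # assumed mode name; other modes may be available
)
booster = Booster(plugin=plugin)
```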
File                            Last commit                                                            Date
__init__.py                     [shardformer] fix the moe (#5883)                                      2024-07-03 20:02:19 +08:00
dp_plugin_base.py               [llama] fix dataloader for hybrid parallel (#5358)                     2024-02-05 15:14:56 +08:00
gemini_plugin.py                [shardformer] optimize seq parallelism (#6086)                         2024-10-11 13:44:40 +08:00
hybrid_parallel_plugin.py       [shardformer] optimize seq parallelism (#6086)                         2024-10-11 13:44:40 +08:00
low_level_zero_plugin.py        [doc] FP8 training and communication document (#6050)                  2024-09-14 11:01:05 +08:00
moe_hybrid_parallel_plugin.py   [shardformer] optimize seq parallelism (#6086)                         2024-10-11 13:44:40 +08:00
plugin_base.py                  [lora] add lora APIs for booster, support lora for TorchDDP (#4981)    2024-04-28 10:51:27 +08:00
pp_plugin_base.py               [misc] update pre-commit and run all files (#4752)                     2023-09-19 14:20:26 +08:00
torch_ddp_plugin.py             [doc] FP8 training and communication document (#6050)                  2024-09-14 11:01:05 +08:00
torch_fsdp_plugin.py            [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016)      2024-08-22 09:21:34 +08:00
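For orientation, the files above implement the plugins consumed by ColossalAI's Booster API. A minimal sketch of the usual pattern follows, assuming the public Booster/TorchDDPPlugin interfaces, a torchrun launch with a GPU available, and placeholder training objects; check return shapes and argument names against the sources listed here.

```python
# Usage sketch for the plugins in this directory (assumed public API; launch with torchrun).
import torch
import torch.nn as nn

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin  # implemented in torch_ddp_plugin.py above

colossalai.launch_from_torch()  # set up the distributed environment from torchrun env vars

# Placeholder model/optimizer/criterion for illustration; substitute real training objects.
model = nn.Linear(32, 2).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Each plugin here (TorchDDP, Gemini, HybridParallel, LowLevelZero, MoeHybridParallel, TorchFSDP)
# feeds the same boost() call with a different parallelism / memory strategy.
plugin = TorchDDPPlugin()
booster = Booster(plugin=plugin)
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)

# One training step: route backward through the booster so the plugin can hook gradient handling.
inputs = torch.randn(8, 32).cuda()
labels = torch.randint(0, 2, (8,)).cuda()
loss = criterion(model(inputs), labels)
booster.backward(loss, optimizer)
optimizer.step()
optimizer.zero_grad()
```

Swapping TorchDDPPlugin for another plugin from this directory changes the memory and parallelism strategy while the surrounding training loop stays the same, which is the point of the plugin abstraction.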