Making large AI models cheaper, faster and more accessible
Latest commit: [Feature] auto-cast optimizers to distributed version (#5746) by Edenzzzz (5f8c0a0ac3), 6 months ago
| File | Last commit | Last updated |
| --- | --- | --- |
| __init__.py | | |
| dp_plugin_base.py | | |
| gemini_plugin.py | [gemini] async grad chunk reduce (all-reduce&reduce-scatter) (#5713) | 6 months ago |
| hybrid_parallel_plugin.py | [Feature] auto-cast optimizers to distributed version (#5746) | 6 months ago |
| low_level_zero_plugin.py | [Feature] auto-cast optimizers to distributed version (#5746) | 6 months ago |
| moe_hybrid_parallel_plugin.py | [shardformer] Sequence Parallelism Optimization (#5533) | 8 months ago |
| plugin_base.py | [lora] add lora APIs for booster, support lora for TorchDDP (#4981) | 7 months ago |
| pp_plugin_base.py | | |
| torch_ddp_plugin.py | [misc] refactor launch API and tensor constructor (#5666) | 7 months ago |
| torch_fsdp_plugin.py | [lora] add lora APIs for booster, support lora for TorchDDP (#4981) | 7 months ago |