mirror of https://github.com/hpcaitech/ColossalAI
Latest commit:
* support existing sharded and unsharded parameters in zero
* add unit test for moe-zero model init
* polish moe gradient handler
Directory contents:
* init_ctx
* shard_utils
* sharded_model
* sharded_optim
* sharded_param
* __init__.py
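A minimal sketch of how the listed submodules typically compose in the legacy colossalai.zero API: the model is built inside the init context so parameters are sharded at construction time, then the module and optimizer are wrapped by the sharded model/optimizer classes. The class names follow the directory listing above, but the exact constructor signatures (`target_device`, `shard_strategy`, `shard_param`) are assumptions and may differ between releases; a launched distributed environment is also assumed.

```python
# Hypothetical sketch: constructor arguments are assumptions based on the legacy
# colossalai.zero API and may vary across versions. Assumes the script runs inside
# a launched distributed job (e.g. colossalai.launch / torchrun).
import torch
import torch.nn as nn
from colossalai.zero.init_ctx import ZeroInitContext
from colossalai.zero.shard_utils import TensorShardStrategy
from colossalai.zero.sharded_model import ShardedModelV2
from colossalai.zero.sharded_optim import ShardedOptimizerV2

shard_strategy = TensorShardStrategy()

# init_ctx: build the model so its parameters are sharded as they are created,
# which is what allows sharded and unsharded parameters to coexist in zero.
with ZeroInitContext(target_device=torch.device('cuda'),
                     shard_strategy=shard_strategy,
                     shard_param=True):
    model = nn.Linear(1024, 1024)

# sharded_model / sharded_optim: wrap the module and optimizer so forward,
# backward, and the optimizer step operate on the sharded parameters.
sharded_model = ShardedModelV2(model, shard_strategy)
sharded_optim = ShardedOptimizerV2(
    sharded_model,
    torch.optim.Adam(sharded_model.parameters(), lr=1e-3),
)
```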