mirror of https://github.com/hpcaitech/ColossalAI
Latest commit d921ce8391:

* [shardformer] embedding support inplace sharding
* [shardformer] linear support inplace sharding
* [shardformer] layernorm support inplace sharding
* [shardformer] qkv support inplace sharding
* [test] update shardformer layer test
* [shardformer] fix shared param sharding
* [shardformer] fix bert policy
* [shardformer] fix bloom policy
* [shardformer] fix llama policy
* [shardformer] fix opt policy
* [shardformer] fix t5 policy
* [shardformer] fix fused qkv linear
* [shardformer] fix bugs
* force sync
* [test] fix bugs
* [test] fix transformer version
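The commit above adds in-place sharding to ShardFormer's embedding, linear, layernorm, and QKV layers. As a rough illustration of the general idea only (not ColossalAI's implementation or API), the sketch below shards a plain PyTorch `nn.Linear` in place for one tensor-parallel rank by reassigning the existing parameters' `.data`, so the original `Parameter` objects keep their identity; `shard_linear_inplace` and its `rank`/`world_size` arguments are hypothetical names for this example.

```python
import torch
import torch.nn as nn


def shard_linear_inplace(linear: nn.Linear, rank: int, world_size: int) -> nn.Linear:
    """Hypothetical helper: keep only this rank's slice of the output dimension.

    The weight/bias are replaced via .data so any external references to the
    same Parameter objects (e.g. tied/shared parameters) remain valid.
    """
    assert linear.out_features % world_size == 0, "out_features must divide evenly"
    shard = linear.out_features // world_size
    start, end = rank * shard, (rank + 1) * shard
    # Reassign the underlying storage in place instead of creating new Parameters.
    linear.weight.data = linear.weight.data[start:end, :].clone()
    if linear.bias is not None:
        linear.bias.data = linear.bias.data[start:end].clone()
    linear.out_features = shard
    return linear


# Example: shard an 8-output linear layer across 2 ranks; rank 0 keeps rows 0..3.
layer = nn.Linear(4, 8)
shard_linear_inplace(layer, rank=0, world_size=2)
print(layer.weight.shape)  # torch.Size([4, 4])
```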
d_tensor
__init__.py
colo_parameter.py
colo_tensor.py
comm_spec.py
compute_spec.py
const.py
dist_spec_mgr.py
distspec.py
op_wrapper.py
param_op_hook.py
process_group.py
shape_consistency.py
sharding_spec.py
tensor_spec.py
utils.py