ColossalAI/colossalai/tensor
Boyuan Yao 22e947f982
[autoparallel] fix runtime apply memory estimation (#2281)
* [autoparallel] align the data_ptr with the old version of auto activation checkpoint pipeline

* [autoparallel] using fwd_time and bwd_time instead of fwd_flop and bwd_flop

* [autoparallel] specify comm nodes' memory cost in construct chain

* [autoparallel] fix wrong runtime apply calculation
2023-01-03 17:18:07 +08:00
__init__.py [Gemini] ParamOpHook -> ColoParamOpHook (#2080) 2022-12-05 17:11:06 +08:00
colo_parameter.py [zero] fix error for BEiT models (#2169) 2022-12-26 15:03:54 +08:00
colo_tensor.py [hotfix] fix error for torch 2.0 (#2243) 2022-12-30 23:11:55 +08:00
comm_spec.py [autoparallel] add experimental permute handler (#2029) 2022-11-27 20:26:52 +08:00
compute_spec.py
const.py
dist_spec_mgr.py [autoparallel] fix bugs caused by negative dim key (#1808) 2022-11-08 17:03:50 +08:00
distspec.py
op_wrapper.py
param_op_hook.py [zero] fix error for BEiT models (#2169) 2022-12-26 15:03:54 +08:00
process_group.py
shape_consistency.py [autoparallel] fix runtime apply memory estimation (#2281) 2023-01-03 17:18:07 +08:00
sharding_spec.py [autoparallel] fix bugs caused by negative dim key (#1808) 2022-11-08 17:03:50 +08:00
tensor_spec.py [autoparallel] fix bugs caused by negative dim key (#1808) 2022-11-08 17:03:50 +08:00
utils.py [autoparallel] mix gather (#1977) 2022-11-23 21:49:17 +08:00