ColossalAI/colossalai/tensor

Latest commit: 2e28c793ce by Guangyao Zhang — [compatibility] support torch 2.2 (#5875), 2024-07-16 13:59:25 +08:00
    * Support Pytorch 2.2.2
    * keep build_on_pr file and update .compatibility
Name                  Last commit message                                                Last updated
d_tensor/             [compatibility] support torch 2.2 (#5875)                          2024-07-16 13:59:25 +08:00
moe_tensor/           [MoE/ZeRO] Moe refactor with zero refactor (#5821)                 2024-06-28 14:00:08 +08:00
padded_tensor/        [shardformer] refactor embedding resize (#5603)                    2024-04-18 16:10:18 +08:00
__init__.py           [misc] update pre-commit and run all files (#4752)                 2023-09-19 14:20:26 +08:00
colo_parameter.py     [gemini] fix param op hook when output is tuple (#5355)            2024-02-04 11:58:26 +08:00
colo_tensor.py        [misc] update pre-commit and run all files (#4752)                 2023-09-19 14:20:26 +08:00
comm_spec.py          fix some typo (#5307)                                              2024-01-25 13:56:27 +08:00
param_op_hook.py      [gemini] fix param op hook when output is tuple (#5355)            2024-02-04 11:58:26 +08:00
shape_consistency.py  [misc] update pre-commit and run all files (#4752)                 2023-09-19 14:20:26 +08:00
sharding_spec.py      [Auto Parallel]: Speed up intra-op plan generation by 44% (#5446)  2024-07-15 12:05:06 +08:00
utils.py              [misc] update pre-commit and run all files (#4752)                 2023-09-19 14:20:26 +08:00