ColossalAI/colossalai/fx/profiler
Latest commit: 90a9fdd91d by Boyuan Yao, 2023-02-08 11:05:31 +08:00
[autoparallel] Patch meta information of `torch.matmul` (#2584)

* [autoparallel] matmul metainfo
* [auto_parallel] remove unused print
* [tests] skip test_matmul_handler when torch version is lower than 1.12.0
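`torch.matmul` dispatches on operand rank (dot product, matrix-vector, matrix-matrix, and broadcast batched matmul), which is why its meta information warranted a dedicated patch. A minimal sketch of the underlying idea, independent of ColossalAI's registry and assuming a recent PyTorch (>= 1.12, per the skipped test above): tensors on the meta device carry shape and dtype but no storage, so matmul can propagate metadata without allocating memory or running a kernel.

```python
import torch

# Hedged sketch (not ColossalAI's actual code): meta tensors carry only
# shape/dtype metadata, so matmul's output shape can be inferred for free.
a = torch.empty(32, 64, device="meta")
b = torch.empty(64, 128, device="meta")
out = torch.matmul(a, b)

print(out.shape)   # torch.Size([32, 128])
print(out.device)  # meta -- no storage was ever allocated
```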
File             Last commit date            Last commit message
experimental     2022-10-26 14:24:41 +08:00  [fx] refactor memory utils and extend shard utils. (#1754)
__init__.py      2022-10-26 14:24:41 +08:00  [fx] refactor memory utils and extend shard utils. (#1754)
constants.py     2022-10-19 14:24:51 +08:00  [fx/profiler] debug the fx.profiler / add an example test script for fx.profiler (#1730)
dataflow.py      2022-10-26 14:24:41 +08:00  [fx] refactor memory utils and extend shard utils. (#1754)
memory_utils.py  2022-11-01 10:43:15 +08:00  [autoparallel] move ckpt solvers to autoparallel folder / refactor code (#1764)
opcount.py       2023-02-08 11:05:31 +08:00  [autoparallel] Patch meta information of `torch.matmul` (#2584)
profiler.py      2022-11-03 12:32:51 +08:00  [autoparallel] refactor and add rotorc. (#1789)
shard_utils.py   2023-01-03 11:38:48 +08:00  [autoparallel] modify comm nodes' memory cost in construct chain (#2263)
tensor.py        2023-01-29 16:28:10 +08:00  [hotfix] meta tensor default device. (#2510)
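opcount.py is where the matmul patch from the latest commit landed; it maps operations to FLOP counts derived from operand shapes. A back-of-envelope sketch for the plain 2D case, using the standard 2*m*n*k convention (the helper name and exact formula here are illustrative, not ColossalAI's API):

```python
from typing import Sequence

def matmul_flops_2d(a_shape: Sequence[int], b_shape: Sequence[int]) -> int:
    """Hypothetical helper (not ColossalAI's API): FLOPs of a 2D matmul.

    An (m, k) @ (k, n) product performs m*n*k multiply-accumulates,
    conventionally counted as 2*m*n*k floating-point operations.
    """
    m, k = a_shape
    k2, n = b_shape
    assert k == k2, "inner dimensions must match"
    return 2 * m * n * k

print(matmul_flops_2d((32, 64), (64, 128)))  # 524288
```

Batched and broadcast matmul cases scale this count by the broadcast batch size, which is part of what makes patching `torch.matmul`'s meta information nontrivial compared to plain `torch.mm`.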