ColossalAI/colossalai/auto_parallel
Boyuan Yao 7c7921f71b
[autoparallel] add torch.nn.ReLU metainfo (#1868)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler

* [fx] add relu metainfo class

* [fx] restore profiler

* [autoparallel] modify metainfo input
2022-11-16 23:12:31 +08:00
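The commit above adds a metainfo class for `torch.nn.ReLU`, i.e. a record of the op's compute and memory cost that the auto-parallel solver can consult without running the op. As a rough illustration of the idea (this is a hypothetical sketch — `MetaInfo` and `relu_meta_info` are made-up names, not ColossalAI's actual API), an elementwise op like ReLU can be costed from its input shape alone:

```python
from dataclasses import dataclass
from math import prod

@dataclass
class MetaInfo:
    """Hypothetical per-op cost record (illustrative, not ColossalAI's class)."""
    fwd_flop: int  # forward compute cost in elementwise ops
    bwd_flop: int  # backward compute cost in elementwise ops
    fwd_mem: int   # bytes of activation saved for backward
    bwd_mem: int   # bytes of gradient buffers

def relu_meta_info(shape, elem_bytes=4):
    """Estimate ReLU cost from the input shape.

    ReLU is elementwise: forward does one comparison per element,
    backward one masked multiply per element, and the forward output
    (or an equivalent mask) must be kept for the backward pass.
    """
    n = prod(shape)
    return MetaInfo(
        fwd_flop=n,
        bwd_flop=n,
        fwd_mem=n * elem_bytes,
        bwd_mem=n * elem_bytes,
    )

# e.g. a typical conv feature map of shape (N, C, H, W)
info = relu_meta_info((32, 64, 56, 56))
```

Because the estimate depends only on the shape, the solver can compare sharding strategies symbolically, without allocating real tensors.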
checkpoint [autoparallel] user-friendly API for CheckpointSolver. (#1879) 2022-11-10 20:59:28 +08:00
meta_profiler [autoparallel] add torch.nn.ReLU metainfo (#1868) 2022-11-16 23:12:31 +08:00
passes [autoparallel] remove redundancy comm node (#1893) 2022-11-15 10:53:41 +08:00
pipeline_shard [autoparallel] init new folder structure (#1696) 2022-10-13 14:18:55 +08:00
tensor_shard [autoparallel] support addmm in tracer and solver (#1961) 2022-11-16 14:59:18 +08:00
__init__.py [autoparallel] standardize the code structure (#1469) 2022-08-19 15:51:54 +08:00