ColossalAI/tests
Latest commit 7c7921f71b by Boyuan Yao: [autoparallel] add torch.nn.ReLU metainfo (#1868)
* [fx] metainfo class for auto parallel

* [fx] add unit test for linear metainfo

* [fx] fix bwd param for linear

* [fx] modify unit test

* [fx] modify unit test

* [fx] modify import

* [fx] modify import

* [fx] modify import

* [fx] move meta profiler to auto parallel

* [fx] add conv metainfo class

* [fx] restore profiler

* [fx] restore meta profiler

* [autoparallel] modify unit test

* [fx] modify unit test

* [autoparallel] add batchnorm metainfo class

* [autoparallel] fix batchnorm unit test function declaration

* [fx] restore profiler

* [fx] add relu metainfo class

* [fx] restore profiler

* [autoparallel] modify metainfo input
Committed 2022-11-16 23:12:31 +08:00
components_to_test
test_amp [amp] add torch amp test (#1860) 2022-11-10 16:40:26 +08:00
test_auto_parallel [autoparallel] add torch.nn.ReLU metainfo (#1868) 2022-11-16 23:12:31 +08:00
test_comm
test_config
test_context
test_data
test_data_pipeline_tensor_parallel
test_ddp [zero] add chunk init function for users (#1729) 2022-10-18 16:31:22 +08:00
test_device
test_engine
test_fx [hotfix] pass test_complete_workflow (#1877) 2022-11-10 17:53:39 +08:00
test_gemini [Gemini] add GeminiAdamOptimizer (#1960) 2022-11-16 14:44:28 +08:00
test_layers [inference] overlap comm and compute in Linear1D_Row when stream_chunk_num > 1 (#1876) 2022-11-10 17:36:42 +08:00
test_moe
test_ops
test_optimizer
test_pipeline [fx/meta/rpc] move _meta_registration.py to fx folder / register fx functions with compatibility checks / remove color debug (#1710) 2022-10-18 10:44:23 +08:00
test_tensor [Gemini] add GeminiAdamOptimizer (#1960) 2022-11-16 14:44:28 +08:00
test_trainer
test_utils updated flash attention api 2022-11-15 15:25:39 +08:00
test_zero [zero] fix memory leak for zero2 (#1955) 2022-11-16 11:43:24 +08:00
__init__.py