ColossalAI/colossalai/auto_parallel/tensor_shard/node_handler

Latest commit: [autoparallel]integrate auto parallel feature with new tracer (#3408)
Author: YuliangLiu0306 (ffcdbf0f65) | 2023-04-04 17:40:45 +08:00

* [autoparallel] integrate new analyzer in module level
* unify the profiling method
* polish
* fix no codegen bug
* fix pass bug
* fix liveness test
* polish
strategy | [autoparallel] adapt autoparallel with new analyzer (#3261) | 2023-03-30 17:47:24 +08:00
__init__.py | [autoparallel] add shard option (#2696) | 2023-02-15 13:48:28 +08:00
addmm_handler.py
batch_norm_handler.py | [autoparallel]integrate auto parallel feature with new tracer (#3408) | 2023-04-04 17:40:45 +08:00
binary_elementwise_handler.py | [autoparallel] update binary elementwise handler (#2451) | 2023-01-12 09:35:10 +08:00
bmm_handler.py | [autoparallel] adapt autoparallel with new analyzer (#3261) | 2023-03-30 17:47:24 +08:00
conv_handler.py | [autoparallel] use metainfo in handler (#2149) | 2022-12-20 10:31:22 +08:00
default_reshape_handler.py | [autoparallel] refactor handlers which reshape input tensors (#2615) | 2023-02-08 15:02:49 +08:00
embedding_handler.py | [autoparallel]add embedding handler (#2089) | 2022-12-07 09:41:46 +08:00
getattr_handler.py | [autoparallel] fix spelling error (#2270) | 2023-01-03 16:13:00 +08:00
getitem_handler.py | [NFC] polish colossalai/auto_parallel/tensor_shard/node_handler/getitem_handler.py code style (#2289) | 2023-01-04 15:09:57 +08:00
layer_norm_handler.py | [autoparallel] Patch meta information of `torch.nn.LayerNorm` (#2647) | 2023-02-10 14:29:24 +08:00
linear_handler.py | [autoparallel] distinguish different parallel strategies (#2699) | 2023-02-15 22:28:28 +08:00
matmul_handler.py | [autoparallel] Patch meta information of `torch.matmul` (#2584) | 2023-02-08 11:05:31 +08:00
node_handler.py | [autoparallel]integrate auto parallel feature with new tracer (#3408) | 2023-04-04 17:40:45 +08:00
normal_pooling_handler.py | [autoparallel] use metainfo in handler (#2149) | 2022-12-20 10:31:22 +08:00
output_handler.py | [autoparallel] fix spelling error (#2270) | 2023-01-03 16:13:00 +08:00
permute_handler.py | [autoparallel] refactor handlers which reshape input tensors (#2615) | 2023-02-08 15:02:49 +08:00
placeholder_handler.py | [autoparallel] fix spelling error (#2270) | 2023-01-03 16:13:00 +08:00
registry.py
softmax_handler.py | [autoparallel] implement softmax handler (#2132) | 2022-12-14 16:09:53 +08:00
split_handler.py | [autoparallel] refactor handlers which reshape input tensors (#2615) | 2023-02-08 15:02:49 +08:00
sum_handler.py | [autoparallel] add sum handler (#2101) | 2022-12-08 17:02:54 +08:00
tensor_constructor_handler.py | [autoparallel] add tensor constructor handler (#2082) | 2022-12-06 10:20:10 +08:00
transpose_handler.py | [autoparallel] refactor handlers which reshape input tensors (#2615) | 2023-02-08 15:02:49 +08:00
unary_elementwise_handler.py | [autockpt] make it work. (#2257) | 2023-01-02 23:37:45 +08:00
view_handler.py | [autoparallel] refactor handlers which reshape input tensors (#2615) | 2023-02-08 15:02:49 +08:00
where_handler.py