ColossalAI/colossalai/auto_parallel/tensor_shard
Hongxin Liu 554aa9592e
[legacy] move communication and nn to legacy and refactor logger (#4671)
* [legacy] move communication to legacy (#4640)

* [legacy] refactor logger and clean up legacy codes (#4654)

* [legacy] make logger independent of gpc

* [legacy] make optim independent of registry

* [legacy] move test engine to legacy

* [legacy] move nn to legacy (#4656)

* [legacy] move nn to legacy

* [checkpointio] fix save hf config

* [test] remove useless rpc pp test

* [legacy] fix nn init

* [example] skip tutorial hybrid parallel example

* [devops] test doc check

* [devops] test doc check
2023-09-11 16:24:28 +08:00
| Name | Last commit | Last commit date |
|---|---|---|
| node_handler | [legacy] move communication and nn to legacy and refactor logger (#4671) | 2023-09-11 16:24:28 +08:00 |
| solver | [NFC] fix typo with colossalai/auto_parallel/tensor_shard (#3742) | 2023-05-17 11:13:23 +08:00 |
| utils | [test] fixed tests failed due to dtensor change (#4082) | 2023-07-04 16:05:01 +08:00 |
| __init__.py | [autoparallel] init new folder structure (#1696) | 2022-10-13 14:18:55 +08:00 |
| constants.py | [autoparallel] adapt solver with self attention (#2037) | 2022-12-01 17:53:15 +08:00 |
| initialize.py | [autoparallel] integrate auto parallel feature with new tracer (#3408) | 2023-04-04 17:40:45 +08:00 |
| options.py | [autoparallel] add shard option (#2696) | 2023-02-15 13:48:28 +08:00 |
| sharding_strategy.py | [autoparallel] memory estimation for shape consistency (#2144) | 2022-12-21 10:39:37 +08:00 |