ColossalAI/colossalai/nn
Latest commit: 51b9a49655 by ver217, 2022-06-02 12:13:15 +08:00
[zero] add zero optimizer for ColoTensor (#1046)

* add zero optimizer
* torch ok
* unit test ok
* polish code
* fix bugs
* polish unit test
* polish zero optim
* polish colo ddp v2
* refactor folder structure
* add comment
* polish unit test
* polish zero optim
* polish unit test
layer/         [NFC] polish colossalai/nn/layer/utils/common.py code style (#983)                   2022-05-17 10:25:06 +08:00
loss/          [hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622)   2022-04-02 16:12:04 +08:00
lr_scheduler/  Refactored docstring to google style                                                  2022-03-29 17:17:47 +08:00
metric/        [hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622)   2022-04-02 16:12:04 +08:00
model/         Develop/experiments (#59)                                                            2021-12-09 15:08:29 +08:00
optimizer/     [zero] add zero optimizer for ColoTensor (#1046)                                     2022-06-02 12:13:15 +08:00
__init__.py    Layer integration (#83)                                                              2021-12-27 15:04:32 +08:00
init.py        Refactored docstring to google style                                                  2022-03-29 17:17:47 +08:00
parallel.py    [zero] add zero optimizer for ColoTensor (#1046)                                     2022-06-02 12:13:15 +08:00