ColossalAI/colossalai/utils

Latest commit: 01a80cd86d by アマデウス, 2021-12-29 23:32:10 +08:00

Hotfix/Colossalai layers (#92)

* optimized 1d layer apis; reorganized nn.layer modules; fixed tests
* fixed 2.5d runtime issue
* reworked split batch, now called in trainer.schedule.load_batch

Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
Directories:
  data_sampler/               update examples and sphinx docs for the new api (#63)     2021-12-13 22:07:01 +08:00
  gradient_accumulation/      update examples and sphinx docs for the new api (#63)     2021-12-13 22:07:01 +08:00
  multi_tensor_apply/         update examples and sphinx docs for the new api (#63)     2021-12-13 22:07:01 +08:00

Files:
  __init__.py                 Hotfix/Colossalai layers (#92)                            2021-12-29 23:32:10 +08:00
  activation_checkpoint.py    Migrated project                                          2021-10-28 18:21:23 +02:00
  checkpointing.py            Support TP-compatible Torch AMP and Update trainer API (#27)  2021-11-18 19:45:06 +08:00
  common.py                   Hotfix/Colossalai layers (#92)                            2021-12-29 23:32:10 +08:00
  cuda.py                     Migrated project                                          2021-10-28 18:21:23 +02:00
  memory.py                   Layer integration (#83)                                   2021-12-27 15:04:32 +08:00
  timer.py                    update examples and sphinx docs for the new api (#63)     2021-12-13 22:07:01 +08:00
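The gradient_accumulation/ directory implements gradient accumulation, a technique for emulating a larger effective batch size by summing gradients over several micro-batches before a single optimizer step. A framework-agnostic sketch of the idea follows; the names `accumulate_and_step` and `GRAD_ACCUM_STEPS` are illustrative only and are not the actual colossalai.utils API:

```python
# Sketch of gradient accumulation on a single scalar parameter:
# gradients from several micro-batches are summed, averaged, and
# applied in one SGD-style update instead of one update per batch.

GRAD_ACCUM_STEPS = 4  # hypothetical setting: micro-batches per update

def accumulate_and_step(param, micro_batch_grads, lr=0.1):
    """Average gradients over micro-batches, then apply one SGD step."""
    total = 0.0
    for g in micro_batch_grads:
        total += g                      # accumulate instead of stepping
    avg_grad = total / len(micro_batch_grads)
    return param - lr * avg_grad        # single parameter update

new_param = accumulate_and_step(1.0, [0.2, 0.4, 0.6, 0.8])
# avg_grad = 0.5, so new_param = 1.0 - 0.1 * 0.5 = 0.95
```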
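timer.py provides timing utilities for profiling training. A minimal sketch of such a start/stop timer is shown below, assuming a cumulative-elapsed interface; the actual class names and methods in colossalai.utils.timer may differ:

```python
import time

class Timer:
    """Minimal start/stop timer sketch; accumulates elapsed seconds
    across multiple start/stop cycles."""

    def __init__(self):
        self._start = None   # perf_counter value at the last start()
        self.elapsed = 0.0   # total measured seconds so far

    def start(self):
        self._start = time.perf_counter()

    def stop(self):
        # add the time since the last start() to the running total
        self.elapsed += time.perf_counter() - self._start
        self._start = None
        return self.elapsed

t = Timer()
t.start()
time.sleep(0.01)   # stand-in for a timed region, e.g. a forward pass
t.stop()
```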