ColossalAI/colossalai/nn
Latest commit: [FAW] refactor reorder() for CachedParamMgr (#1514) by Jiarui Fang (af5438caa2), 2022-08-29 14:22:07 +08:00
Name          Last commit                                                                         Date
_ops          [tensor] added linear implementation for the new sharding spec (#1416)              2022-08-12 11:33:09 +08:00
graph         [NFC] polish doc style for ColoTensor (#1457)                                       2022-08-16 09:21:05 +08:00
layer         [NFC] polish colossalai/nn/layer/wrapper/pipeline_wrapper.py code style (#1303)    2022-07-13 19:01:07 +08:00
loss          [tensor] add unitest for colo_tensor 1DTP cross_entropy (#1230)                     2022-07-07 19:17:23 +08:00
lr_scheduler  [NFC] polish colossalai/nn/lr_scheduler/onecycle.py code style (#1269)              2022-07-13 12:08:21 +08:00
metric        [hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622)  2022-04-02 16:12:04 +08:00
optimizer     fix nvme docstring (#1450)                                                          2022-08-12 18:01:02 +08:00
parallel      [FAW] refactor reorder() for CachedParamMgr (#1514)                                 2022-08-29 14:22:07 +08:00
__init__.py   [pipeline] refactor the pipeline module (#1087)                                     2022-06-10 11:27:38 +08:00
init.py       [NFC] polish colossalai/nn/init.py code style (#1292)                               2022-07-13 10:51:55 +08:00