ColossalAI/colossalai/nn
Latest commit: 504ff1d101 by Jiarui Fang, "[embeddings] use cache_ratio instead of cuda_row_num (#1611)", 2022-09-20 14:33:04 +08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| _ops | [NFC] polish colossalai/nn/_ops/embedding.py code style (#1561) | 2022-09-08 22:11:04 +08:00 |
| graph | [NFC] polish doc style for ColoTensor (#1457) | 2022-08-16 09:21:05 +08:00 |
| layer | [NFC] polish colossalai/nn/layer/colossalai_layer/dropout.py code style (#1568) | 2022-09-08 22:11:04 +08:00 |
| loss | [NFC] polish colossalai/nn/loss/loss_2p5d.py code style (#1553) | 2022-09-08 22:11:04 +08:00 |
| lr_scheduler | [NFC] polish colossalai/nn/lr_scheduler/multistep.py code style (#1572) | 2022-09-08 22:11:04 +08:00 |
| metric | [hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622) | 2022-04-02 16:12:04 +08:00 |
| optimizer | fix nvme docstring (#1450) | 2022-08-12 18:01:02 +08:00 |
| parallel | [embeddings] use cache_ratio instead of cuda_row_num (#1611) | 2022-09-20 14:33:04 +08:00 |
| __init__.py | [pipeline] refactor the pipeline module (#1087) | 2022-06-10 11:27:38 +08:00 |
| init.py | [NFC] polish colossalai/nn/init.py code style (#1292) | 2022-07-13 10:51:55 +08:00 |
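The headline commit (#1611) changes how the size of the GPU-resident embedding cache under `parallel` is specified: as a fraction of the full table (`cache_ratio`) rather than an absolute row count (`cuda_row_num`), so the cache scales with the embedding table. The sketch below illustrates that idea only; it is not ColossalAI's actual cached-embedding API, and the names `RatioSizedCache` and `cache_ratio` here are hypothetical.

```python
# Hypothetical sketch of the idea behind commit #1611: size a
# software-managed embedding-row cache by a ratio of the table
# (cache_ratio) instead of a hard-coded row count (cuda_row_num).
# Not ColossalAI's real API; illustrative names only.
from collections import OrderedDict

import torch


class RatioSizedCache:
    """Keep the hottest embedding rows cached, with capacity derived
    from cache_ratio * num_embeddings and LRU eviction."""

    def __init__(self, weight: torch.Tensor, cache_ratio: float = 0.01):
        assert 0.0 < cache_ratio <= 1.0
        self.weight = weight  # full table, resident in host memory
        # The ratio replaces an absolute row count: capacity adapts
        # automatically when the table grows or shrinks.
        self.capacity = max(1, int(cache_ratio * weight.size(0)))
        self.cache: "OrderedDict[int, torch.Tensor]" = OrderedDict()

    def lookup(self, idx: int) -> torch.Tensor:
        if idx in self.cache:            # hit: refresh LRU order
            self.cache.move_to_end(idx)
            return self.cache[idx]
        row = self.weight[idx].clone()   # miss: fetch from the full table
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[idx] = row
        return row


table = torch.randn(10_000, 64)                    # 10k rows, 64-dim
cache = RatioSizedCache(table, cache_ratio=0.05)   # cache 5% = 500 rows
print(cache.capacity, cache.lookup(42).shape)      # 500 torch.Size([64])
```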