ColossalAI/colossalai/nn/parallel/layers

Latest commit: 504ff1d101 by Jiarui Fang
[embeddings] use cache_ratio instead of cuda_row_num (#1611)
2022-09-20 14:33:04 +08:00
Name              Latest commit message                                                                  Commit date
cache_embedding   [embeddings] use cache_ratio instead of cuda_row_num (#1611)                           2022-09-20 14:33:04 +08:00
__init__.py       [embedding] freq_aware_embedding: add small functions for caller application (#1537)   2022-09-05 15:12:53 +08:00
colo_module.py    [tensor] a shorter shard and replicate spec (#1245)                                     2022-07-11 15:51:48 +08:00
embedding.py      [tensor] a shorter shard and replicate spec (#1245)                                     2022-07-11 15:51:48 +08:00
linear.py         [tensor] a shorter shard and replicate spec (#1245)                                     2022-07-11 15:51:48 +08:00
module_utils.py   [hotfix] fix unit test test_module_spec (#1321)                                         2022-07-15 14:02:32 +08:00
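
The latest commit above replaces an absolute row count (cuda_row_num) with a cache_ratio when sizing the cache used by cache_embedding. The following is a minimal, hypothetical sketch of that sizing idea only; the names RowCacheConfig and cached_rows are illustrative and are not the ColossalAI API.

# Hypothetical sketch: derive an absolute CUDA row budget from a ratio of
# the total embedding rows, rather than configuring the count directly.
from dataclasses import dataclass


@dataclass
class RowCacheConfig:
    num_embeddings: int   # total rows in the embedding table
    cache_ratio: float    # fraction of rows to keep resident on the GPU

    def cached_rows(self) -> int:
        """Convert the ratio into a concrete number of cached rows."""
        return max(1, int(self.num_embeddings * self.cache_ratio))


if __name__ == "__main__":
    cfg = RowCacheConfig(num_embeddings=1_000_000, cache_ratio=0.01)
    print(cfg.cached_rows())  # 10000 rows kept in the GPU-side cache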