ColossalAI/colossalai/shardformer/layer
Latest commit: 049121d19d by digger yu, "[hotfix] fix typo change enabel to enable under colossalai/shardformer/ (#5317)", 2024-03-05 21:48:46 +08:00
File                 | Last commit                                                                      | Date
---------------------|----------------------------------------------------------------------------------|---------------------------
__init__.py          | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926)       | 2023-11-03 13:32:43 +08:00
_operation.py        | [hotfix] fix typo change enabel to enable under colossalai/shardformer/ (#5317)  | 2024-03-05 21:48:46 +08:00
dropout.py           | [misc] update pre-commit and run all files (#4752)                               | 2023-09-19 14:20:26 +08:00
embedding.py         | [gemini] gemini support tensor parallelism. (#4942)                              | 2023-11-10 10:15:16 +08:00
linear.py            | support linear accumulation fusion (#5199)                                       | 2023-12-29 18:22:42 +08:00
loss.py              | [shardformer] llama support DistCrossEntropy (#5176)                             | 2023-12-13 01:39:14 +08:00
normalization.py     | [hotfix] fix typo change enabel to enable under colossalai/shardformer/ (#5317)  | 2024-03-05 21:48:46 +08:00
parallel_module.py   | [misc] update pre-commit and run all files (#4752)                               | 2023-09-19 14:20:26 +08:00
qkv_fused_linear.py  | [misc] update pre-commit and run all files (#4752)                               | 2023-09-19 18:22:42 +08:00
utils.py             | [npu] change device to accelerator api (#5239)                                   | 2024-01-09 10:20:05 +08:00
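The files above hold shardformer's tensor-parallel building blocks: sharded linear, embedding, loss, dropout, and normalization modules. For orientation, below is a minimal sketch of how one of these modules is typically swapped in for its torch.nn counterpart. It assumes a ColossalAI version that exports Linear1D_Col from colossalai.shardformer.layer with a from_native_module constructor, and an already-initialized 1D tensor-parallel process group; both are assumptions of this sketch, not a definitive recipe from this listing.

```python
import torch.nn as nn
import torch.distributed as dist

from colossalai.shardformer.layer import Linear1D_Col

# Assumption: torch.distributed is already initialized and all ranks in
# the default group form the tensor-parallel group for this sketch.
tp_group = dist.group.WORLD

# A plain linear layer as it would appear in an unsharded model.
full_linear = nn.Linear(1024, 4096)

# Column-parallel replacement: the weight is split along the output
# dimension, so each tensor-parallel rank holds a shard of the rows
# of the (4096, 1024) weight matrix.
col_linear = Linear1D_Col.from_native_module(full_linear, process_group=tp_group)
```

A row-parallel counterpart (Linear1D_Row in the same package) shards the input dimension instead; pairing a column-parallel projection with a row-parallel one is the usual way to split an MLP or attention block across ranks.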