ColossalAI/colossalai/shardformer/layer
Latest commit: 148469348a Merge branch 'main' into sync/npu (ver217, 2024-01-18)
File                  Last commit                                                                                   Date
__init__.py           [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926)                    2023-11-03
_operation.py         support linear accumulation fusion (#5199)                                                    2023-12-29
dropout.py            [misc] update pre-commit and run all files (#4752)                                            2023-09-19
embedding.py          [gemini] gemini support tensor parallelism. (#4942)                                           2023-11-10
linear.py             support linear accumulation fusion (#5199)                                                    2023-12-29
loss.py               [shardformer] llama support DistCrossEntropy (#5176)                                          2023-12-13
normalization.py      [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088)   2023-11-28
parallel_module.py    [misc] update pre-commit and run all files (#4752)                                            2023-09-19
qkv_fused_linear.py   [misc] update pre-commit and run all files (#4752)                                            2023-09-19
utils.py              [npu] change device to accelerator api (#5239)                                                2024-01-09
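
For orientation, linear.py implements the tensor-parallel linear layers that shardformer policies substitute for plain nn.Linear modules. Below is a minimal sketch of that substitution; it assumes the exported name Linear1D_Col, its from_native_module classmethod, and the gather_output keyword match this snapshot of the package, and it must be launched with torchrun across two or more ranks.

```python
# Minimal sketch: convert a plain nn.Linear into a column-parallel shard.
# Assumptions (not verified against this exact snapshot): colossalai.shardformer.layer
# exports Linear1D_Col with a from_native_module(module, process_group, ...) classmethod
# that accepts a gather_output keyword. Launch with:
#   torchrun --nproc_per_node=2 this_script.py
import torch
import torch.distributed as dist
from torch import nn

from colossalai.shardformer.layer import Linear1D_Col


def main():
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU nodes
    pg = dist.group.WORLD

    # Each rank keeps out_features / world_size columns of the original weight.
    dense = nn.Linear(16, 32)
    col_parallel = Linear1D_Col.from_native_module(dense, process_group=pg, gather_output=True)

    x = torch.randn(4, 16)
    y = col_parallel(x)  # gather_output=True all-gathers back to the full 32 output features
    print(f"rank {dist.get_rank()}: output shape {tuple(y.shape)}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The other files listed above (embedding.py, dropout.py, normalization.py, loss.py, qkv_fused_linear.py) provide the corresponding parallel replacements for their module types, built on the ParallelModule base class defined in parallel_module.py.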