ColossalAI/colossalai/shardformer/layer
Latest commit: d6df19bae7 by Xuanlei Zhao, 2023-11-30 14:21:30 +08:00
[npu] support triangle attention for llama (#5130)

Squashed commits:
  * update fused attn
  * update spda
  * tri attn
  * update triangle
  * import
  * fix
  * fix
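
For context, "triangle attention" here refers to causal (lower-triangular) masked attention, where each token may only attend to itself and earlier positions. The snippet below is a minimal, generic sketch of that masking pattern using PyTorch's scaled_dot_product_attention; it is not the fused NPU kernel added by this PR, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of causal ("triangle") attention: a lower-triangular mask
# restricts each query position to keys at the same or earlier positions.
batch, heads, seq_len, head_dim = 2, 8, 16, 64
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# Explicit lower-triangular boolean mask (True = position may be attended to).
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
out_masked = F.scaled_dot_product_attention(q, k, v, attn_mask=causal_mask)

# Equivalent result via the built-in causal flag.
out_causal = F.scaled_dot_product_attention(q, k, v, is_causal=True)
assert torch.allclose(out_masked, out_causal, atol=1e-5)
```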
File                  Last commit                                                                    Date
__init__.py           [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926)     2023-11-03 13:32:43 +08:00
_operation.py         [gemini] gemini support tensor parallelism. (#4942)                            2023-11-10 10:15:16 +08:00
dropout.py            [misc] update pre-commit and run all files (#4752)                             2023-09-19 14:20:26 +08:00
embedding.py          [gemini] gemini support tensor parallelism. (#4942)                            2023-11-10 10:15:16 +08:00
linear.py             [Inference] Fix bug in ChatGLM2 Tensor Parallelism (#5014)                     2023-11-07 15:01:50 +08:00
loss.py               [misc] update pre-commit and run all files (#4752)                             2023-09-19 14:20:26 +08:00
normalization.py      [npu] add npu support for gemini and zero (#5067)                              2023-11-20 16:12:41 +08:00
parallel_module.py    [misc] update pre-commit and run all files (#4752)                             2023-09-19 14:20:26 +08:00
qkv_fused_linear.py   [misc] update pre-commit and run all files (#4752)                             2023-09-19 14:20:26 +08:00
utils.py              [npu] support triangle attention for llama (#5130)                             2023-11-30 14:21:30 +08:00
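
The linear.py and qkv_fused_linear.py entries hold the tensor-parallel linear layers touched by the Gemini/tensor-parallelism commits listed above. Below is a hedged, framework-free sketch of the column-parallel idea those layers build on (splitting the weight along the output dimension across ranks); the names and shapes are illustrative and are not ColossalAI's API.

```python
import torch

# Column-parallel linear, sketched without a distributed runtime: the weight of
# a Linear(in_features, out_features) is split along the output dimension, each
# "rank" computes its local matmul, and concatenating the partial outputs
# reproduces the full result. ColossalAI's sharded layers apply this scheme
# over a real process group; this code only demonstrates the math.
in_features, out_features, world_size = 8, 12, 4
x = torch.randn(3, in_features)
full_weight = torch.randn(out_features, in_features)

shards = torch.chunk(full_weight, world_size, dim=0)   # one weight shard per rank
partial_outputs = [x @ w.t() for w in shards]          # each rank's local matmul
gathered = torch.cat(partial_outputs, dim=-1)          # gather along the feature dim

assert torch.allclose(gathered, x @ full_weight.t(), atol=1e-6)
```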