ColossalAI/colossalai/shardformer/layer
Latest commit: dc2cdaf3e8 by Hongxin Liu, 2024-10-11 13:44:40 +08:00
[shardformer] optimize seq parallelism (#6086)
* [shardformer] optimize seq parallelism
* [shardformer] fix gpt2 fused linear col
* [plugin] update gemini plugin
* [plugin] update moe hybrid plugin
* [test] update gpt2 fused linear test
* [shardformer] fix gpt2 fused linear reduce
File                 | Last commit                                                                              | Date
__init__.py          | [shardformer] fix linear 1d row and support uneven splits for fused qkv linear (#6084)  | 2024-10-10 14:34:45 +08:00
_operation.py        | [shardformer] optimize seq parallelism (#6086)                                          | 2024-10-11 13:44:40 +08:00
attn.py              | fix                                                                                     | 2024-09-16 13:45:04 +08:00
dropout.py           | [misc] update pre-commit and run all files (#4752)                                      | 2023-09-19 14:20:26 +08:00
embedding.py         | [fp8] support hybrid parallel plugin (#5982)                                            | 2024-08-12 18:17:05 +08:00
linear.py            | [shardformer] optimize seq parallelism (#6086)                                          | 2024-10-11 13:44:40 +08:00
loss.py              | [Feature] Split cross-entropy computation in SP (#5959)                                 | 2024-09-10 12:06:50 +08:00
normalization.py     | [fp8] Merge feature/fp8_comm to main branch of Colossalai (#6016)                       | 2024-08-22 09:21:34 +08:00
parallel_module.py   | [shardformer] refactor embedding resize (#5603)                                         | 2024-04-18 16:10:18 +08:00
qkv_fused_linear.py  | [shardformer] optimize seq parallelism (#6086)                                          | 2024-10-11 13:44:40 +08:00
utils.py             | [Feature] Split cross-entropy computation in SP (#5959)                                 | 2024-09-10 12:06:50 +08:00
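The files listed above hold the sharded layer implementations (column/row-parallel linears, fused QKV linear, parallel embeddings, dropout, normalization, loss). As a rough, self-contained illustration of the column-parallel idea behind linear.py and qkv_fused_linear.py, here is a toy sketch in plain PyTorch. It is not ColossalAI's API; the class and parameter names below are hypothetical, and a real implementation would shard across ranks with torch.distributed process groups and gather outputs with a collective rather than a local loop.

import torch
import torch.nn as nn

# Conceptual sketch (not ColossalAI's API): a 1D column-parallel linear layer.
# Each rank holds a slice of the output dimension; concatenating the per-rank
# outputs along the last dim reconstructs the full result.
class ToyColumnParallelLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, world_size: int):
        super().__init__()
        assert out_features % world_size == 0, "toy version assumes even splits"
        self.local_out = out_features // world_size
        # This rank's shard of the weight matrix; a real implementation only
        # allocates its own shard and communicates via torch.distributed.
        self.weight = nn.Parameter(torch.randn(self.local_out, in_features) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Produces only this rank's slice of the output features.
        return x @ self.weight.t()

if __name__ == "__main__":
    torch.manual_seed(0)
    world_size, in_f, out_f = 2, 8, 16
    full = nn.Linear(in_f, out_f, bias=False)
    x = torch.randn(4, in_f)
    shards = []
    for rank in range(world_size):
        layer = ToyColumnParallelLinear(in_f, out_f, world_size)
        # Copy the matching rows of the full weight so outputs are comparable.
        layer.weight.data.copy_(full.weight[rank * layer.local_out:(rank + 1) * layer.local_out])
        shards.append(layer(x))
    # Concatenating the shards matches the unsharded layer (the "gather output" step).
    assert torch.allclose(torch.cat(shards, dim=-1), full(x), atol=1e-6)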