Mirror of https://github.com/hpcaitech/ColossalAI
Commit summary:

* add fused qkv
* replace attn and mlp by shardformer
* fix bugs in mlp
* add docstrings
* fix test_inference_engine.py
* optimize unbind
* add fused_addmm
* remove squeeze(1)
* refactor code
* fix CI bugs
* rename ShardFormerLlamaMLP and ShardFormerLlamaAttention
* remove the dependency on LlamaFlashAttention2
* roll back test_inference_engine.py
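The "add fused qkv" item refers to a common optimization: instead of running three separate linear projections for the query, key, and value tensors, their weight matrices are concatenated so a single matmul produces all three at once. A minimal numpy sketch of the idea follows; the names and shapes here are illustrative assumptions, not ColossalAI's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 8
x = rng.standard_normal((4, hidden))  # (seq_len, hidden) activations

# Hypothetical per-projection weights, as a plain attention layer would hold them.
w_q = rng.standard_normal((hidden, hidden))
w_k = rng.standard_normal((hidden, hidden))
w_v = rng.standard_normal((hidden, hidden))

# Unfused: three separate matmuls.
q, k, v = x @ w_q, x @ w_k, x @ w_v

# Fused: concatenate the weights once, then do a single matmul and split.
w_qkv = np.concatenate([w_q, w_k, w_v], axis=1)  # (hidden, 3 * hidden)
qkv = x @ w_qkv
q_f, k_f, v_f = np.split(qkv, 3, axis=1)

# The fused path is numerically identical to the unfused one.
assert np.allclose(q, q_f) and np.allclose(k, k_f) and np.allclose(v, v_f)
```

The win is that one large matmul typically keeps the GPU better utilized than three small ones, and the activations are read from memory once instead of three times.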
Directory contents:

* jit
* triton
* extensions
* __init__.py
* kernel_loader.py