mirror of https://github.com/hpcaitech/ColossalAI
Latest commit (squashed):

* add fused qkv
* replace attn and mlp by shardformer
* fix bugs in mlp
* add docstrings
* fix test_inference_engine.py
* add optimized unbind
* add fused_addmm
* remove squeeze(1)
* refactor code
* fix CI bugs
* rename ShardFormerLlamaMLP and ShardFormerLlamaAttention
* remove the dependency on LlamaFlashAttention2
* roll back test_inference_engine.py
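For context on the "fused qkv" and "unbind" items above: a common pattern is to compute Q, K, and V with a single GEMM and split the result with `unbind`, which returns views rather than copies. The sketch below is a minimal illustration of that pattern under assumed shapes; the class and parameter names are hypothetical, not the repository's actual code.

```python
import torch
import torch.nn as nn

class FusedQKVProj(nn.Module):
    """Illustrative fused QKV projection: one GEMM instead of three."""

    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        # One weight matrix covering the Q, K, and V projections.
        self.qkv_proj = nn.Linear(hidden_size, 3 * hidden_size, bias=False)

    def forward(self, x: torch.Tensor):
        bsz, seq_len, _ = x.shape
        qkv = self.qkv_proj(x)  # single fused GEMM: (bsz, seq_len, 3 * hidden)
        qkv = qkv.view(bsz, seq_len, 3, self.num_heads, self.head_dim)
        # unbind splits into three views along dim 2 without
        # materializing copies or requiring follow-up reshapes.
        q, k, v = qkv.unbind(dim=2)
        return q, k, v
```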
Directory contents (an illustrative test sketch follows the list):

* kernel_utils.py
* test_context_attn_unpad.py
* test_decoding_attn.py
* test_fused_rotary_embedding.py
* test_kvcache_copy.py
* test_rmsnorm_triton.py
* test_rotary_embdding_unpad.py
* test_xine_copy.py
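These files test the Triton kernels used by the inference engine against PyTorch references. As an illustration of the general shape of such a test, here is a minimal, hypothetical RMSNorm check; the kernel and function names are invented and are not taken from the repository.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def _rmsnorm_kernel(x_ptr, w_ptr, out_ptr, n_cols, eps, BLOCK_SIZE: tl.constexpr):
    # One program instance normalizes one contiguous row.
    row = tl.program_id(0)
    cols = tl.arange(0, BLOCK_SIZE)
    mask = cols < n_cols
    x = tl.load(x_ptr + row * n_cols + cols, mask=mask, other=0.0).to(tl.float32)
    rms = tl.sqrt(tl.sum(x * x, axis=0) / n_cols + eps)
    w = tl.load(w_ptr + cols, mask=mask, other=0.0).to(tl.float32)
    tl.store(out_ptr + row * n_cols + cols, x / rms * w, mask=mask)

def test_rmsnorm_matches_torch():
    rows, n_cols, eps = 4, 128, 1e-6
    x = torch.randn(rows, n_cols, device="cuda")
    w = torch.randn(n_cols, device="cuda")
    out = torch.empty_like(x)
    # Launch one program per row; BLOCK_SIZE must be a power of two.
    _rmsnorm_kernel[(rows,)](x, w, out, n_cols, eps,
                             BLOCK_SIZE=triton.next_power_of_2(n_cols))
    # Plain PyTorch reference implementation of RMSNorm.
    ref = x / torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps) * w
    torch.testing.assert_close(out, ref, rtol=1e-4, atol=1e-4)
```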