Mirror of https://github.com/hpcaitech/ColossalAI
Latest commit message (a squash of the following changes):

* add fused qkv (see the sketch after this list)
* replace attn and mlp by shardformer
* fix bugs in mlp
* add docstrings
* fix test_inference_engine.py
* optimize unbind
* add fused_addmm
* rm squeeze(1)
* refactor codes
* fix ci bugs
* rename ShardFormerLlamaMLP and ShardFormerLlamaAttention
* remove the dependency on LlamaFlashAttention2
* roll back test_inference_engine.py
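The "fused qkv" and "optimize unbind" items above describe a common inference optimization: merging the three separate Q/K/V projection GEMMs of a Llama-style attention block into a single larger one. The sketch below is a minimal illustration of that idea, not ColossalAI's actual ShardFormerLlamaAttention; the class name `FusedQKVProj` is hypothetical, and it assumes equal Q/K/V widths (no grouped-query attention).

```python
import torch
import torch.nn as nn


class FusedQKVProj(nn.Module):
    """Hypothetical sketch of a fused QKV projection.

    One (hidden, 3 * hidden) GEMM replaces three (hidden, hidden) GEMMs,
    cutting kernel-launch overhead and improving GPU utilization.
    """

    def __init__(self, hidden_size: int) -> None:
        super().__init__()
        self.qkv_proj = nn.Linear(hidden_size, 3 * hidden_size, bias=False)

    def forward(self, hidden_states: torch.Tensor):
        # (batch, seq, 3 * hidden) -> (batch, seq, 3, hidden)
        qkv = self.qkv_proj(hidden_states)
        qkv = qkv.view(*hidden_states.shape[:-1], 3, -1)
        # torch.unbind splits a dimension into views without copying;
        # the "optimize unbind" commit item suggests this split is hot.
        q, k, v = torch.unbind(qkv, dim=-2)
        return q, k, v


# Usage: each output has the same shape as the input hidden states.
proj = FusedQKVProj(hidden_size=4096)
q, k, v = proj(torch.randn(1, 16, 4096))  # each is (1, 16, 4096)
```

The "add fused_addmm" item likely refers to `torch.addmm`, which folds the bias add into the same GEMM call (e.g. `torch.addmm(bias, x, weight.t())` on a flattened 2-D input) instead of running a separate elementwise add afterward.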
Files in this directory:

* benchmark_llama.py
* build_smoothquant_weight.py
* run_benchmark.sh
* run_llama_inference.py