146 Commits (bd38fe6b912379080673a43d77fd3bdf0e5c852e)

Author SHA1 Message Date
Runyu Lu aabc9fb6aa [feat] add use_cuda_kernel option 8 months ago
Runyu Lu 6e30248683 [fix] tmp for test 8 months ago
Runyu Lu ae24b4f025 diverse tests 8 months ago
Runyu Lu 1821a6dab0 [fix] pytest and fix dyn grid bug 8 months ago
yuehuayingxueluo f366a5ea1f [Inference/kernel]Add Fused Rotary Embedding and KVCache Memcopy CUDA Kernel (#5418) 8 months ago
digger yu 385e85afd4 [hotfix] fix typo s/keywrods/keywords etc. (#5429) 9 months ago
Runyu Lu 633e95b301 [doc] add doc 9 months ago
Runyu Lu 9dec66fad6 [fix] multi graphs capture error 9 months ago
Runyu Lu b2c0d9ff2b [fix] multi graphs capture error 9 months ago
Steve Luo f7aecc0c6b feat rmsnorm cuda kernel and add unittest, benchmark script (#5417) 9 months ago
Runyu Lu cefaeb5fdd [feat] cuda graph support and refactor non-functional api 9 months ago
digger yu 16c96d4d8c [hotfix] fix typo change _descrption to _description (#5331) 9 months ago
yuehuayingxueluo 600881a8ea [Inference]Add CUDA KVCache Kernel (#5406) 9 months ago
yuehuayingxueluo bc1da87366 [Fix/Inference] Fix format of input prompts and input model in inference engine (#5395) 9 months ago
yuehuayingxueluo 2a718c8be8 Optimized the execution interval time between cuda kernels caused by view and memcopy (#5390) 9 months ago
Jianghai 730103819d [Inference]Fused kv copy into rotary calculation (#5383) 9 months ago
Yuanheng Zhao b21aac5bae [Inference] Optimize and Refactor Inference Batching/Scheduling (#5367) 9 months ago
yuehuayingxueluo 8c69debdc7 [Inference]Support vllm testing in benchmark scripts (#5379) 10 months ago
Frank Lee 9afa52061f [inference] refactored config (#5376) 10 months ago
Jianghai 1f8c7e7046 [Inference] User Experience: update the logic of default tokenizer and generation config. (#5337) 10 months ago
yuehuayingxueluo 6fb4bcbb24 [Inference/opt] Fused KVCahce Memcopy (#5374) 10 months ago
Frank Lee 58740b5f68 [inference] added inference template (#5375) 10 months ago
Frank Lee 8106ede07f Revert "[Inference] Adapt to Fused rotary (#5348)" (#5373) 10 months ago
Jianghai 9f4ab2eb92 [Inference] Adapt to Fused rotary (#5348) 10 months ago
yuehuayingxueluo 35382a7fbf [Inference]Fused the gate and up proj in mlp,and optimized the autograd process. (#5365) 10 months ago
Yuanheng Zhao 1dedb57747 [Fix/Infer] Remove unused deps and revise requirements (#5341) 10 months ago
yuehuayingxueluo 631862f339 [Inference]Optimize generation process of inference engine (#5356) 10 months ago
yuehuayingxueluo 21ad4a27f9 [Inference/opt]Optimize the mid tensor of RMS Norm (#5350) 10 months ago
Frank Lee 027aa1043f [doc] updated inference readme (#5343) 10 months ago
Frank Lee db1a763307 [inference] removed redundancy init_batch (#5353) 10 months ago
yuehuayingxueluo 249644c23b [Inference]Repalce Attention layer and MLP layer by shardformer to optimize the weight transpose operation,add fused_qkv and fused linear_add (#5340) 10 months ago
Frank Lee f8e456d202 [inference] simplified config verification (#5346) 10 months ago
Yuanheng Zhao 5f98a9d68a [Infer] Optimize Blocked KVCache And Kernels Using It (#5325) 10 months ago
yuehuayingxueluo e8f0642f28 [Inference]Add Nopadding Llama Modeling (#5327) 10 months ago
Jianghai c7c104cb7c [DOC] Update inference readme (#5280) 10 months ago
yuehuayingxueluo 4f28cb43c0 [inference]Optimize the usage of the mid tensors space in flash attn (#5304) 10 months ago
Yuanheng Zhao 3da9993b0d [Kernel/Fix] Revise flash attention triton kernel API and add benchmark (#5301) 10 months ago
yuehuayingxueluo cea9c86e45 add utils.py 10 months ago
yuehuayingxueluo bfff9254ac [inference] Adapted to Rotary Embedding and RMS Norm (#5283) 10 months ago
Yuanheng Zhao 6e487e7d3c [kernel/fix] Performance Optimization for Decoding Kernel and Benchmarking (#5274) 10 months ago
Jianghai 9e2342bde2 [Hotfix] Fix bugs in testing continuous batching (#5270) 10 months ago
yuehuayingxueluo 86b63f720c [Inference]Adapted to the triton attn kernels (#5264) 10 months ago
Jianghai d8db500efc [Inference] Fix request handler and add recycle logic (#5260) 10 months ago
Frank Lee c597678da4 [doc] updated inference readme (#5269) 10 months ago
Yuanheng Zhao fa85e02b3b [kernel] Add KV cache copy kernel during decoding (#5261) 10 months ago
FrankLeeeee 1ded7e81ef [git] fixed rebased files 11 months ago
yuehuayingxueluo d40eb26029 fix bugs in request_handler.py and engine.py 11 months ago
yuehuayingxueluo 10e3c9f923 rm torch.cuda.synchronize 11 months ago
yuehuayingxueluo fab294c7f4 fix CI bugs 11 months ago
yuehuayingxueluo 2a73e828eb fix bugs related to processing padding mask 11 months ago