Making large AI models cheaper, faster and more accessible
Latest commit: yuehuayingxueluo · 2a718c8be8 · Optimized the execution interval time between cuda kernels caused by view and memcopy (#5390) · 9 months ago
| Name | Last commit message | Age |
| --- | --- | --- |
| test_models | [Infer] Optimize Blocked KVCache And Kernels Using It (#5325) | 10 months ago |
| test_ops/triton | Optimized the execution interval time between cuda kernels caused by view and memcopy (#5390) | 9 months ago |
| _utils.py | [Inference] Add the logic of the inference engine (#5173) | 11 months ago |
| test_batch_bucket.py | [Inference] Optimize and Refactor Inference Batching/Scheduling (#5367) | 9 months ago |
| test_config_and_struct.py | [Inference] Optimize and Refactor Inference Batching/Scheduling (#5367) | 9 months ago |
| test_inference_engine.py | [Inference] User Experience: update the logic of default tokenizer and generation config. (#5337) | 10 months ago |
| test_kvcache_manager.py | [Inference] Optimize and Refactor Inference Batching/Scheduling (#5367) | 9 months ago |
| test_request_handler.py | [Inference] Optimize and Refactor Inference Batching/Scheduling (#5367) | 9 months ago |