
[Inference] Fix readme and example for API server (#5742)

* fix chatapi readme and example

* updating doc

* add an api and change the doc

* remove

* add credits and del 'API' heading

* readme

* readme
Authored by Jianghai 6 months ago, committed by GitHub
commit 85946d4236
1. colossalai/inference/README.md (54 changed lines)
2. colossalai/inference/server/README.md (27 changed lines)
3. colossalai/inference/server/api_server.py (21 changed lines)
4. examples/inference/client/locustfile.py (9 changed lines)
5. requirements/requirements.txt (2 changed lines)

colossalai/inference/README.md (54 changed lines)

@@ -207,13 +207,13 @@ Learnt from [PagedAttention](https://arxiv.org/abs/2309.06180) by [vLLM](https:/
Request handler is responsible for managing requests and scheduling a proper batch from existing requests. Based on [Orca's](https://www.usenix.org/conference/osdi22/presentation/yu) and [vLLM's](https://github.com/vllm-project/vllm) research and work on batching requests, we applied continuous batching with unpadded sequences, which enables varying numbers of sequences to pass projections (i.e. Q, K, and V) together in different steps by hiding the number-of-sequences dimension, and reduces the latency of incoming sequences by inserting a prefill batch during a decoding step and then decoding together.
<p align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/inference/continuous_batching.png" width="800"/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/inference/naive_batching.png" width="800"/>
<br/>
<em>Naive Batching: decode until each sequence encounters eos in a batch</em>
</p>
<p align="center">
-<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/inference/naive_batching.png" width="800"/>
+<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/inference/continuous_batching.png" width="800"/>
<br/>
<em>Continuous Batching: dynamically adjust the batch size by popping out finished sequences and inserting prefill batch</em>
</p>
@@ -222,6 +222,54 @@ Request handler is responsible for managing requests and scheduling a proper bat
Modeling contains models, layers, and policy, which are hand-crafted for better performance and easier usage. Integrated with `shardformer`, users can define their own policy or use our preset policies for specific models. Our modeling files are aligned with [Transformers](https://github.com/huggingface/transformers). For more details about the usage of modeling and policy, please check `colossalai/shardformer`.
## Online Service
Colossal-Inference supports a FastAPI-based online service. Both simple completion and chat are supported. Follow the commands below to construct a server with both completion and chat functionalities. For now we support the `Llama2`, `Llama3`, and `Baichuan2` models; support for more models will be added soon.
### API
- GET '/ping':
Ping is used to check whether the server can receive and send information.
- GET '/engine_check':
Check whether the background engine is working.
- POST '/completion':
The completion API is used for single-sequence requests, such as answering a question or completing a piece of text.
- POST '/chat':
The chat API is used for conversation-style requests, which usually include the dialogue participants (i.e. roles) and their corresponding utterances. Since such inputs differ considerably from plain prompts, we introduce a chat template to match the data format expected by chat models. A Python client sketch for these endpoints follows this list.
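
The sketch below exercises these endpoints with the `requests` library. It is illustrative only: the host and port assume the server launched in the Examples section further down, the payload fields mirror the curl examples in this README, and responses are printed as-is without assuming a particular schema.

```python
# Minimal client sketch for the API above, assuming a server at http://127.0.0.1:8000.
import requests

BASE_URL = "http://127.0.0.1:8000"

# GET /ping: check that the server can receive and send information.
print(requests.get(f"{BASE_URL}/ping").json())

# GET /engine_check: check whether the background engine is working.
print(requests.get(f"{BASE_URL}/engine_check").json())

# POST /completion: single-sequence request.
resp = requests.post(f"{BASE_URL}/completion", json={"prompt": "hello, who are you? "})
print(resp.text)

# POST /chat: conversation-style request with role/content messages.
resp = requests.post(
    f"{BASE_URL}/chat",
    json={
        "messages": [
            {"role": "system", "content": "you are a helpful assistant"},
            {"role": "user", "content": "what is 1+1?"},
        ]
    },
)
print(resp.text)
```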
#### chat-template
Following `transformers`, we add the chat-template argument. Chat models are trained with very different formats for converting conversations into a single tokenizable string, and using a format that matches the training data is extremely important. This attribute (`chat_template`) is included in HuggingFace tokenizers as a Jinja template that converts conversation histories into a correctly formatted string. You can refer to the [HuggingFace blog](https://huggingface.co/blog/chat-templates) for more information. We also provide a simple example template below. Both string and file style chat templates are supported.
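
To preview what a template produces before launching the server, the short sketch below applies the example ChatML-style template (the same string used in the Examples section) to a conversation with `transformers`' `apply_chat_template`. The model path is a placeholder assumption; any HuggingFace tokenizer can be used for this preview.

```python
# Preview how a chat template formats a conversation before serving.
# The model path below is a placeholder; substitute your own model.
from transformers import AutoTokenizer

CHAT_TEMPLATE = (
    "{% for message in messages %}"
    "{{'<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>' + '\\n'}}"
    "{% endfor %}"
)

messages = [
    {"role": "system", "content": "you are a helpful assistant"},
    {"role": "user", "content": "what is 1+1?"},
]

tokenizer = AutoTokenizer.from_pretrained("<path of your model>")  # placeholder
formatted = tokenizer.apply_chat_template(messages, chat_template=CHAT_TEMPLATE, tokenize=False)
print(formatted)
```

To use a file-style template instead, save the same Jinja string to a file and pass its path via `--chat-template` when launching the server.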
### Usage
#### Args for customizing your server
The configuration for the API server covers both the serving interface and the engine backend.
For the interface:
- `--host`: The host address for the server.
- `--port`: The port for the service.
- `--model`: The model used by the backend engine; both a local path and a Transformers model card name are supported.
- `--chat-template`: The file path of the chat template, or the template string itself.
- `--response-role`: The role that Colossal-Inference plays in chat responses.
For the engine backend:
- `--block_size`: The token block size of the KV cache.
- `--max_batch_size`: The maximum batch size for the engine to infer; this affects inference speed.
- `--max_input_len`: The maximum input length of a request.
- `--max_output_len`: The maximum output length of a response.
- `--dtype` and `--use_cuda_kernel`: Decide the precision and whether CUDA kernels are used.
For more detailed arguments, please refer to the source code.
### Examples
```bash
# First, launch an API server locally.
python3 -m colossalai.inference.server.api_server --model <path of your model> --chat-template "{% for message in messages %}{{'<|im_start|>'+message['role']+'\n'+message['content']+'<|im_end|>'+'\n'}}{% endfor %}"
# Second, open `http://127.0.0.1:8000/docs` in a browser to check the API.
# For the completion service, invoke it as follows
curl -X POST http://127.0.0.1:8000/completion -H 'Content-Type: application/json' -d '{"prompt":"hello, who are you? "}'
# For the chat service, invoke it as follows
curl -X POST http://127.0.0.1:8000/chat -H 'Content-Type: application/json' -d '{"messages":[{"role":"system","content":"you are a helpful assistant"},{"role":"user","content":"what is 1+1?"}]}'
# You can check the engine status at any time
curl http://localhost:8000/engine_check
```
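
Since the background engine may take a moment to come up, a client can poll `/engine_check` until it reports `Running` before sending requests. The sketch below is one way to do that, assuming the same host and port as the examples above; the 60-second timeout is arbitrary.

```python
# Wait for the background engine to report "Running", then send a chat request.
# Host, port, and the 60 s budget are assumptions matching the examples above.
import time

import requests

BASE_URL = "http://127.0.0.1:8000"

deadline = time.time() + 60
status = None
while time.time() < deadline:
    try:
        status = requests.get(f"{BASE_URL}/engine_check", timeout=5).json().get("status")
    except requests.exceptions.RequestException:
        status = None  # server not reachable yet
    if status == "Running":
        break
    time.sleep(1)

if status != "Running":
    raise RuntimeError("inference engine did not report 'Running' in time")

resp = requests.post(
    f"{BASE_URL}/chat",
    json={
        "messages": [
            {"role": "system", "content": "you are a helpful assistant"},
            {"role": "user", "content": "what is 1+1?"},
        ]
    },
)
print(resp.text)
```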
## 🌟 Acknowledgement
@@ -229,7 +277,7 @@ This project was written from scratch but we learned a lot from several other gr
- [vLLM](https://github.com/vllm-project/vllm)
- [flash-attention](https://github.com/Dao-AILab/flash-attention)
- [HuggingFace](https://huggingface.co)
If you wish to cite relevant research papers, you can find the reference below.
```bibtex

colossalai/inference/server/README.md (deleted, 27 lines)

@@ -1,27 +0,0 @@
# Online Service
Colossal-Inference supports fast-api based online service. Simple completion and chat are both supported. Follow the commands below and
you can simply construct a server with both completion and chat functionalities. For now we only support `Llama` model, we will fullfill
the blank quickly.
# Usage
```bash
# First, Lauch an API locally.
python3 -m colossalai.inference.server.api_server --model path of your llama2 model --chat_template "{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}"
# Second, you can turn to the page `http://127.0.0.1:8000/docs` to check the api
# For completion service, you can invoke it
curl -X POST http://127.0.0.1:8000/completion -H 'Content-Type: application/json' -d '{"prompt":"hello, who are you? ","stream":"False"}'
# For chat service, you can invoke it
curl -X POST http://127.0.0.1:8000/completion -H 'Content-Type: application/json' -d '{"converation":
[{"role": "system", "content": "you are a helpful assistant"},
{"role": "user", "content": "what is 1+1?"},],
"stream": "False",}'
# If you just want to test a simple generation, turn to generate api
curl -X POST http://127.0.0.1:8000/generate -H 'Content-Type: application/json' -d '{"prompt":"hello, who are you? ","stream":"False"}'
```
We also support streaming output, simply change the `stream` to `True` in the request body.

colossalai/inference/server/api_server.py (21 changed lines)

@@ -30,7 +30,6 @@ from colossalai.inference.utils import find_available_ports
from colossalai.inference.core.async_engine import AsyncInferenceEngine, InferenceEngine # noqa
TIMEOUT_KEEP_ALIVE = 5 # seconds.
supported_models_dict = {"Llama_Models": ("llama2-7b",)}
prompt_template_choices = ["llama", "vicuna"]
async_engine = None
chat_serving = None
@@ -39,15 +38,25 @@ completion_serving = None
app = FastAPI()
-# NOTE: (CjhHa1) models are still under development, need to be updated
-@app.get("/models")
-def get_available_models() -> Response:
-    return JSONResponse(supported_models_dict)
+@app.get("/ping")
+def health_check() -> JSONResponse:
+    """Health check for the server."""
+    return JSONResponse({"status": "Healthy"})
+
+
+@app.get("/engine_check")
+def engine_check() -> JSONResponse:
+    """Check if the background loop is running."""
+    loop_status = async_engine.background_loop_status
+    if not loop_status:
+        return JSONResponse({"status": "Error"})
+    return JSONResponse({"status": "Running"})
@app.post("/generate") @app.post("/generate")
async def generate(request: Request) -> Response: async def generate(request: Request) -> Response:
"""Generate completion for the request. """Generate completion for the request.
NOTE: THIS API IS USED ONLY FOR TESTING, DO NOT USE THIS IF YOU ARE IN ACTUAL APPLICATION.
A request should be a JSON object with the following fields: A request should be a JSON object with the following fields:
- prompts: the prompts to use for the generation. - prompts: the prompts to use for the generation.
@@ -133,7 +142,7 @@ def add_engine_config(parser):
    # Parallel arguments not supported now
    # KV cache arguments
-    parser.add_argument("--block-size", type=int, default=16, choices=[8, 16, 32], help="token block size")
+    parser.add_argument("--block_size", type=int, default=16, choices=[16, 32], help="token block size")
    parser.add_argument("--max_batch_size", type=int, default=8, help="maximum number of batch size")

examples/inference/client/locustfile.py (9 changed lines)

@@ -20,7 +20,7 @@ class QuickstartUser(HttpUser):
        self.client.post(
            "/chat",
            json={
-                "converation": [
+                "messages": [
                    {"role": "system", "content": "you are a helpful assistant"},
                    {"role": "user", "content": "what is 1+1?"},
                ],
@@ -34,7 +34,7 @@ class QuickstartUser(HttpUser):
        self.client.post(
            "/chat",
            json={
-                "converation": [
+                "messages": [
                    {"role": "system", "content": "you are a helpful assistant"},
                    {"role": "user", "content": "what is 1+1?"},
                ],
@@ -42,6 +42,7 @@ class QuickstartUser(HttpUser):
            },
        )

    # offline-generation is only for showing the usage, it will never be used in actual serving.
    @tag("offline-generation")
    @task(5)
    def generate_streaming(self):
@@ -54,5 +55,5 @@ class QuickstartUser(HttpUser):
@tag("online-generation", "offline-generation") @tag("online-generation", "offline-generation")
@task @task
def get_models(self): def health_check(self):
self.client.get("/models") self.client.get("/ping")

requirements/requirements.txt (2 changed lines)

@@ -20,4 +20,6 @@ transformers==4.36.2
peft>=0.7.1
bitsandbytes>=0.39.0
rpyc==6.0.0
fastapi
uvicorn==0.29.0
galore_torch
