[vLLM Learning] API Client

vLLM is a framework designed to accelerate large language model inference. It achieves near-zero waste of KV cache memory, addressing the memory-management bottleneck of LLM serving.

More vLLM documentation and tutorials (in Chinese) are available at vllm.hyper.ai/

Source code: vllm-project/vllm
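
The listing below is the API client example bundled with the vLLM repository. To try it, start the demo server first, e.g. `python -m vllm.entrypoints.api_server --model facebook/opt-125m` (the model name here is only an illustration), then run the client script, e.g. `python api_client.py --stream`.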

```python
"""Example Python client for `vllm.entrypoints.api_server`
NOTE: The API server is used only for demonstration and simple performance
benchmarks. It is not intended for production use.
For production use, we recommend `vllm serve` and the OpenAI client API.
"""
"""用于 `vllm.entrypoints.api_server` 的 Python 客户端示例


注意:API 服务器仅用于演示和简单的性能基准测试。它并不适合用于生产环境。
对于生产环境,我们推荐使用 `vllm serve` 和 OpenAI 客户端 API。
"""


import argparse
import json
from typing import Iterable, List

import requests


def clear_line(n: int = 1) -> None:
    LINE_UP = '\033[1A'     # ANSI escape: move the cursor up one line
    LINE_CLEAR = '\x1b[2K'  # ANSI escape: clear the entire line
    for _ in range(n):
        print(LINE_UP, end=LINE_CLEAR, flush=True)


def post_http_request(prompt: str,
                      api_url: str,
                      n: int = 1,
                      stream: bool = False) -> requests.Response:
    headers = {"User-Agent": "Test Client"}
    pload = {
        "prompt": prompt,
        "n": n,                   # number of returned candidates (beam width)
        "use_beam_search": True,
        "temperature": 0.0,       # beam search is deterministic
        "max_tokens": 16,
        "stream": stream,
    }
    response = requests.post(api_url,
                             headers=headers,
                             json=pload,
                             stream=stream)
    return response


def get_streaming_response(response: requests.Response) -> Iterable[List[str]]:
    # The demo server delimits streamed JSON chunks with a NUL byte.
    for chunk in response.iter_lines(chunk_size=8192,
                                     decode_unicode=False,
                                     delimiter=b"\0"):
        if chunk:
            data = json.loads(chunk.decode("utf-8"))
            output = data["text"]
            yield output


def get_response(response: requests.Response) -> List[str]:
    data = json.loads(response.content)
    output = data["text"]
    return output


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", type=str, default="localhost")
    parser.add_argument("--port", type=int, default=8000)
    parser.add_argument("--n", type=int, default=4)
    parser.add_argument("--prompt", type=str, default="San Francisco is a")
    parser.add_argument("--stream", action="store_true")
    args = parser.parse_args()
    prompt = args.prompt
    api_url = f"http://{args.host}:{args.port}/generate"
    n = args.n
    stream = args.stream

 print(f"Prompt: {prompt!r}\n", flush=True)
    response = post_http_request(prompt, api_url, n, stream)

    if stream:
        num_printed_lines = 0
        for h in get_streaming_response(response):
            # Overwrite the previously printed candidates in place.
            clear_line(num_printed_lines)
            num_printed_lines = 0
            for i, line in enumerate(h):
                num_printed_lines += 1
                print(f"Beam candidate {i}: {line!r}", flush=True)
    else:
        output = get_response(response)
        for i, line in enumerate(output):
            print(f"Beam candidate {i}: {line!r}", flush=True)
```
"""Example Python client for `vllm.entrypoints.api_server`
NOTE: The API server is used only for demonstration and simple performance
benchmarks. It is not intended for production use.
For production use, we recommend `vllm serve` and the OpenAI client API.
"""
"""用于 `vllm.entrypoints.api_server` 的 Python 客户端示例


注意:API 服务器仅用于演示和简单的性能基准测试。它并不适合用于生产环境。
对于生产环境,我们推荐使用 `vllm serve` 和 OpenAI 客户端 API。
"""


import argparse
import json
from typing import Iterable, List

import requests


def clear_line(n: int = 1) -> None:
    LINE_UP = '\033[1A'
    LINE_CLEAR = '\x1b[2K'
 for _ in range(n):
 print(LINE_UP, end=LINE_CLEAR, flush=True)


def post_http_request(prompt: str,
                      api_url: str,
                      n: int = 1,
                      stream: bool = False) -> requests.Response:
    headers = {"User-Agent": "Test Client"}
    pload = {
 "prompt": prompt,
 "n": n,
 "use_beam_search": True,
 "temperature": 0.0,
 "max_tokens": 16,
 "stream": stream,
 }
    response = requests.post(api_url,
                             headers=headers,
                             json=pload,
                             stream=stream)
 return response


def get_streaming_response(response: requests.Response) -> Iterable[List[str]]:
 for chunk in response.iter_lines(chunk_size=8192,
                                     decode_unicode=False,
                                     delimiter=b"\0"):
 if chunk:
            data = json.loads(chunk.decode("utf-8"))
            output = data["text"]
 yield output


def get_response(response: requests.Response) -> List[str]:
    data = json.loads(response.content)
    output = data["text"]
 return output


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", type=str, default="localhost")
    parser.add_argument("--port", type=int, default=8000)
    parser.add_argument("--n", type=int, default=4)
    parser.add_argument("--prompt", type=str, default="San Francisco is a")
    parser.add_argument("--stream", action="store_true")
    args = parser.parse_args()
    prompt = args.prompt
    api_url = f"http://{args.host}:{args.port}/generate"
    n = args.n
    stream = args.stream

 print(f"Prompt: {prompt!r}\n", flush=True)
    response = post_http_request(prompt, api_url, n, stream)

 if stream:
        num_printed_lines = 0
 for h in get_streaming_response(response):
            clear_line(num_printed_lines)
            num_printed_lines = 0
 for i, line in enumerate(h):
                num_printed_lines += 1
 print(f"Beam candidate {i}: {line!r}", flush=True)
 else:
        output = get_response(response)
 for i, line in enumerate(output):
 print(f"Beam candidate {i}: {line!r}", flush=True)