【Learning vLLM】API Client

vLLM is a framework designed to accelerate large language model inference. It wastes almost no KV-cache memory, solving the memory-management bottleneck of LLM serving.

More vLLM documentation and tutorials (in Chinese) are available at → vllm.hyper.ai/

Source code: vllm-project/vllm
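
The script below is a small `requests`-based client for the demo server. To try it, start the server first; assuming a locally available model, a typical invocation is `python -m vllm.entrypoints.api_server --model facebook/opt-125m --host localhost --port 8000` (the model name here is only an example). With the server up, running the script (saved, say, as `api_client.py`) posts a prompt to the `/generate` endpoint with beam search enabled and prints the returned candidates; with `--stream` it rewrites the candidates in place as tokens arrive.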

```python
"""Example Python client for `vllm.entrypoints.api_server`
NOTE: The API server is used only for demonstration and simple performance
benchmarks. It is not intended for production use.
For production use, we recommend `vllm serve` and the OpenAI client API.
"""
"""用于 `vllm.entrypoints.api_server` 的 Python 客户端示例


注意:API 服务器仅用于演示和简单的性能基准测试。它并不适合用于生产环境。
对于生产环境,我们推荐使用 `vllm serve` 和 OpenAI 客户端 API。
"""


import argparse
import json
from typing import Iterable, List

import requests


def clear_line(n: int = 1) -> None:
    # Move the cursor up and erase `n` previously printed terminal lines.
    LINE_UP = '\033[1A'     # ANSI escape: cursor up one line
    LINE_CLEAR = '\x1b[2K'  # ANSI escape: clear the entire line
    for _ in range(n):
        print(LINE_UP, end=LINE_CLEAR, flush=True)


def post_http_request(prompt: str,
                      api_url: str,
                      n: int = 1,
                      stream: bool = False) -> requests.Response:
    headers = {"User-Agent": "Test Client"}
    pload = {
        "prompt": prompt,
        "n": n,  # number of beam candidates to return
        "use_beam_search": True,
        "temperature": 0.0,
        "max_tokens": 16,
        "stream": stream,
    }
    response = requests.post(api_url,
                             headers=headers,
                             json=pload,
                             stream=stream)
    return response


def get_streaming_response(response: requests.Response) -> Iterable[List[str]]:
    # The demo server delimits streamed JSON chunks with a NUL byte.
    for chunk in response.iter_lines(chunk_size=8192,
                                     decode_unicode=False,
                                     delimiter=b"\0"):
        if chunk:
            data = json.loads(chunk.decode("utf-8"))
            output = data["text"]
            yield output


def get_response(response: requests.Response) -> List[str]:
    data = json.loads(response.content)
    output = data["text"]
    return output


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", type=str, default="localhost")
    parser.add_argument("--port", type=int, default=8000)
    parser.add_argument("--n", type=int, default=4)
    parser.add_argument("--prompt", type=str, default="San Francisco is a")
    parser.add_argument("--stream", action="store_true")
    args = parser.parse_args()
    prompt = args.prompt
    api_url = f"http://{args.host}:{args.port}/generate"
    n = args.n
    stream = args.stream

    print(f"Prompt: {prompt!r}\n", flush=True)
    response = post_http_request(prompt, api_url, n, stream)

    if stream:
        num_printed_lines = 0
        for h in get_streaming_response(response):
            # Erase the candidates printed for the previous chunk, then
            # print the refreshed list in place.
            clear_line(num_printed_lines)
            num_printed_lines = 0
            for i, line in enumerate(h):
                num_printed_lines += 1
                print(f"Beam candidate {i}: {line!r}", flush=True)
    else:
        output = get_response(response)
        for i, line in enumerate(output):
            print(f"Beam candidate {i}: {line!r}", flush=True)
```
"""Example Python client for `vllm.entrypoints.api_server`
NOTE: The API server is used only for demonstration and simple performance
benchmarks. It is not intended for production use.
For production use, we recommend `vllm serve` and the OpenAI client API.
"""
"""用于 `vllm.entrypoints.api_server` 的 Python 客户端示例


注意:API 服务器仅用于演示和简单的性能基准测试。它并不适合用于生产环境。
对于生产环境,我们推荐使用 `vllm serve` 和 OpenAI 客户端 API。
"""


import argparse
import json
from typing import Iterable, List

import requests


def clear_line(n: int = 1) -> None:
    LINE_UP = '\033[1A'
    LINE_CLEAR = '\x1b[2K'
 for _ in range(n):
 print(LINE_UP, end=LINE_CLEAR, flush=True)


def post_http_request(prompt: str,
                      api_url: str,
                      n: int = 1,
                      stream: bool = False) -> requests.Response:
    headers = {"User-Agent": "Test Client"}
    pload = {
 "prompt": prompt,
 "n": n,
 "use_beam_search": True,
 "temperature": 0.0,
 "max_tokens": 16,
 "stream": stream,
 }
    response = requests.post(api_url,
                             headers=headers,
                             json=pload,
                             stream=stream)
 return response


def get_streaming_response(response: requests.Response) -> Iterable[List[str]]:
 for chunk in response.iter_lines(chunk_size=8192,
                                     decode_unicode=False,
                                     delimiter=b"\0"):
 if chunk:
            data = json.loads(chunk.decode("utf-8"))
            output = data["text"]
 yield output


def get_response(response: requests.Response) -> List[str]:
    data = json.loads(response.content)
    output = data["text"]
 return output


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", type=str, default="localhost")
    parser.add_argument("--port", type=int, default=8000)
    parser.add_argument("--n", type=int, default=4)
    parser.add_argument("--prompt", type=str, default="San Francisco is a")
    parser.add_argument("--stream", action="store_true")
    args = parser.parse_args()
    prompt = args.prompt
    api_url = f"http://{args.host}:{args.port}/generate"
    n = args.n
    stream = args.stream

 print(f"Prompt: {prompt!r}\n", flush=True)
    response = post_http_request(prompt, api_url, n, stream)

 if stream:
        num_printed_lines = 0
 for h in get_streaming_response(response):
            clear_line(num_printed_lines)
            num_printed_lines = 0
 for i, line in enumerate(h):
                num_printed_lines += 1
 print(f"Beam candidate {i}: {line!r}", flush=True)
 else:
        output = get_response(response)
 for i, line in enumerate(output):
 print(f"Beam candidate {i}: {line!r}", flush=True)