A General-Purpose LLM Benchmark Report Tool

Introduction

Before a deployed large language model goes live, we need a load test report that covers the industry-standard benchmark metrics and is emitted in a structured form. Below we write a general-purpose benchmarking script to produce such a report.

Core Benchmark Metrics

| Metric category | Metric | Description |
| --- | --- | --- |
| Throughput | Tokens per second (TPS) | Tokens generated per second; measures overall system capacity |
| Latency | Time to First Token (TTFT) | Latency to the first token (ms); drives perceived responsiveness |
| Latency | Inter-token Latency | Average latency between consecutive tokens (ms) |
| Latency | End-to-End Latency (E2E) | Total request-to-response time (ms) |
| Concurrency | Max Concurrent Requests | Maximum number of concurrent requests the system can sustain |
| Resource utilization | GPU Memory Usage, GPU Util%, CPU%, RAM | Auxiliary data for bottleneck analysis (see the sampling sketch below the table) |
| Error rate | Error Rate (%) | Fraction of failed requests (timeouts, 5xx, etc.) |
| Tail latency | P95 TTFT, P99 E2E | Tail latencies; reflect the stability of service quality |
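
The GPU/CPU metrics in the table are not collected by the benchmark script itself. One option is a small sidecar sampler run alongside the benchmark; the sketch below is illustrative only, assumes `nvidia-smi` is on the PATH and the `psutil` package is installed, and the `resource_samples.json` file name is just a placeholder.

python
# resource_sampler.py -- illustrative sidecar sampler, run next to the benchmark
import json
import subprocess
import time

import psutil  # assumed installed: pip install psutil

def sample_once():
    # Query per-GPU utilization and memory via nvidia-smi (CSV, no header/units)
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True
    ).stdout.strip().splitlines()
    gpus = [dict(zip(["gpu_util_pct", "gpu_mem_mib"],
                     map(float, line.split(", "))))
            for line in out]
    return {
        "ts": time.time(),
        "gpus": gpus,
        "cpu_pct": psutil.cpu_percent(interval=None),
        "ram_pct": psutil.virtual_memory().percent,
    }

if __name__ == "__main__":
    samples = []
    try:
        while True:            # stop with Ctrl+C once the benchmark finishes
            samples.append(sample_once())
            time.sleep(1.0)    # 1 s sampling interval
    except KeyboardInterrupt:
        with open("resource_samples.json", "w") as f:
            json.dump(samples, f, indent=2)

Correlating these samples with the benchmark's wall-clock window is usually enough to tell whether the bottleneck is GPU memory, GPU compute, or the client side.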

Benchmark Tool Design Principles

  • Protocol compatibility: supports the OpenAI API format (compatible with mainstream inference backends)
  • Asynchronous high concurrency: efficient concurrency via asyncio + aiohttp
  • Dynamic load: fixed concurrency, stepped ramp-up, and RPS control (see the sketch after this list)
  • Structured results: JSON output plus a Markdown-style report
  • Configurable: model endpoint, prompts, concurrency, etc. via a YAML/JSON config
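
The script below implements the fixed-concurrency mode. RPS control can be layered on top by releasing requests on a fixed schedule instead of keeping a constant number of workers busy. The sketch below is illustrative; `send_request` stands in for any coroutine that performs one request and is not the exact method defined later.

python
# rps_load.py -- illustrative sketch of RPS-controlled load, not part of LLMBenchmark
import asyncio
import time

async def run_at_fixed_rps(send_request, prompts, target_rps: float):
    """Release one request every 1/target_rps seconds without waiting for replies."""
    interval = 1.0 / target_rps
    start = time.perf_counter()
    tasks = []
    for i, prompt in enumerate(prompts):
        # Sleep until this request's scheduled release time
        delay = start + i * interval - time.perf_counter()
        if delay > 0:
            await asyncio.sleep(delay)
        tasks.append(asyncio.create_task(send_request(prompt)))
    # Collect all in-flight results once every request has been released
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    async def fake_request(prompt):        # stand-in for a real HTTP call
        await asyncio.sleep(0.3)
        return len(prompt)

    results = asyncio.run(run_at_fixed_rps(fake_request, ["hi"] * 20, target_rps=10))
    print(f"completed {len(results)} requests")

A stepped ramp-up mode can be built on the same idea by sweeping `target_rps` (or the concurrency value) over several stages and generating one report per stage.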

Complete Benchmark Code (Python)

python
# llm_benchmark.py
import asyncio
import time
import json
import argparse
import numpy as np
import pandas as pd
from typing import List, Dict, Any
import aiohttp
from tqdm.asyncio import tqdm
import yaml

class LLMBenchmark:
    def __init__(self, config_path: str):
        with open(config_path, 'r') as f:
            self.config = yaml.safe_load(f)
        self.base_url = self.config['base_url'].rstrip('/')
        self.model = self.config['model']
        self.headers = {"Content-Type": "application/json"}
        if 'api_key' in self.config:
            self.headers["Authorization"] = f"Bearer {self.config['api_key']}"

        # Benchmark parameters
        self.concurrency = self.config['concurrency']
        self.total_requests = self.config['total_requests']
        self.timeout = self.config.get('timeout', 120)
        self.max_tokens = self.config.get('max_tokens', 512)
        self.temperature = self.config.get('temperature', 0.0)

        # Result storage
        self.results: List[Dict] = []
        # Wall-clock start/end of the whole run (set by run(); used for system throughput)
        self.bench_start = None
        self.bench_end = None

    async def send_request(self, session: aiohttp.ClientSession, prompt: str):
        payload = {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": self.max_tokens,
            "temperature": self.temperature,
            "stream": False
        }
        start_time = time.time()
        try:
            async with session.post(
                f"{self.base_url}/v1/chat/completions",
                json=payload,
                headers=self.headers,
                timeout=aiohttp.ClientTimeout(total=self.timeout)
            ) as resp:
                if resp.status != 200:
                    error_text = await resp.text()
                    print(f"Error: {resp.status} - {error_text}")
                    return {
                        "success": False,
                        "error": f"HTTP {resp.status}",
                        "e2e_latency": time.time() - start_time
                    }

                data = await resp.json()
                # Measure end-to-end latency after the full body has been read,
                # not when the response headers arrive
                response_time = time.time()
                completion = data['choices'][0]['message']['content']
                prompt_tokens = data['usage']['prompt_tokens']
                completion_tokens = data['usage']['completion_tokens']
                total_tokens = data['usage']['total_tokens']

                ttft = None  # TTFT is unavailable without streaming; set stream=True and parse SSE to measure it
                e2e_latency = response_time - start_time
                tps = completion_tokens / e2e_latency if e2e_latency > 0 else 0

                return {
                    "success": True,
                    "prompt_tokens": prompt_tokens,
                    "completion_tokens": completion_tokens,
                    "total_tokens": total_tokens,
                    "e2e_latency": e2e_latency,
                    "ttft": ttft,  # 可扩展为流式实现
                    "tps": tps,
                    "error": None
                }

        except Exception as e:
            return {
                "success": False,
                "error": str(e),
                "e2e_latency": time.time() - start_time
            }

    async def worker(self, session: aiohttp.ClientSession, prompts: List[str]):
        for prompt in prompts:
            result = await self.send_request(session, prompt)
            self.results.append(result)

    async def run(self):
        # Prepare prompts (load from a file or generate)
        prompts = self._generate_prompts()
        requests_per_worker = len(prompts) // self.concurrency
        remainder = len(prompts) % self.concurrency
        tasks = []
        self.bench_start = time.time()  # wall-clock start of the whole run
        async with aiohttp.ClientSession() as session:
            start_idx = 0
            for i in range(self.concurrency):
                # Split prompts evenly; the first `remainder` workers take one extra
                count = requests_per_worker + (1 if i < remainder else 0)
                task_prompts = prompts[start_idx:start_idx + count]
                start_idx += count
                tasks.append(self.worker(session, task_prompts))
            await tqdm.gather(*tasks, desc="Running benchmark")
        self.bench_end = time.time()  # wall-clock end, used for system throughput

    def _generate_prompts(self) -> List[str]:
        # Replace with prompts loaded from a file or a real dataset as needed
        base_prompt = "Explain the theory of relativity in simple terms."
        return [base_prompt] * self.total_requests

    def generate_report(self) -> Dict[str, Any]:
        df = pd.DataFrame(self.results)
        total_requests = len(df)
        successful = df[df['success'] == True]
        failed = df[df['success'] == False]

        report = {
            "model": self.model,
            "total_requests": total_requests,
            "successful_requests": len(successful),
            "failed_requests": len(failed),
            "error_rate": len(failed) / total_requests * 100,
            "metrics": {}
        }

        if not successful.empty:
            e2e_latencies = successful['e2e_latency'].values
            tps_values = successful['tps'].values
            completion_tokens = successful['completion_tokens'].values

            report["metrics"] = {
                "throughput": {
                    "avg_tokens_per_sec": float(np.sum(completion_tokens) / np.sum(e2e_latencies)),
                    "avg_tps_per_request": float(np.mean(tps_values))
                },
                "latency_ms": {
                    "avg_e2e": float(np.mean(e2e_latencies) * 1000),
                    "p50_e2e": float(np.percentile(e2e_latencies, 50) * 1000),
                    "p95_e2e": float(np.percentile(e2e_latencies, 95) * 1000),
                    "p99_e2e": float(np.percentile(e2e_latencies, 99) * 1000),
                    "max_e2e": float(np.max(e2e_latencies) * 1000)
                },
                "token_stats": {
                    "avg_prompt_tokens": float(successful['prompt_tokens'].mean()),
                    "avg_completion_tokens": float(successful['completion_tokens'].mean()),
                    "avg_total_tokens": float(successful['total_tokens'].mean())
                }
            }

        # Error details
        if not failed.empty:
            report["errors"] = failed['error'].value_counts().to_dict()

        return report

    def save_report(self, output_path: str):
        report = self.generate_report()
        with open(output_path, 'w') as f:
            json.dump(report, f, indent=2)
        self._print_markdown_summary(report)

    def _print_markdown_summary(self, report: Dict):
        print("\n" + "="*60)
        print("📊 LLM 压测报告摘要")
        print("="*60)
        print(f"- 模型: `{report['model']}`")
        print(f"- 总请求数: {report['total_requests']}")
        print(f"- 成功请求: {report['successful_requests']}")
        print(f"- 失败请求: {report['failed_requests']}")
        print(f"- 错误率: {report['error_rate']:.2f}%")
        if 'metrics' in report and report['metrics']:
            m = report['metrics']
            print(f"\n📈 吞吐量:")
            print(f"  - 系统总 TPS: {m['throughput']['avg_tokens_per_sec']:.2f} tokens/s")
            print(f"  - 单请求平均 TPS: {m['throughput']['avg_tps_per_request']:.2f} tokens/s")
            print(f"\n⏱️ 延迟 (ms):")
            lat = m['latency_ms']
            print(f"  - 平均 E2E: {lat['avg_e2e']:.2f}")
            print(f"  - P95 E2E: {lat['p95_e2e']:.2f}")
            print(f"  - P99 E2E: {lat['p99_e2e']:.2f}")
            print(f"  - 最大 E2E: {lat['max_e2e']:.2f}")
            print(f"\n🔤 Token 统计:")
            tok = m['token_stats']
            print(f"  - 平均输入: {tok['avg_prompt_tokens']:.1f}")
            print(f"  - 平均输出: {tok['avg_completion_tokens']:.1f}")
        print("="*60)


# Example config file: config.yaml
"""
base_url: http://localhost:8000
model: qwen3-30b-a3b-instruct-2507
concurrency: 16
total_requests: 200
max_tokens: 512
temperature: 0.0
# api_key: your-api-key-if-needed
"""

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", type=str, default="config.yaml", help="配置文件路径")
    parser.add_argument("--output", type=str, default="benchmark_report.json", help="输出报告路径")
    args = parser.parse_args()

    bench = LLMBenchmark(args.config)
    asyncio.run(bench.run())
    bench.save_report(args.output)
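
The non-streaming request above cannot observe TTFT or inter-token latency (the `ttft` field stays None). Below is a rough sketch of how a streaming variant could measure both, assuming the backend supports OpenAI-style `stream: true` SSE responses; the parsing is simplified, is not part of the class above, and treats every content-bearing chunk as one token, which is only an approximation.

python
# streaming_ttft.py -- illustrative streaming variant for TTFT / inter-token latency
import json
import time

import aiohttp

async def send_streaming_request(session: aiohttp.ClientSession, url: str,
                                 headers: dict, payload: dict):
    """Measure TTFT and mean inter-token latency from an SSE stream (sketch)."""
    payload = {**payload, "stream": True}
    start = time.perf_counter()
    chunk_times = []
    async with session.post(url, json=payload, headers=headers) as resp:
        async for raw_line in resp.content:          # iterate SSE lines
            line = raw_line.decode("utf-8").strip()
            if not line.startswith("data:"):
                continue
            data = line[len("data:"):].strip()
            if data == "[DONE]":
                break
            chunk = json.loads(data)
            delta = chunk["choices"][0].get("delta", {})
            if delta.get("content"):
                chunk_times.append(time.perf_counter())
    ttft = (chunk_times[0] - start) if chunk_times else None
    # Approximation: one SSE content chunk counted as one token
    itl = ((chunk_times[-1] - chunk_times[0]) / (len(chunk_times) - 1)
           if len(chunk_times) > 1 else None)
    return {"ttft": ttft, "inter_token_latency": itl,
            "e2e_latency": time.perf_counter() - start}

Swapping this coroutine into `send_request` (and recording the extra fields in the result dict) would let the report fill the P95 TTFT column from the metrics table.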

Usage

1. Create the config file config.yaml
yaml
base_url: http://your-llm-server:8000
model: qwen3-30b-a3b-instruct-2507
concurrency: 32
total_requests: 500
max_tokens: 256
temperature: 0.0
2. Run the benchmark
bash
python llm_benchmark.py --config config.yaml --output qwen3_30b_report.json
3. Output
  • qwen3_30b_report.json: full structured data, ready for automated analysis (see the comparison sketch below)
  • A Markdown-style summary printed to the console, convenient for reporting
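
Because the JSON report is structured, several runs (different concurrency levels, model builds, or hardware) can be compared automatically. A small illustrative helper, assuming report files produced by this tool; the file names below are placeholders.

python
# compare_reports.py -- illustrative comparison of several benchmark reports
import json

import pandas as pd

def compare_reports(paths):
    """Flatten a few headline metrics from multiple report files into one table."""
    rows = []
    for path in paths:
        with open(path) as f:
            r = json.load(f)
        m = r.get("metrics", {})
        rows.append({
            "report": path,
            "model": r["model"],
            "error_rate_pct": r["error_rate"],
            "system_tps": m.get("throughput", {}).get("system_tokens_per_sec"),
            "p95_e2e_ms": m.get("latency_ms", {}).get("p95_e2e"),
            "p99_e2e_ms": m.get("latency_ms", {}).get("p99_e2e"),
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # placeholder file names for two runs at different concurrency levels
    df = compare_reports(["qwen3_30b_c16.json", "qwen3_30b_c32.json"])
    print(df.to_string(index=False))

Keeping one report per concurrency level and diffing them this way makes it easy to spot the point where P99 latency starts to degrade faster than throughput grows.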