This chapter dissects the Python tool service of the JoyAgent-JDGenie project: from the FastAPI architecture to the core tool engine, from the code interpreter to deep search, and from the file management system to performance optimization strategies, covering the server-side Python techniques behind a modern multi-agent system.
Introduction: The Technical Value and Architectural Role of the Python Service
In the JoyAgent-JDGenie multi-agent system, the Python tool service plays the role of the "capability execution layer". Beyond raw tool execution, it must provide high-performance asynchronous processing, a secure code execution environment, an intelligent search engine, and reliable file management. Built on a modern Python stack of FastAPI, smolagents, and LiteLLM, it forms an efficient, secure, and extensible tool service.
This chapter follows an overview-details-summary structure: it first surveys the overall architecture of the Python tool service, then analyzes how the core tools are implemented, and finally distills the design principles and best practices.
Part 1: FastAPI Service Architecture Overview 📐
1.1 Technology Stack and Architectural Philosophy
1.1.1 A Modern Python Technology Stack
The JoyAgent-JDGenie Python tool service is built on a leading asynchronous web framework stack:
```toml
[project]
name = "python"
version = "0.1.0"
description = "Genie Tools"
readme = "README.md"
requires-python = ">=3.11,<4.0"
dependencies = [
    "aiosqlite>=0.21.0",
    "beautifulsoup4>=4.13.4",
    "fastapi>=0.115.14",
    "greenlet>=3.2.3",
    "json-repair>=0.47.6",
    "litellm>=1.74.0.post1",
    "loguru>=0.7.3",
    "matplotlib>=3.10.3",
    "openai>=1.93.0",
    "openpyxl>=3.1.5",
    "pandas>=2.3.0",
    "pydantic>=2.11.7",
    "pyfiglet>=1.0.3",
    "python-multipart>=0.0.20",
    "smolagents>=1.19.0",
    "sqlmodel>=0.0.24",
    "sse-starlette>=2.4.1",
    "uvicorn>=0.35.0",
]
```
Key considerations behind the technology choices:
- FastAPI: a modern asynchronous web framework with automatic API documentation and high performance
- smolagents: a code-interpreter framework providing a sandboxed Python execution environment
- LiteLLM: a unified LLM interface supporting multiple model providers
- Pydantic: data validation and serialization, ensuring type-safe API contracts
- SSE-Starlette: Server-Sent Events support for real-time streaming responses
1.1.2 Service Architecture Layout
```bash
genie-tool/
├── genie_tool/
│   ├── api/                      # API layer
│   │   ├── __init__.py           # route aggregation
│   │   ├── tool.py               # tool APIs
│   │   └── file_manage.py        # file-management APIs
│   ├── tool/                     # tool implementations
│   │   ├── code_interpreter.py   # code interpreter
│   │   ├── deepsearch.py         # deep search
│   │   ├── report.py             # report generation
│   │   ├── ci_agent.py           # CI agent
│   │   └── search_component/     # search components
│   ├── model/                    # data models
│   │   ├── protocal.py           # API protocol models
│   │   ├── context.py            # context models
│   │   └── document.py           # document models
│   ├── util/                     # utilities
│   │   ├── file_util.py          # file helpers
│   │   ├── llm_util.py           # LLM helpers
│   │   └── log_util.py           # logging helpers
│   └── db/                       # data access layer
├── server.py                     # service entry point
└── pyproject.toml                # project configuration
```
1.1.3 FastAPI Application Entry Point
The service entry point follows a modular design pattern:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/7
# =====================
import os
from optparse import OptionParser
from pathlib import Path

import uvicorn
from dotenv import load_dotenv
from fastapi import FastAPI
from loguru import logger
from starlette.middleware.cors import CORSMiddleware

from genie_tool.util.middleware_util import UnknownException, HTTPProcessTimeMiddleware

load_dotenv()


def print_logo():
    from pyfiglet import Figlet

    f = Figlet(font="slant")
    print(f.renderText("Genie Tool"))


def log_setting():
    log_path = os.getenv("LOG_PATH", Path(__file__).resolve().parent / "logs" / "server.log")
    log_format = "{time:YYYY-MM-DD HH:mm:ss.SSS} {level} {module}.{function} {message}"
    logger.add(log_path, format=log_format, rotation="200 MB")


def create_app() -> FastAPI:
    _app = FastAPI(
        on_startup=[log_setting, print_logo]
    )
    register_middleware(_app)
    register_router(_app)
    return _app


def register_middleware(app: FastAPI):
    app.add_middleware(UnknownException)
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],
        allow_methods=["*"],
        allow_headers=["*"],
        allow_credentials=True,
    )
    app.add_middleware(HTTPProcessTimeMiddleware)


def register_router(app: FastAPI):
    from genie_tool.api import api_router

    app.include_router(api_router)


app = create_app()

if __name__ == "__main__":
    parser = OptionParser()
    parser.add_option("--host", dest="host", type="string", default="0.0.0.0")
    parser.add_option("--port", dest="port", type="int", default=1601)
    parser.add_option("--workers", dest="workers", type="int", default=10)
    (options, args) = parser.parse_args()
    print(f"Start params: {options}")

    uvicorn.run(
        app="server:app",
        host=options.host,
        port=options.port,
        workers=options.workers,
        reload=os.getenv("ENV", "local") == "local",
    )
```
Highlights of the service design:
- Modular architecture: a clean layered design that is easy to maintain and extend
- Middleware: global exception handling, CORS support, and request processing-time monitoring
- Configuration-driven: flexible setup via environment variables and command-line arguments
- Production-ready: supports multi-worker deployment and log rotation
1.1.4 API Route Architecture
The routing system uses a modular, tag-based design:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/7
# =====================
from fastapi import APIRouter

from .tool import router as tool_router
from .file_manage import router as file_router

api_router = APIRouter(prefix="/v1")
api_router.include_router(tool_router, prefix="/tool", tags=["tool"])
api_router.include_router(file_router, prefix="/file_tool", tags=["file_manage"])
```
Routing design features:
- Versioning: the /v1 prefix enables API version management
- Functional grouping: tool APIs and file-management APIs are kept separate
- Tagging: simplifies automatic API documentation generation and categorization
1.1.5 Middleware System Design
The middleware layer provides solid request handling and monitoring:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/7
# =====================
import time
import traceback
import uuid
from typing import Callable

from fastapi.routing import APIRoute
from loguru import logger
from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.requests import Request
from starlette.responses import Response

from genie_tool.model.context import RequestIdCtx
from genie_tool.util.log_util import AsyncTimer


class UnknownException(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:
        try:
            return await call_next(request)
        except Exception as e:
            logger.error(f"{RequestIdCtx.request_id} {request.method} {request.url.path} error={traceback.format_exc()}")
            return Response(content=f"Unexpected error: {e}", status_code=500)


class RequestHandlerRoute(APIRoute):
    def get_route_handler(self) -> Callable:
        original_route_handler = super().get_route_handler()

        async def custom_route_handler(request: Request) -> Response:
            try:
                content_type = request.headers.get('content-type', '')
                if request.method == "POST" and not content_type.startswith('multipart/form-data'):
                    body = (await request.body()).decode("utf-8")
                    logger.info(f"{RequestIdCtx.request_id} {request.method} {request.url.path} body={body}")
            except Exception as e:
                logger.warning(f"{RequestIdCtx.request_id} {request.method} {request.url.path} failed. error={e}")
            return await original_route_handler(request)

        return custom_route_handler


class HTTPProcessTimeMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        RequestIdCtx.request_id = str(uuid.uuid4())
        # ... remaining handling logic
```
Essentials of the middleware design:
- Exception capture: global exception handling keeps the service stable
- Request logging: detailed records of request parameters and response times
- Request tracing: UUID generation and context propagation
- Performance monitoring: per-request processing-time statistics
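The body of `HTTPProcessTimeMiddleware.dispatch` is elided above ("remaining handling logic"). Based on the request-id assignment that is shown, a plausible framework-free sketch of the pattern is: assign a fresh id, time the downstream handler, and report the cost in milliseconds. Here `process_time_dispatch` and `fake_handler` are illustrative names, not project code, and `request_id_var` stands in for the project's `RequestIdCtx`:

```python
import asyncio
import contextvars
import time
import uuid

# Stand-in for the project's RequestIdCtx (which lives in genie_tool.model.context);
# contextvars keeps the id per-task under asyncio, so concurrent requests don't clash.
request_id_var = contextvars.ContextVar("request_id", default="-")

async def process_time_dispatch(call_next):
    # Assign a fresh request id, time the downstream handler, and
    # compute the cost in milliseconds, as the middleware plausibly does.
    request_id_var.set(str(uuid.uuid4()))
    start = time.time()
    response = await call_next()
    cost_ms = int((time.time() - start) * 1000)
    return response, request_id_var.get(), cost_ms

async def fake_handler():
    # Simulated downstream endpoint.
    await asyncio.sleep(0.01)
    return "ok"

response, rid, cost_ms = asyncio.run(process_time_dispatch(fake_handler))
```

In the real middleware the cost would be written to the loguru log (tagged with the request id) rather than returned.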
Part 2: Deep Dive into the Tool Execution Engine 🔧
2.1 Data Models and API Protocol Design
2.1.1 API Request Models
The tool service defines a complete family of request models:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/7
# =====================
import hashlib
from typing import Optional, Literal, List

from pydantic import BaseModel, Field, computed_field


class StreamMode(BaseModel):
    """Streaming mode

    args:
        mode: streaming mode; general = plain streaming, token = flush every N tokens, time = flush every N seconds
        token: in token mode, how many tokens to accumulate before each flush
        time: in time mode, how many seconds between flushes
    """
    mode: Literal["general", "token", "time"] = Field(default="general")
    token: Optional[int] = Field(default=5, ge=1)
    time: Optional[int] = Field(default=5, ge=1)


class CIRequest(BaseModel):
    request_id: str = Field(alias="requestId", description="Request ID")
    task: Optional[str] = Field(default=None, description="Task")
    file_names: Optional[List[str]] = Field(default=[], alias="fileNames", description="List of input files")
    file_name: Optional[str] = Field(default=None, alias="fileName", description="Name of the generated output file")
    file_description: Optional[str] = Field(default=None, alias="fileDescription", description="Description of the generated output file")
    stream: bool = True
    stream_mode: Optional[StreamMode] = Field(default=StreamMode(), alias="streamMode", description="Streaming mode")
    origin_file_names: Optional[List[dict]] = Field(default=None, alias="originFileNames", description="Original file information")


class ReportRequest(CIRequest):
    file_type: Literal["html", "markdown", "ppt"] = Field("html", alias="fileType", description="File type of the generated report")


class DeepSearchRequest(BaseModel):
    request_id: str = Field(description="Request ID")
    query: str = Field(description="Search query")
    max_loop: Optional[int] = Field(default=1, alias="maxLoop", description="Maximum number of search loops")
    # bing, jina, sogou
    search_engines: List[str] = Field(default=[], description="Which search engines to use")
    stream: bool = Field(default=True, description="Whether to stream the response")
    stream_mode: Optional[StreamMode] = Field(default=StreamMode(), alias="streamMode", description="Streaming mode")
```
Data model design highlights:
- Type safety: strict validation through Pydantic
- Alias support: maps frontend camelCase names onto Python snake_case fields
- Stream control: several streaming output modes are supported
- Documentation: field descriptions feed the auto-generated API docs
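The alias mechanism is worth making concrete: `Field(alias="requestId")` lets the frontend send camelCase JSON while the Python model exposes snake_case attributes. The underlying name mapping can be illustrated with a small stdlib-only helper (`camel_to_snake` is a hypothetical illustration; Pydantic performs this mapping declaratively):

```python
import re

def camel_to_snake(name: str) -> str:
    # Convert a frontend-style camelCase key (e.g. "requestId") to the
    # snake_case field name used in the Pydantic models (request_id).
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

# A payload shaped like the JSON a CIRequest would receive.
payload = {"requestId": "r-1", "fileNames": ["a.csv"], "streamMode": {"mode": "token"}}
converted = {camel_to_snake(k): v for k, v in payload.items()}
```

With Pydantic itself, `CIRequest.model_validate(payload)` would perform this conversion and validation in one step.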
2.1.2 Tool API Route Implementation
The tool API implements rich asynchronous handling and streaming responses:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/7
# =====================
import json
import os
import time

from fastapi import APIRouter
from sse_starlette import ServerSentEvent, EventSourceResponse

from genie_tool.model.code import ActionOutput, CodeOuput
from genie_tool.model.protocal import CIRequest, ReportRequest, DeepSearchRequest
from genie_tool.util.file_util import upload_file
from genie_tool.tool.report import report
from genie_tool.tool.code_interpreter import code_interpreter_agent
from genie_tool.util.middleware_util import RequestHandlerRoute
from genie_tool.tool.deepsearch import DeepSearch

router = APIRouter(route_class=RequestHandlerRoute)


@router.post("/code_interpreter")
async def post_code_interpreter(
    body: CIRequest,
):
    # resolve relative file names into file-server preview URLs
    if body.file_names:
        for idx, f_name in enumerate(body.file_names):
            if not f_name.startswith("/") and not f_name.startswith("http"):
                body.file_names[idx] = f"{os.getenv('FILE_SERVER_URL')}/preview/{body.request_id}/{f_name}"

    async def _stream():
        acc_content = ""
        acc_token = 0
        acc_time = time.time()
        async for chunk in code_interpreter_agent(
            task=body.task,
            file_names=body.file_names,
            request_id=body.request_id,
            stream=True,
        ):
            if isinstance(chunk, CodeOuput):
                yield ServerSentEvent(
                    data=json.dumps(
                        {
                            "requestId": body.request_id,
                            "code": chunk.code,
                            "fileInfo": chunk.file_list,
                            "isFinal": False,
                        },
                        ensure_ascii=False,
                    )
                )
            elif isinstance(chunk, ActionOutput):
                yield ServerSentEvent(
                    data=json.dumps(
                        {
                            "requestId": body.request_id,
                            "codeOutput": chunk.content,
                            "fileInfo": chunk.file_list,
                            "isFinal": True,
                        },
                        ensure_ascii=False,
                    )
                )
                yield ServerSentEvent(data="[DONE]")
            else:
                acc_content += chunk
                acc_token += 1
                if body.stream_mode.mode == "general":
                    yield ServerSentEvent(
                        data=json.dumps(
                            {"requestId": body.request_id, "data": chunk, "isFinal": False},
                            ensure_ascii=False,
                        )
                    )
                elif body.stream_mode.mode == "token":
                    if acc_token >= body.stream_mode.token:
                        yield ServerSentEvent(
                            data=json.dumps(
                                {
                                    "requestId": body.request_id,
                                    "data": acc_content,
                                    "isFinal": False,
                                },
                                ensure_ascii=False,
                            )
                        )
                        acc_token = 0
                        acc_content = ""
                elif body.stream_mode.mode == "time":
                    if time.time() - acc_time > body.stream_mode.time:
                        yield ServerSentEvent(
                            data=json.dumps(
                                {
                                    "requestId": body.request_id,
                                    "data": acc_content,
                                    "isFinal": False,
                                },
                                ensure_ascii=False,
                            )
                        )
                        acc_time = time.time()
                        acc_content = ""
        if body.stream_mode.mode in ["time", "token"] and acc_content:
            yield ServerSentEvent(
                data=json.dumps(
                    {
                        "requestId": body.request_id,
                        "data": acc_content,
                        "isFinal": False,
                    },
                    ensure_ascii=False,
                )
            )

    if body.stream:
        return EventSourceResponse(
            _stream(),
            ping_message_factory=lambda: ServerSentEvent(data="heartbeat"),
            ping=15,
        )
    else:
        content = ""
        async for chunk in code_interpreter_agent(
            task=body.task,
            file_names=body.file_names,
            stream=body.stream,
        ):
            content += chunk
        file_info = [
            await upload_file(
                content=content,
                file_name=body.file_name,
                request_id=body.request_id,
                # note: file_type is declared on ReportRequest; CIRequest itself
                # does not define this field
                file_type="html" if body.file_type == "ppt" else body.file_type,
            )
        ]
        return {
            "code": 200,
            "data": content,
            "fileInfo": file_info,
            "requestId": body.request_id,
        }
```
Essentials of the API design:
- Streaming responses: multiple streaming modes improve the user experience
- File handling: smart resolution of file paths and an upload pipeline
- Heartbeat: a 15-second ping keeps the connection alive
- Error handling: thorough exception handling and status feedback
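The token and time buffering rules embedded in `_stream()` can be lifted out into a plain generator to observe the behavior in isolation (`batch_stream` is an illustrative rewrite, not project code):

```python
import time

def batch_stream(chunks, mode="general", token=5, time_window=5.0):
    # Replays the endpoint's buffering rules outside FastAPI:
    # - general: emit every chunk immediately
    # - token:   emit once `token` chunks have accumulated
    # - time:    emit once `time_window` seconds have elapsed
    # A trailing flush emits whatever remains, as the endpoint does.
    acc, count, last = "", 0, time.time()
    for chunk in chunks:
        if mode == "general":
            yield chunk
            continue
        acc += chunk
        count += 1
        if mode == "token" and count >= token:
            yield acc
            acc, count = "", 0
        elif mode == "time" and time.time() - last > time_window:
            yield acc
            acc, last = "", time.time()
    if mode in ("token", "time") and acc:
        yield acc

out = list(batch_stream(list("abcdefg"), mode="token", token=3))
```

Batching trades latency for fewer SSE frames; `general` mode forwards every model delta, which is smoothest for the UI but chattiest on the wire.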
2.2 The Code Interpreter Core
2.2.1 Code Interpreter Architecture
The code interpreter is built on the smolagents framework and provides a safe, reliable Python execution environment:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/7
# =====================
import asyncio
import importlib
import os
import shutil
import tempfile
from typing import List, Optional

import pandas as pd
import yaml
from jinja2 import Template
from smolagents import LiteLLMModel, FinalAnswerStep, PythonInterpreterTool, ChatMessageStreamDelta

from genie_tool.tool.ci_agent import CIAgent
from genie_tool.util.file_util import download_all_files_in_path, upload_file, upload_file_by_path
from genie_tool.util.log_util import timer
from genie_tool.util.prompt_util import get_prompt
import requests
from genie_tool.model.code import ActionOutput, CodeOuput


@timer()
async def code_interpreter_agent(
    task: str,
    file_names: Optional[List[str]] = None,
    max_file_abstract_size: int = 2000,
    max_tokens: int = 32000,
    request_id: str = "",
    stream: bool = True,
):
    work_dir = ""
    try:
        work_dir = tempfile.mkdtemp()
        output_dir = os.path.join(work_dir, "output")
        os.makedirs(output_dir, exist_ok=True)
        import_files = await download_all_files_in_path(file_names=file_names, work_dir=work_dir)

        # 1. preprocess input files
        files = []
        if import_files:
            for import_file in import_files:
                file_name = import_file["file_name"]
                file_path = import_file["file_path"]
                if not file_name or not file_path:
                    continue
                # spreadsheet files
                if file_name.split(".")[-1] in ["xlsx", "xls", "csv"]:
                    pd.set_option("display.max_columns", None)
                    df = (
                        pd.read_csv(file_path)
                        if file_name.endswith(".csv")
                        else pd.read_excel(file_path)
                    )
                    files.append({"path": file_path, "abstract": f"{df.head(10)}"})
                # text files
                elif file_name.split(".")[-1] in ["txt", "md", "html"]:
                    with open(file_path, "r") as rf:
                        files.append(
                            {
                                "path": file_path,
                                "abstract": "".join(rf.readlines())[
                                    :max_file_abstract_size
                                ],
                            }
                        )

        # 2. build the prompt
        ci_prompt_template = get_prompt("code_interpreter")

        # 3. run the CodeAgent
        agent = create_ci_agent(
            prompt_templates=ci_prompt_template,
            max_tokens=max_tokens,
            return_full_result=True,
            output_dir=output_dir,
        )
        template_task = Template(ci_prompt_template["task_template"]).render(
            files=files, task=task, output_dir=output_dir
        )
        if stream:
            for step in agent.run(task=str(template_task), stream=True, max_steps=10):
                if isinstance(step, CodeOuput):
                    file_info = await upload_file(
                        content=step.code,
                        file_name=step.file_name,
                        file_type="py",
                        request_id=request_id,
                    )
                    step.file_list = [file_info]
                    yield step
                elif isinstance(step, FinalAnswerStep):
                    file_list = []
                    file_path = get_new_file_by_path(output_dir=output_dir)
                    if file_path:
                        file_info = await upload_file_by_path(
                            file_path=file_path, request_id=request_id
                        )
                        if file_info:
                            file_list.append(file_info)
                    code_name = f"{task[:20]}_代码输出.md"
                    file_list.append(
                        await upload_file(
                            content=step.output,
                            file_name=code_name,
                            file_type="md",
                            request_id=request_id,
                        )
                    )
                    output = ActionOutput(content=step.output, file_list=file_list)
                    yield output
                elif isinstance(step, ChatMessageStreamDelta):
                    # yield step.content
                    pass
                await asyncio.sleep(0)
        else:
            output = agent.run(task=task)
            yield output
    except Exception as e:
        raise e
    finally:
        if work_dir:
            shutil.rmtree(work_dir, ignore_errors=True)


def create_ci_agent(
    prompt_templates=None,
    max_tokens: int = 16000,
    return_full_result: bool = True,
    output_dir: str = "",
) -> CIAgent:
    model = LiteLLMModel(
        max_tokens=max_tokens,
        model_id=os.getenv("CODE_INTEPRETER_MODEL", "gpt-4.1")
    )
    return CIAgent(
        model=model,
        prompt_templates=prompt_templates,
        tools=[PythonInterpreterTool()],
        return_full_result=return_full_result,
        additional_authorized_imports=[
            "pandas",
            "openpyxl",
            "numpy",
            "matplotlib",
            "seaborn",
        ],
        output_dir=output_dir,
    )
```
Code interpreter highlights:
- Secure sandbox: execution runs inside smolagents' restricted environment
- File handling: automatic parsing of multiple file formats
- Streaming output: code execution progress is shown in real time
- Temporary directories: working files are created and cleaned up automatically
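The temporary-directory lifecycle deserves emphasis: each request gets a fresh `mkdtemp()` workspace with an `output/` subfolder, and the `finally` block removes it even when the task raises. A minimal standalone version of the pattern (`run_in_sandbox_dir` and `demo_task` are illustrative names, not project code):

```python
import os
import shutil
import tempfile

def run_in_sandbox_dir(task):
    # Mimic the interpreter's working-directory lifecycle: a fresh temp
    # dir with an output/ subfolder, removed even if the task fails.
    work_dir = tempfile.mkdtemp()
    output_dir = os.path.join(work_dir, "output")
    os.makedirs(output_dir, exist_ok=True)
    try:
        return task(output_dir)
    finally:
        shutil.rmtree(work_dir, ignore_errors=True)

def demo_task(output_dir):
    # Stand-in for generated code that writes an artifact.
    path = os.path.join(output_dir, "result.txt")
    with open(path, "w") as f:
        f.write("done")
    return os.path.exists(path), output_dir

existed, out_dir = run_in_sandbox_dir(demo_task)
```

Because the cleanup sits in `finally` with `ignore_errors=True`, a crashing task can never leave per-request files behind or abort the cleanup itself.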
2.3 The Deep Search Engine
2.3.1 Multi-Engine Search Architecture
The deep search engine aggregates multiple search engines and deduplicates results intelligently:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: wanghanmin1
# Date: 2025/7/8
# =====================
import asyncio
import json
import os
from concurrent.futures import ThreadPoolExecutor, as_completed
from functools import partial
from typing import List, AsyncGenerator, Tuple

from genie_tool.util.log_util import logger
from genie_tool.util.llm_util import ask_llm
from genie_tool.model.document import Doc
from genie_tool.util.log_util import timer
from genie_tool.tool.search_component.query_process import query_decompose
from genie_tool.tool.search_component.answer import answer_question
from genie_tool.tool.search_component.reasoning import search_reasoning
from genie_tool.tool.search_component.search_engine import MixSearch
from genie_tool.model.protocal import StreamMode
from genie_tool.util.file_util import truncate_files
from genie_tool.model.context import LLMModelInfoFactory


class DeepSearch:
    """Deep search tool"""

    def __init__(self, engines: List[str] = []):
        if not engines:
            engines = os.getenv("USE_SEARCH_ENGINE", "bing").split(",")
        use_bing = "bing" in engines
        use_jina = "jina" in engines
        use_sogou = "sogou" in engines
        use_serp = "serp" in engines
        self._search_single_query = partial(
            MixSearch().search_and_dedup, use_bing=use_bing, use_jina=use_jina, use_sogou=use_sogou, use_serp=use_serp)
        self.searched_queries = []
        self.current_docs = []

    @timer()
    async def run(
        self,
        query: str,
        request_id: str = None,
        max_loop: int = 1,
        stream: bool = False,
        stream_mode: StreamMode = StreamMode(),
        *args,
        **kwargs
    ) -> AsyncGenerator[str, None]:
        """Deep search response (streaming)"""
        current_loop = 1
        # the deep-search loop
        while current_loop <= max_loop:
            logger.info(f"{request_id} deep search round {current_loop}...")
            # decompose the query into sub-queries
            sub_queries = await query_decompose(query=query)
            yield json.dumps({
                "requestId": request_id,
                "query": query,
                "searchResult": {"query": sub_queries, "docs": [[]] * len(sub_queries)},
                "isFinal": False,
                "messageType": "extend"
            }, ensure_ascii=False)
            await asyncio.sleep(0.1)
            # drop queries that were already searched
            sub_queries = [sub_query for sub_query in sub_queries
                           if sub_query not in self.searched_queries]
            # search in parallel and deduplicate
            searched_docs, docs_list = await self._search_queries_and_dedup(
                queries=sub_queries,
                request_id=request_id,
            )
            truncate_len = int(os.getenv("SINGLE_PAGE_MAX_SIZE", 200))
            yield json.dumps(
                {
                    "requestId": request_id,
                    "query": query,
                    "searchResult": {
                        "query": sub_queries,
                        "docs": [[d.to_dict(truncate_len=truncate_len) for d in docs_l] for docs_l in docs_list]
                    },
                    "isFinal": False,
                    "messageType": "search"
                }, ensure_ascii=False)
            # update the search context
            self.current_docs.extend(searched_docs)
            self.searched_queries.extend(sub_queries)
            # last round: exit directly
            if current_loop == max_loop:
                break
            # reasoning step: decide whether to keep searching
            reasoning_result = search_reasoning(
                request_id=request_id,
                query=query,
                content=self.search_docs_str(os.getenv("SEARCH_REASONING_MODEL")),
            )
            # if reasoning decides the query can already be answered, stop looping
            if reasoning_result.get("is_verify", "1") in ["1", 1]:
                logger.info(f"{request_id} reasoning produced no new queries, finishing")
                break
            current_loop += 1

        # generate the final answer
        answer = ""
        acc_content = ""
        acc_token = 0
        async for chunk in answer_question(
            query=query, search_content=self.search_docs_str(os.getenv("SEARCH_ANSWER_MODEL"))
        ):
            if stream:
                if acc_token >= stream_mode.token:
                    yield json.dumps({
                        "requestId": request_id,
                        "query": query,
                        "searchResult": {
                            "query": [],
                            "docs": [],
                        },
                        "answer": acc_content,
                        "isFinal": False,
                        "messageType": "report"
                    }, ensure_ascii=False)
                    acc_content = ""
                    acc_token = 0
            acc_content += chunk
            acc_token += 1
            answer += chunk
        # ... final-answer handling logic
```
Deep search highlights:
- Multi-engine aggregation: supports Bing, Jina, Sogou, SERP, and other engines
- Smart deduplication: document dedup based on content similarity
- Iterative search: multiple search rounds with reasoning-based verification
- Parallel processing: search requests run concurrently in a thread pool
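The internals of `MixSearch.search_and_dedup` are not shown in this chapter, so as a simple stand-in, here is a link-based merge that keeps the first document seen for each URL across the per-engine result lists (the real implementation may use content similarity instead):

```python
def dedup_docs(result_lists):
    # Merge per-engine result lists, keeping the first doc seen for each link.
    # Falls back to a content prefix as the key when a doc has no link.
    seen, merged = set(), []
    for docs in result_lists:
        for doc in docs:
            key = doc.get("link") or doc.get("content", "")[:50]
            if key in seen:
                continue
            seen.add(key)
            merged.append(doc)
    return merged

# Two engines returning overlapping results (illustrative data).
bing = [{"link": "https://a", "content": "A"}, {"link": "https://b", "content": "B"}]
jina = [{"link": "https://b", "content": "B2"}, {"link": "https://c", "content": "C"}]
merged = dedup_docs([bing, jina])
```

Keeping the first occurrence implicitly trusts engine order; a similarity-based dedup would additionally collapse the same page served under different URLs.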
2.4 The Report Generator
2.4.1 Multi-Format Report Support
The report generator can produce reports in three formats: HTML, Markdown, and PPT:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/7
# =====================
import os
from datetime import datetime
from typing import Optional, List, Literal, AsyncGenerator

from dotenv import load_dotenv
from jinja2 import Template
from loguru import logger

from genie_tool.util.file_util import download_all_files, truncate_files, flatten_search_file
from genie_tool.util.prompt_util import get_prompt
from genie_tool.util.llm_util import ask_llm
from genie_tool.util.log_util import timer
from genie_tool.model.context import LLMModelInfoFactory

load_dotenv()


@timer(key="enter")
async def report(
    task: str,
    file_names: Optional[List[str]] = tuple(),
    model: str = "gpt-4.1",
    file_type: Literal["markdown", "html", "ppt"] = "markdown",
) -> AsyncGenerator:
    report_factory = {
        "ppt": ppt_report,
        "markdown": markdown_report,
        "html": html_report,
    }
    # note: the `model` argument is overridden by the REPORT_MODEL env var
    model = os.getenv("REPORT_MODEL", "gpt-4.1")
    async for chunk in report_factory[file_type](task, file_names, model):
        yield chunk


@timer(key="enter")
async def html_report(
    task,
    file_names: Optional[List[str]] = tuple(),
    model: str = "gpt-4.1",
    temperature: float = 0,
    top_p: float = 0.9,
) -> AsyncGenerator:
    files = await download_all_files(file_names)
    key_files = []
    flat_files = []
    # search files carry structure and need to be re-parsed
    for f in files:
        fpath = f["file_name"]
        fname = os.path.basename(fpath)
        if fname.split(".")[-1] in ["md", "txt", "csv"]:
            # code-interpreter output (filenames carry the "代码输出" marker)
            if "代码输出" in fname:
                key_files.append({"content": f["content"], "description": fname, "type": "txt", "link": fpath})
            # search result files
            elif fname.endswith("_search_result.txt"):
                try:
                    flat_files.extend([{
                        "content": tf["content"],
                        "description": tf.get("title") or tf["content"][:20],
                        "type": "txt",
                        "link": tf.get("link"),
                    } for tf in flatten_search_file(f)
                    ])
                except Exception as e:
                    logger.warning(f"html_report parser file [{fpath}] error: {e}")
            # other files
            else:
                flat_files.append({
                    "content": f["content"],
                    "description": fname,
                    "type": "txt",
                    "link": fpath
                })
    discount = int(LLMModelInfoFactory.get_context_length(model) * 0.8)
    key_files = truncate_files(key_files, max_tokens=discount)
    flat_files = truncate_files(flat_files, max_tokens=discount - sum([len(f["content"]) for f in key_files]))
    report_prompts = get_prompt("report")
    prompt = Template(report_prompts["html_task"]) \
        .render(task=task, key_files=key_files, files=flat_files, date=datetime.now().strftime('%Y年%m月%d日'))
    async for chunk in ask_llm(
            messages=[{"role": "system", "content": report_prompts["html_prompt"]},
                      {"role": "user", "content": prompt}],
            model=model, stream=True, temperature=temperature, top_p=top_p, only_content=True):
        yield chunk
```
Report generation highlights:
- Multiple formats: HTML pages, Markdown documents, and PPT presentations
- Smart parsing: automatic recognition and parsing of different input file types
- Template engine: flexible content rendering with Jinja2
- Context management: intelligent truncation and prioritization of file content
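The budgeting logic in `html_report` is subtle: `truncate_files` approximates tokens by character count, key files (the code-interpreter output) are truncated against 80% of the model's context window first, and the flattened search files only receive what remains. A condensed stdlib version of that flow (`budget_files` is an illustrative rewrite, not project code):

```python
def budget_files(key_files, flat_files, context_length, ratio=0.8):
    # Key files win under pressure: they are truncated against the full
    # budget first, and flat files only get the leftover budget.
    budget = int(context_length * ratio)

    def truncate(files, max_tokens):
        # Character count stands in for tokens, as in the project's truncate_files.
        out, used = [], 0
        for f in files:
            if used >= max_tokens:
                break
            content = f["content"][: max_tokens - used]
            used += len(content)
            out.append({**f, "content": content})
        return out

    kept_key = truncate(key_files, budget)
    remaining = budget - sum(len(f["content"]) for f in kept_key)
    return kept_key, truncate(flat_files, remaining)

# A 100-char "context window": budget = 80, key file takes 60, flat gets 20.
key, flat = budget_files(
    [{"content": "K" * 60}], [{"content": "F" * 100}], context_length=100
)
```

The 80% discount leaves headroom for the prompt template and the generated report itself.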
Part 3: File Management and Performance Optimization 📊
3.1 File Management System Design
3.1.1 File Operation Utilities
The file management system provides comprehensive file-handling capabilities:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/7
# =====================
import secrets
import string
import json
import os
from copy import deepcopy
from typing import List, Dict, Any

import aiohttp
from loguru import logger

from genie_tool.util.log_util import timer
from genie_tool.model.document import Doc


@timer()
async def upload_file(
    content: str,
    file_name: str,
    file_type: str,
    request_id: str,
):
    if file_type == "markdown":
        file_type = "md"
    if not file_name.endswith(file_type):
        file_name = f"{file_name}.{file_type}"
    body = {
        "requestId": request_id,
        "fileName": file_name,
        "content": content,
        "description": content[:200],
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(
            f"{os.getenv('FILE_SERVER_URL')}/upload_file", json=body, timeout=10
        ) as response:
            result = json.loads(await response.text())
            return {
                "fileName": file_name,
                "ossUrl": result["downloadUrl"],
                "domainUrl": result["domainUrl"],
                "downloadUrl": result["downloadUrl"],
                "fileSize": len(content),
            }


@timer()
async def download_all_files(file_names: list[str]) -> List[Dict[str, Any]]:
    file_contents = []
    for file_name in file_names:
        try:
            file_contents.append(
                {
                    "file_name": file_name,
                    "content": await get_file_content(file_name),
                }
            )
        except Exception as e:
            logger.warning(f"Failed to download file {file_name}. Exception: {e}")
            file_contents.append(
                {
                    "file_name": file_name,
                    "content": "Failed to get content.",
                }
            )
    return file_contents


@timer()
def truncate_files(
    files: List[Dict[str, Any]] | List[Doc], max_tokens: int
) -> List[Dict[str, Any]] | List[Doc]:
    """Approximates the token count by string length"""
    truncated_files = []
    token_size = 0
    for f_a in files:
        f = deepcopy(f_a)
        if token_size >= max_tokens:
            break
        if isinstance(f, Doc):
            dct = f.to_dict()
            dct["content"] = dct["content"][: max_tokens - token_size]
            token_size += len(dct["content"] or "")
            f = Doc(**dct)
        else:
            f["content"] = f["content"][: max_tokens - token_size]
            token_size += len(f.get("content", ""))
        truncated_files.append(f)
    return truncated_files
```
File management highlights:
- Asynchronous processing: all file operations use async IO for better throughput
- Error handling: thorough exception capture with informative messages
- Smart truncation: file content is truncated against a token budget
- Lifecycle management: temporary files are cleaned up automatically
3.2 LLM Integration and Call Optimization
3.2.1 A Unified LLM Call Interface
The LLM utility exposes a unified model-invocation interface:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/8
# =====================
import json
import os
from typing import List, Any, Optional

from litellm import acompletion

from genie_tool.util.log_util import timer, AsyncTimer
from genie_tool.util.sensitive_detection import SensitiveWordsReplace


@timer(key="enter")
async def ask_llm(
    messages: str | List[Any],
    model: str,
    temperature: float = None,
    top_p: float = None,
    stream: bool = False,
    # custom fields
    only_content: bool = False,  # yield only the message content
    extra_headers: Optional[dict] = None,
    **kwargs,
):
    if isinstance(messages, str):
        messages = [{"role": "user", "content": messages}]
    if os.getenv("SENSITIVE_WORD_REPLACE", "false") == "true":
        for message in messages:
            if isinstance(message.get("content"), str):
                message["content"] = SensitiveWordsReplace.replace(message["content"])
            else:
                message["content"] = json.loads(
                    SensitiveWordsReplace.replace(json.dumps(message["content"], ensure_ascii=False)))
    response = await acompletion(
        messages=messages,
        model=model,
        temperature=temperature,
        top_p=top_p,
        stream=stream,
        extra_headers=extra_headers,
        **kwargs
    )
    async with AsyncTimer(key=f"exec ask_llm"):
        if stream:
            async for chunk in response:
                if only_content:
                    if chunk.choices and chunk.choices[0] and chunk.choices[0].delta and chunk.choices[0].delta.content:
                        yield chunk.choices[0].delta.content
                else:
                    yield chunk
        else:
            yield response.choices[0].message.content if only_content else response
```
LLM call optimizations:
- Unified interface: multi-model invocation through LiteLLM
- Sensitive-word filtering: a configurable sensitive-word replacement mechanism
- Performance monitoring: detailed call timing and statistics
- Streaming support: both streaming and non-streaming call modes
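The sensitive-word branch in `ask_llm` handles two content shapes: plain strings are replaced directly, while structured content is serialized to JSON, replaced, and parsed back. A self-contained sketch of that handling (`RULES` and `replace_words` are hypothetical stand-ins; the real rules live in `SensitiveWordsReplace`):

```python
import json

# Hypothetical replacement table for demonstration only.
RULES = {"secret-token": "***"}

def replace_words(text: str) -> str:
    for word, mask in RULES.items():
        text = text.replace(word, mask)
    return text

def sanitize_message(message: dict) -> dict:
    # Mirrors ask_llm's handling: string content is replaced directly;
    # structured content goes through a JSON round-trip so nested
    # strings are covered too.
    content = message.get("content")
    if isinstance(content, str):
        message["content"] = replace_words(content)
    else:
        message["content"] = json.loads(
            replace_words(json.dumps(content, ensure_ascii=False))
        )
    return message

m1 = sanitize_message({"role": "user", "content": "use secret-token here"})
m2 = sanitize_message({"role": "user", "content": [{"text": "secret-token"}]})
```

The JSON round-trip is a pragmatic trick: it reaches every nested string without walking the structure by hand, at the cost of a serialize/deserialize cycle per message.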
3.3 Performance Monitoring and Logging
3.3.1 Performance Monitoring Decorators
The logging utility provides thorough performance-monitoring support:
```python
# -*- coding: utf-8 -*-
# =====================
#
#
# Author: liumin.423
# Date: 2025/7/8
# =====================
import asyncio
import functools
import time
import traceback

from loguru import logger

from genie_tool.model.context import RequestIdCtx


class Timer(object):
    def __init__(self, key: str):
        self.key = key

    def __enter__(self):
        self.start_time = time.time()
        logger.info(f"{RequestIdCtx.request_id} {self.key} start...")
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None:
            logger.error(f"{RequestIdCtx.request_id} {self.key} error={exc_tb}")
        else:
            logger.info(f"{RequestIdCtx.request_id} {self.key} cost=[{int((time.time() - self.start_time) * 1000)} ms]")


class AsyncTimer(object):
    def __init__(self, key: str):
        self.key = key

    async def __aenter__(self):
        self.start_time = time.time()
        logger.info(f"{RequestIdCtx.request_id} {self.key} start...")
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None:
            logger.error(f"{RequestIdCtx.request_id} {self.key} error={traceback.format_exc()}")
        else:
            logger.info(f"{RequestIdCtx.request_id} {self.key} cost=[{int((time.time() - self.start_time) * 1000)} ms]")


def timer(key: str = ""):
    def decorator(func):
        if asyncio.iscoroutinefunction(func):
            @functools.wraps(func)
            async def wrapper(*args, **kwargs):
                async with AsyncTimer(f"{key} {func.__name__}"):
                    result = await func(*args, **kwargs)
                return result
            return wrapper
        else:
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                with Timer(f"{key} {func.__name__}"):
                    result = func(*args, **kwargs)
                return result
            return wrapper
    return decorator
```
Performance monitoring highlights:
- Automatic timing: function execution times are recorded automatically
- Async support: both synchronous and asynchronous functions are covered
- Request tracing: request-chain tracing keyed on the RequestId
- Exception logging: full stack traces are captured
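Applying the decorator is a one-liner per function. The condensed version below keeps the same sync/async dispatch but records `(key, cost_ms)` tuples into a list instead of logging, so the effect is directly observable (a simplified illustration, not the project's actual implementation):

```python
import asyncio
import functools
import time

RECORDS = []  # stands in for loguru so the timing side effect is visible

def timer(key: str = ""):
    # Picks an async or sync wrapper depending on the decorated
    # function, exactly as log_util.py does.
    def decorator(func):
        if asyncio.iscoroutinefunction(func):
            @functools.wraps(func)
            async def async_wrapper(*args, **kwargs):
                start = time.time()
                result = await func(*args, **kwargs)
                RECORDS.append((f"{key} {func.__name__}", int((time.time() - start) * 1000)))
                return result
            return async_wrapper
        @functools.wraps(func)
        def sync_wrapper(*args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            RECORDS.append((f"{key} {func.__name__}", int((time.time() - start) * 1000)))
            return result
        return sync_wrapper
    return decorator

@timer(key="demo")
def add(a, b):
    return a + b

@timer(key="demo")
async def mul(a, b):
    return a * b

total = add(2, 3)
product = asyncio.run(mul(2, 3))
```

`functools.wraps` matters here: without it, the decorated name in the log would always be `wrapper`, defeating the per-function timing.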
Part 4: Extensibility Design and Architectural Essence 🎯
4.1 The Essence of Modular Design
4.1.1 Layered Architecture
The Python tool service adopts a clean layered architecture:
- API layer:
  - unified route management and parameter validation
  - standardized response formats and error handling
  - middleware for cross-cutting concerns
- Business logic layer:
  - concrete implementations of the core tools
  - orchestration and control of business flows
  - data transformation and formatting
- Data access layer:
  - file storage and management
  - database operations and queries
  - external service integration
- Infrastructure layer:
  - logging and performance monitoring
  - configuration and environment management
  - utility classes and shared services
4.1.2 Dependency Injection and Configuration Management
The system achieves flexible configuration through environment variables:
```python
# LLM model configuration
model = LiteLLMModel(
    max_tokens=max_tokens,
    model_id=os.getenv("CODE_INTEPRETER_MODEL", "gpt-4.1")
)
# search engine configuration
engines = os.getenv("USE_SEARCH_ENGINE", "bing").split(",")
# file server configuration
body.file_names[idx] = f"{os.getenv('FILE_SERVER_URL')}/preview/{body.request_id}/{f_name}"
# performance tuning parameters
max_workers = int(os.getenv("SEARCH_THREAD_NUM", 5))
truncate_len = int(os.getenv("SINGLE_PAGE_MAX_SIZE", 200))
```
Advantages of this configuration approach:
- Environment isolation: each environment can use its own settings
- Dynamic tuning: parameters can be adjusted without rebuilding the service
- Protective defaults: sensible fallbacks keep the system running
- Type safety: values are converted and validated appropriately
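One caveat: the project calls `int(os.getenv(...))` directly, which raises `ValueError` on a malformed value. A small helper hardens the pattern with a fallback (`env_int` is an illustrative addition, not project code):

```python
import os

def env_int(name: str, default: int) -> int:
    # Read an integer setting with a safe fallback; unlike a bare
    # int(os.getenv(...)), a missing or malformed value falls back
    # to the default instead of raising at request time.
    raw = os.getenv(name)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        return default

os.environ["SEARCH_THREAD_NUM"] = "8"
os.environ.pop("SINGLE_PAGE_MAX_SIZE", None)  # ensure unset for the demo
threads = env_int("SEARCH_THREAD_NUM", 5)
page_size = env_int("SINGLE_PAGE_MAX_SIZE", 200)
```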
4.2 Security and Reliability Guarantees
4.2.1 Code Execution Security
The code interpreter applies multiple layers of protection:
- Sandboxing: smolagents' restricted execution environment
- Import control: strict control over which Python modules can be imported
- Resource limits: bounds on execution time and resource consumption
- Temporary isolation: per-request temporary directories isolate file operations
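Import control is enforced inside smolagents via the `additional_authorized_imports` list passed to `create_ci_agent`. To show the idea behind such a check (this sketch only illustrates the concept; it is not how smolagents implements it), an allowlist can be verified statically with `ast` before any code runs:

```python
import ast

# Mirrors the allowlist passed to CIAgent in create_ci_agent.
ALLOWED_IMPORTS = {"pandas", "openpyxl", "numpy", "matplotlib", "seaborn"}

def check_imports(source: str, allowed=ALLOWED_IMPORTS):
    # Parse the code without executing it and collect every top-level
    # module name that is not on the allowlist.
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [a.name.split(".")[0] for a in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        violations.extend(n for n in names if n and n not in allowed)
    return violations

ok = check_imports("import pandas as pd\nfrom numpy import mean")
bad = check_imports("import os, socket")
```

A static check like this catches only literal import statements; smolagents' interpreter additionally guards dynamic imports at execution time.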
4.2.2 Exception Handling and Recovery
The system implements multi-layer exception handling:
```python
# global exception middleware
class UnknownException(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:
        try:
            return await call_next(request)
        except Exception as e:
            logger.error(f"{RequestIdCtx.request_id} {request.method} {request.url.path} error={traceback.format_exc()}")
            return Response(content=f"Unexpected error: {e}", status_code=500)

# business-logic exception handling
try:
    # business logic
    pass
except Exception as e:
    logger.warning(f"Failed to download file {file_name}. Exception: {e}")
    # degrade gracefully
finally:
    # resource cleanup
    if work_dir:
        shutil.rmtree(work_dir, ignore_errors=True)
```
4.3 Future Extension Directions
4.3.1 Expanding the Tool Ecosystem
- New tool integration:
  - support for more kinds of tool plugins
  - a unified tool interface specification
  - tool versioning and upgrade mechanisms
- Capability enhancement:
  - more powerful code execution
  - smarter search algorithms
  - richer report formats
4.3.2 Performance Optimization Upgrades
- Caching:
  - search result caching
  - file content caching
  - LLM response caching
- Resource optimization:
  - connection pool management
  - memory usage optimization
  - CPU scheduling
Summary: Design Essence and Practical Value of the Python Service 🚀
Recap
From this deep analysis of the JoyAgent-JDGenie Python tool service, the following design essentials stand out:
Architecture highlights
- Modern stack: the FastAPI + smolagents + LiteLLM combination delivers strong asynchronous performance
- Modular architecture: clear layering and separation of responsibilities enable extensive code reuse
- Async-first: pervasive asynchronous programming sustains high concurrency
- Safe and reliable: layered security safeguards and exception-handling mechanisms
Feature highlights
- Code interpreter: a secure Python execution environment built on smolagents
- Deep search: multi-engine aggregation with intelligent deduplication
- Report generation: intelligent report output in multiple formats
- File management: complete file lifecycle management
Best-practice recommendations
- Prefer async: choose asynchronous frameworks and libraries, and avoid blocking operations
- Design modularly: clear layering and loosely coupled modules
- Manage resources: release resources promptly, use connection pools, and monitor usage
- Stay secure: strict parameter validation, sandbox isolation, and the principle of least privilege
The JoyAgent-JDGenie Python tool service offers a complete technical reference for building the tool layer of a modern multi-agent system. Leveraging the strengths of the FastAPI ecosystem together with carefully designed architectural patterns and optimization strategies, it delivers a high-performance, highly available, and secure tool service.