[The Essence of LangChain Agents] How Do Middleware Decorators Convert Functions into AgentMiddleware?

Besides defining a class that inherits from AgentMiddleware, a middleware can also be created by applying the appropriate decorator to a function with a matching signature, converting it into an AgentMiddleware object. The available decorators include:

  • @before_agent
  • @before_model
  • @after_model
  • @after_agent
  • @wrap_model_call
  • @wrap_tool_call
  • @dynamic_prompt

1. Lifecycle Interceptors

The AgentMiddleware methods that intercept execution before and after the Agent and the Model correspond to the following four decorators:

  • @before_agent: runs before the Agent starts (executed only once per Agent invocation);
  • @before_model: runs before each Model call;
  • @after_model: runs after each Model call;
  • @after_agent: runs after the Agent invocation completes (executed only once per Agent invocation).
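The relative ordering of these four hooks can be sketched with a small dependency-free simulation (this mirrors the sequence described above; it is not LangChain's actual execution loop):

```python
def run_agent(turns: int = 2) -> list[str]:
    """Simulate the hook ordering for one agent invocation with `turns` model calls."""
    trace = ["before_agent"]                       # once per agent invocation
    for _ in range(turns):                         # one iteration per model call
        trace += ["before_model", "model", "after_model"]
    trace.append("after_agent")                    # once per agent invocation
    return trace

print(run_agent(2))
```

With two model calls, before_model/after_model fire twice, while before_agent/after_agent fire exactly once each.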

1.1 How Is the Middleware Created?

These decorators create a class that inherits from AgentMiddleware and install a suitably wrapped version of the decorated function as the corresponding method. Take the following log_response middleware as an example: after each model call it prints the first 50 characters of the AIMessage content along with the token usage.

```python
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware import after_model
from langchain_core.tools import tool
from langchain_core.messages import AIMessage
from langchain_openai import ChatOpenAI
from langgraph.runtime import Runtime
from typing import cast
from dotenv import load_dotenv

load_dotenv()

@after_model
def log_response(state: AgentState, runtime: Runtime) -> dict | None:
    # The last message is the AIMessage just produced by the model.
    message = cast(AIMessage, state["messages"][-1])
    print(f"Response: {str(message.content)[:50]}...")
    if message.usage_metadata:
        print(f"Token usage: {message.usage_metadata}")
    return None

@tool
def get_weather(city: str) -> str:
    """A tool to get weather information for a given city."""
    return f"It's sunny today in {city}."

agent = create_agent(
    model=ChatOpenAI(model="gpt-5.2-chat"),
    tools=[get_weather],
    middleware=[log_response],
)

agent.invoke(input={"messages": [{"role": "user", "content": "What is the weather like in Suzhou?"}]})
```

Output:

```
Response: ...
Token usage: {'input_tokens': 138, 'output_tokens': 89, 'total_tokens': 227, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 64}}

Response: It's **sunny** in Suzhou today. ☀️...
Token usage: {'input_tokens': 174, 'output_tokens': 18, 'total_tokens': 192, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}
```

If we remove the @after_model decorator from the log_response function, we can also convert the function into a middleware in the following way.

```python
from langchain.agents.middleware import AgentMiddleware

def wrapped(self: AgentMiddleware, state: AgentState, runtime: Runtime):
    # Delegate to the undecorated function, discarding `self`.
    return log_response(state, runtime)

middleware = type(
    "log_response",
    (AgentMiddleware,),
    {
        "state_schema": AgentState,
        "tools": [],
        "after_model": wrapped,
    },
)()
```

The concrete steps are:

  • wrap the log_response function in a wrapped function whose signature matches AgentMiddleware's after_model method;
  • call the type function to generate a class named "log_response" that inherits from AgentMiddleware, setting three members on the generated class: state_schema, tools, and after_model, where the after_model method is the wrapping function;
  • use the generated class as a constructor to create the middleware instance.

The @after_model decorator converts a synchronous function into a middleware in essentially this way. Wrapping an asynchronous function works similarly, except that the wrapping function is installed as the aafter_model method of the generated middleware class.
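The conversion just described can be sketched in plain Python with no LangChain dependency. The AgentMiddleware base class below is a hypothetical stand-in for the real one; the decorator builds a subclass via type() exactly as outlined above:

```python
# Hypothetical stand-in for LangChain's AgentMiddleware, for illustration only.
class AgentMiddleware:
    state_schema: type = dict
    tools: list = []

def after_model(func):
    """Sketch of the decorator: install func as the after_model method of a generated subclass."""
    def wrapped(self, state, runtime):
        # Delegate to the undecorated function, dropping `self`.
        return func(state, runtime)

    cls = type(func.__name__, (AgentMiddleware,), {
        "state_schema": dict,
        "tools": [],
        "after_model": wrapped,
    })
    return cls()  # an instance, ready to be passed as middleware

@after_model
def log_response(state, runtime):
    return {"logged": True}

print(type(log_response).__name__)                 # log_response
print(isinstance(log_response, AgentMiddleware))   # True
```

Note that after decoration the name log_response no longer refers to a function but to a middleware instance whose class carries the function's name.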

1.2 Decorator Function Definition

Shown below is the overload of the decorator function before_model together with its implementation signature. The func parameter is the middleware function we define; the return value is either an AgentMiddleware object representing the middleware, or a factory function, Callable[[_CallableWithStateAndRuntime], AgentMiddleware], used to create one.

```python
@overload
def before_model(
    func: None = None,
    *,
    state_schema: type[StateT] | None = None,
    tools: list[BaseTool] | None = None,
    can_jump_to: list[JumpTo] | None = None,
    name: str | None = None,
) -> Callable[[_CallableWithStateAndRuntime[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]: ...

def before_model(
    func: _CallableWithStateAndRuntime[StateT, ContextT] | None = None,
    *,
    state_schema: type[StateT] | None = None,
    tools: list[BaseTool] | None = None,
    can_jump_to: list[JumpTo] | None = None,
    name: str | None = None,
) -> (
    Callable[[_CallableWithStateAndRuntime[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]
    | AgentMiddleware[StateT, ContextT]
): ...

class _CallableWithStateAndRuntime(Protocol[StateT_contra, ContextT]):
    def __call__(
        self, state: StateT_contra, runtime: Runtime[ContextT]
    ) -> dict[str, Any] | Command[Any] | None | Awaitable[dict[str, Any] | Command[Any] | None]: ...

StateT = TypeVar("StateT", bound=AgentState[Any], default=AgentState[Any])
StateT_contra = TypeVar("StateT_contra", bound=AgentState[Any], contravariant=True)
JumpTo = Literal["tools", "model", "end"]
```

The type of the func parameter, _CallableWithStateAndRuntime, reflects the signature a decorated middleware function must have: two parameters, state and runtime, representing the state and the runtime. A synchronous function can return a dict or a Command object to express an incremental state update or a jump to another node; if the middleware's effect is limited to the function body itself, it simply returns None. An asynchronous function correspondingly returns Awaitable[dict[str, Any] | Command[Any] | None]. The parameters of the @before_model decorator allow the following settings:

  • state_schema: the state schema type;
  • tools: extra tools registered by the middleware;
  • can_jump_to: the target nodes that can be jumped to;
  • name: the name of the generated middleware class.

The other three decorator functions (@after_model, @before_agent, and @after_agent) are defined similarly.
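The func: None = None overload is what allows such a decorator to be used both bare (@before_model) and with keyword arguments (@before_model(name=...), which returns a factory). A dependency-free sketch of this dual-use pattern (tagging the function with a middleware_name attribute is purely illustrative; the real decorator builds an AgentMiddleware subclass instead):

```python
def before_model(func=None, *, name=None):
    """Usable as @before_model or @before_model(name=...)."""
    def to_middleware(f):
        # Stand-in for building an AgentMiddleware subclass via type().
        f.middleware_name = name or f.__name__
        return f

    if func is not None:
        return to_middleware(func)   # bare usage: @before_model
    return to_middleware             # parameterized usage: returns a factory

@before_model
def hook_a(state, runtime): ...

@before_model(name="custom")
def hook_b(state, runtime): ...

print(hook_a.middleware_name)  # hook_a
print(hook_b.middleware_name)  # custom
```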

2. Call Wrappers

The wrapping of Model and Tool calls corresponds to the following two decorators, implemented in a way similar to the four decorators introduced above:

  • @wrap_model_call: wraps calls to the Model;
  • @wrap_tool_call: wraps calls to Tools.

2.1 How Is the Middleware Created?

Above we used the @after_model decorator to create a log_response middleware that logs a summary of the model response and the token consumption. The same middleware can also be implemented with the @wrap_model_call decorator:

```python
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from langchain_core.tools import tool
from langchain_core.messages import AIMessage
from langchain_openai import ChatOpenAI
from typing import Callable, cast
from dotenv import load_dotenv

load_dotenv()

@wrap_model_call
def log_response(request: ModelRequest, handler: Callable[[ModelRequest], ModelResponse]) -> ModelResponse:
    response = handler(request)
    message = cast(AIMessage, response.result[-1])
    print(f"Response: {str(message.content)[:50]}...")
    if message.usage_metadata:
        print(f"Token usage: {message.usage_metadata}")
    return response

@tool
def get_weather(city: str) -> str:
    """A tool to get weather information for a given city."""
    return f"It's sunny today in {city}."

agent = create_agent(
    model=ChatOpenAI(model="gpt-5.2-chat"),
    tools=[get_weather],
    middleware=[log_response],
)

agent.invoke(input={"messages": [{"role": "user", "content": "What is the weather like in Suzhou?"}]})
```

If we strip the @wrap_model_call decorator from the log_response function, we can convert log_response into the corresponding middleware object as shown below. This is essentially the logic @wrap_model_call uses to convert a synchronous function into a middleware object; the conversion for asynchronous functions is similar.

```python
from langchain.agents import AgentState
from langchain.agents.middleware import AgentMiddleware

def wrapped(
    self: AgentMiddleware,
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:
    return log_response(request, handler)

middleware = type(
    "log_response",
    (AgentMiddleware,),
    {
        "state_schema": AgentState,
        "tools": [],
        "wrap_model_call": wrapped,
    },
)()
```

2.2 Decorator Function Definition

Shown below are the two overloads of the decorator function @wrap_model_call together with its implementation signature. The func parameter is the middleware function we define; the return value is either an AgentMiddleware object representing the middleware, or a factory function, Callable[[_CallableReturningModelResponse], AgentMiddleware], used to create one.

```python
@overload
def wrap_model_call(
    func: _CallableReturningModelResponse[StateT, ContextT],
) -> AgentMiddleware[StateT, ContextT]: ...

@overload
def wrap_model_call(
    func: None = None,
    *,
    state_schema: type[StateT] | None = None,
    tools: list[BaseTool] | None = None,
    name: str | None = None,
) -> Callable[[_CallableReturningModelResponse[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]: ...

def wrap_model_call(
    func: _CallableReturningModelResponse[StateT, ContextT] | None = None,
    *,
    state_schema: type[StateT] | None = None,
    tools: list[BaseTool] | None = None,
    name: str | None = None,
) -> (
    Callable[[_CallableReturningModelResponse[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]
    | AgentMiddleware[StateT, ContextT]
): ...

class _CallableReturningModelResponse(Protocol[StateT_contra, ContextT]):
    def __call__(
        self,
        request: ModelRequest,
        handler: Callable[[ModelRequest], ModelResponse],
    ) -> ModelCallResult: ...

ModelCallResult: TypeAlias = ModelResponse | AIMessage
```

The type of the func parameter, _CallableReturningModelResponse, reflects the signature the decorated middleware function must have. Its two parameters, request and handler, represent the call request and the call handler, with types ModelRequest and Callable[[ModelRequest], ModelResponse] respectively. The @wrap_tool_call decorator function has a similar definition.
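The request/handler pair lets a wrap_model_call middleware run logic both before and after the model call, and multiple wrappers nest like an onion around the innermost handler. A dependency-free sketch (base_handler stands in for the actual model call; all names here are illustrative):

```python
calls = []

def base_handler(request):
    # Stands in for the actual model invocation.
    calls.append("model")
    return {"result": f"echo:{request}"}

def logging_wrapper(request, handler):
    # A wrap_model_call-style middleware: code before and after the delegated call.
    calls.append("before")
    response = handler(request)
    calls.append("after")
    return response

def compose(wrapper, handler):
    """Bind a wrapper around an existing handler, yielding a new handler."""
    return lambda request: wrapper(request, handler)

wrapped = compose(logging_wrapper, base_handler)
print(wrapped("hi"))   # {'result': 'echo:hi'}
print(calls)           # ['before', 'model', 'after']
```

Because compose returns an ordinary handler, a second wrapper can be bound around the result, which is how multiple wrap_model_call middlewares stack.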

3. The @dynamic_prompt Decorator

The @dynamic_prompt decorator shown below can be used to dynamically modify the model's system prompt.

```python
@overload
def dynamic_prompt(
    func: None = None,
) -> Callable[[_CallableReturningSystemMessage[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]: ...

def dynamic_prompt(
    func: _CallableReturningSystemMessage[StateT, ContextT] | None = None,
) -> (
    Callable[[_CallableReturningSystemMessage[StateT, ContextT]], AgentMiddleware[StateT, ContextT]]
    | AgentMiddleware[StateT, ContextT]
): ...

class _CallableReturningSystemMessage(Protocol[StateT_contra, ContextT]):
    def __call__(
        self, request: ModelRequest
    ) -> str | SystemMessage | Awaitable[str | SystemMessage]: ...
```

The _CallableReturningSystemMessage type reflects the signature of the decorated middleware function:

  • it takes a ModelRequest as its parameter;
  • a synchronous function returns a string or a SystemMessage representing the modified system instruction, while an asynchronous function returns an Awaitable[str | SystemMessage].
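Since the decorated function may return either a plain string or a SystemMessage, the decorator has to normalize the result before injecting it into the request. A minimal sketch of that normalization, using a hypothetical dataclass in place of langchain_core's SystemMessage:

```python
from dataclasses import dataclass

# Hypothetical stand-in for langchain_core.messages.SystemMessage, for illustration only.
@dataclass
class SystemMessage:
    content: str

def normalize_prompt(result: "str | SystemMessage") -> SystemMessage:
    """Coerce a str | SystemMessage return value into a SystemMessage."""
    if isinstance(result, SystemMessage):
        return result
    return SystemMessage(content=result)

print(normalize_prompt("You are an expert"))   # SystemMessage(content='You are an expert')
```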

The following demo uses the @dynamic_prompt decorator to define a role_based_prompt middleware, which dynamically changes the system prompt based on the role carried by the static context object Context. Another middleware, print_system_message, defined with the @wrap_model_call decorator, prints the system prompt that is ultimately used.

```python
from langchain.agents import create_agent
from langchain.agents.middleware import (
    dynamic_prompt,
    wrap_model_call,
    ModelRequest,
    ModelResponse,
)
from langchain_openai import ChatOpenAI
from typing import Callable, TypedDict, cast, Any
from dotenv import load_dotenv

load_dotenv()

class Context(TypedDict):
    user_role: str

@dynamic_prompt
def role_based_prompt(request: ModelRequest) -> str:
    context = cast(Context, request.runtime.context)
    return (
        "You are an expert"
        if context.get("user_role") == "expert"
        else "You are a helpful assistant"
    )

@wrap_model_call
def print_system_message(request: ModelRequest, handler: Callable[[ModelRequest], ModelResponse]) -> ModelResponse:
    system_message = request.system_message
    if system_message:
        print(f"System message: {system_message.content}")
    return handler(request)

middlewares: list[Any] = [role_based_prompt, print_system_message]
agent = create_agent(
    model=ChatOpenAI(model="gpt-5.2-chat"),
    middleware=middlewares,
    context_schema=Context,
)

input = {
    "messages": [{"role": "user", "content": "What is blackhole? Please answer it max 100 words."}],
}

result = agent.invoke(input, context={"user_role": "expert"})  # type: ignore
for message in result["messages"]:
    print(message.content)
result = agent.invoke(input, context={"user_role": "user"})  # type: ignore
for message in result["messages"]:
    print(message.content)
```

Output:

```
System message: You are an expert
What is blackhole? Please answer it max 100 words.
A **black hole** is a region in space where gravity is so strong that nothing---not even light---can escape from it. It forms when a massive star collapses under its own gravity at the end of its life. Black holes have an invisible boundary called the **event horizon** and a dense center known as a **singularity**. They cannot be seen directly, but their presence is detected by their effects on nearby stars, gas, and light.

System message: You are a helpful assistant
What is blackhole? Please answer it max 100 words.
A black hole is a region in space where gravity is so strong that nothing, not even light, can escape from it. It forms when a very massive star collapses under its own gravity. Black holes have an invisible boundary called the event horizon, beyond which nothing can return. They can grow by absorbing nearby matter and merging with other black holes.
```

If we remove the @dynamic_prompt decorator from the role_based_prompt function, we can convert this function into a middleware as follows:

```python
from langchain.agents import AgentState
from langchain.agents.middleware import AgentMiddleware
from langchain_core.messages import SystemMessage

def wrapped(
    self: AgentMiddleware,
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:
    prompt = role_based_prompt(request)
    # Override the system message with the dynamically produced prompt.
    return handler(request.override(system_message=SystemMessage(content=prompt)))

middleware = type(
    "role_based_prompt",
    (AgentMiddleware,),
    {
        "state_schema": AgentState,
        "tools": [],
        "wrap_model_call": wrapped,
    },
)()

The conversion proceeds as follows:

  • wrap the role_based_prompt function with the wrapped function;
  • use the type function to create a class named role_based_prompt that inherits from AgentMiddleware, with the wrapping function as its wrap_model_call method;
  • use the generated class as a constructor to create the middleware.

This code illustrates how @dynamic_prompt converts a synchronous function into a middleware; the conversion for asynchronous functions is similar.
