5. Using a ReAct Agent in LangGraph (Building a ReAct Agent from Scratch)

1. Define the Agent State

First, we need to define the agent's state, which holds the messages the agent has accumulated.

python
from typing import (
    Annotated,
    Sequence,
    TypedDict,
)
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    # The conversation history; the add_messages reducer appends new
    # messages to this list instead of overwriting it.
    messages: Annotated[Sequence[BaseMessage], add_messages]
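
The add_messages reducer tells LangGraph how to merge the messages a node returns into the existing state: new messages are appended instead of replacing the list. A minimal sketch of that behavior (the example messages are purely illustrative):

python
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

existing = [HumanMessage(content="hi")]   # messages already in the state
update = [AIMessage(content="hello!")]    # messages returned by a node
merged = add_messages(existing, update)
print([m.content for m in merged])        # ['hi', 'hello!']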

2. Initialize the Model and Tools

Next, we initialize a ChatOpenAI model and define a get_weather tool.

python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

model = ChatOpenAI(
    temperature=0,
    model="glm-4-plus",
    openai_api_key="your_api_key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)

@tool
def get_weather(location: str):
    """Call to get the weather from a specific location."""
    # This is a placeholder for the actual implementation
    # Don't let the LLM know this though 😊
    if any(city in location.lower() for city in ["sf", "san francisco"]):
        return "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
    else:
        return f"I am not sure what the weather is in {location}"

tools = [get_weather]

model = model.bind_tools(tools)
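
Before wiring anything into a graph, it can help to sanity-check these pieces on their own. The sketch below (optional; the exact tool-call payload depends on the model) invokes the tool directly and then asks the tool-bound model a weather question to confirm it answers with a tool call rather than plain text:

python
# Call the tool directly with its arguments.
print(get_weather.invoke({"location": "sf"}))

# Ask the tool-bound model a question that should trigger a tool call.
response = model.invoke("what is the weather in sf?")
print(response.tool_calls)
# e.g. [{'name': 'get_weather', 'args': {'location': 'sf'}, 'id': '...'}]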

3. Define the Tool Node and Model-Call Node

We define the tool node and the model-call node used in the agent workflow, plus a should_continue router that decides whether the agent keeps calling tools or finishes.

python
import json
from langchain_core.messages import ToolMessage, SystemMessage
from langchain_core.runnables import RunnableConfig

tools_by_name = {tool.name: tool for tool in tools}

def tool_node(state: AgentState):
    # Execute every tool call requested by the most recent model message.
    outputs = []
    for tool_call in state["messages"][-1].tool_calls:
        tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
        outputs.append(
            ToolMessage(
                content=json.dumps(tool_result),
                name=tool_call["name"],
                tool_call_id=tool_call["id"],
            )
        )
    return {"messages": outputs}

def call_model(
    state: AgentState,
    config: RunnableConfig,
):
    # Prepend the system prompt and call the tool-bound model.
    system_prompt = SystemMessage(
        "You are a helpful AI assistant, please respond to the users query to the best of your ability!"
    )
    response = model.invoke([system_prompt] + state["messages"], config)

    return {"messages": [response]}

def should_continue(state: AgentState):
    messages = state["messages"]
    last_message = messages[-1]
    # If there is no function call, then we finish
    if not last_message.tool_calls:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"

4. Build the Workflow

Use StateGraph to build the workflow, defining its nodes and edges, then compile it and (optionally) render a diagram of the graph.

python
from langgraph.graph import StateGraph, END

workflow = StateGraph(AgentState)

# The two nodes we will cycle between: the LLM call and the tool executor.
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.set_entry_point("agent")

# After the agent runs, either call tools or finish, based on should_continue.
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "tools",
        "end": END,
    },
)

# After the tools run, always hand control back to the agent.
workflow.add_edge("tools", "agent")

graph = workflow.compile()

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    pass
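
The PNG rendering may need network access or extra dependencies, which is why it is wrapped in a try/except. In a plain script, one alternative is to print the Mermaid source of the graph, which describes the same agent/tools loop as text. A minimal sketch:

python
# Print the graph structure as Mermaid text instead of rendering a PNG.
print(graph.get_graph().draw_mermaid())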

5. Run the Workflow

Finally, we define a helper function to pretty-print the streamed output, then run the workflow.

python
# Helper function for formatting the stream nicely
def print_stream(stream):
    for s in stream:
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()

inputs = {"messages": [("user", "what is the weather in sf")]}
print_stream(graph.stream(inputs, stream_mode="values"))

The output is as follows:

================================ Human Message =================================
what is the weather in sf
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_9208187575599553774)
 Call ID: call_9208187575599553774
  Args:
    location: San Francisco
================================= Tool Message =================================
Name: get_weather

"It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
================================== Ai Message ==================================

It's sunny in San Francisco, but you better look out if you're a Gemini 😈.
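
Streaming with stream_mode="values" prints every intermediate state. If you only need the final answer, the compiled graph can also be invoked in one shot; a minimal sketch (the printed text should match the last Ai Message above):

python
final_state = graph.invoke({"messages": [("user", "what is the weather in sf")]})
print(final_state["messages"][-1].content)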

Reference: https://langchain-ai.github.io/langgraph/how-tos/react-agent-from-scratch/
