Building AI Agents with LangGraph (6): ReAct Agent Examples

Example 2: Querying product information

python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from dotenv import load_dotenv
import os

load_dotenv()
llm = ChatOpenAI(model="qwen-max",
                 base_url=os.getenv("BASE_URL"),
                 api_key=os.getenv("OPENAI_API_KEY"),
                 streaming=True)


# tools
def product_info(product_name: str) -> str:
    """Fetch product infomation"""
    product_catalog = {
        'iPhone 20': 'The latest iPhone features an A15 chip and improved camera.',
        'MacBook': 'The new MacBook has an M2 chip and a 14-inch Retina display.',
    }
    print(f"调用工具 {product_name}")
    return product_catalog.get(product_name, 'Sorry, product not found.')


checkpointer = MemorySaver()

tools = [product_info]
# ReAct agent
graph = create_react_agent(model=llm, checkpointer=checkpointer, tools=tools, debug=False)
# Each thread_id keeps its own conversation history in the checkpointer
config = {"configurable": {"thread_id": "thread_1"}}
inputs = {"messages": [("user", "介绍下新的iPhone 20")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()

inputs = {"messages": [("user", "进一步介绍下iPhone 20")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()

The output is:

text
调用工具 iPhone 20
================================ Human Message =================================

介绍下新的iPhone 20
================================== Ai Message ==================================
Tool Calls:
  product_info (call_08b215e1a2c945108de654)
 Call ID: call_08b215e1a2c945108de654
  Args:
    product_name: iPhone 20
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone 20配备了A15芯片和改进的摄像头。请注意,这可能是指当前最新的iPhone型号,因为直到我的知识更新为止(2021年),并没有一款叫做iPhone 20的手机。如果您想了解的是另一个特定的产品,请提供更多信息。
调用工具 iPhone 20
================================ Human Message =================================

介绍下新的iPhone 20
================================== Ai Message ==================================
Tool Calls:
  product_info (call_08b215e1a2c945108de654)
 Call ID: call_08b215e1a2c945108de654
  Args:
    product_name: iPhone 20
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone 20配备了A15芯片和改进的摄像头。请注意,这可能是指当前最新的iPhone型号,因为直到我的知识更新为止(2021年),并没有一款叫做iPhone 20的手机。如果您想了解的是另一个特定的产品,请提供更多信息。
================================ Human Message =================================

进一步介绍下iPhone 20
================================== Ai Message ==================================
Tool Calls:
  product_info (call_d456afc25aac46e98e869a)
 Call ID: call_d456afc25aac46e98e869a
  Args:
    product_name: iPhone 20
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

再次确认,最新的iPhone配备了A15芯片以及升级的摄像头系统。不过,实际上并没有一款被正式命名为"iPhone 20"的产品。可能您指的是最新发布的iPhone型号。如果您需要更详细的信息,比如具体型号(如iPhone 13, iPhone 14等)、特色功能、设计变化或其他细节,请告知我,这样我可以为您提供更加准确的信息。
  
如果有其他特定的方面或者功能您感兴趣的话,也请告诉我!
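The repeated transcript above is expected behavior: because the graph was compiled with a MemorySaver checkpointer, the second invoke on the same thread_id replays the stored history before appending the new turn. A minimal pure-Python sketch of that thread-keyed storage (ToyCheckpointer is an illustrative stand-in, not the LangGraph API):

```python
from collections import defaultdict

class ToyCheckpointer:
    """Illustrative stand-in for MemorySaver: one full message history per thread."""
    def __init__(self):
        self._threads = defaultdict(list)

    def invoke(self, new_messages, thread_id):
        history = self._threads[thread_id]
        history.extend(new_messages)  # the agent would run on the full history here
        return list(history)

saver = ToyCheckpointer()
saver.invoke([("user", "介绍下新的iPhone 20")], thread_id="thread_1")
turn2 = saver.invoke([("user", "进一步介绍下iPhone 20")], thread_id="thread_1")
print(len(turn2))  # 2: the second call sees both turns
print(len(saver.invoke([("user", "hi")], thread_id="thread_2")))  # 1: threads are isolated
```

This is why the first exchange prints twice: the second `graph.invoke` returns the accumulated state of `thread_1`, not just the new turn.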

Example 3: Product lookup with a stock check

python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from dotenv import load_dotenv
import os

load_dotenv()
llm = ChatOpenAI(model="qwen-max",
                 base_url=os.getenv("BASE_URL"),
                 api_key=os.getenv("OPENAI_API_KEY"),
                 streaming=True)


# tools
def product_info(product_name: str) -> str:
    """Fetch product infomation"""
    product_catalog = {
        'iPhone': 'The latest iPhone features an A15 chip and improved camera.',
        'MacBook': 'The new MacBook has an M2 chip and a 14-inch Retina display.',
    }
    print(f"调用工具product_info {product_name}")
    return product_catalog.get(product_name, 'Sorry, product not found.')


def check_stock(product_name: str) -> str:
    """Check product stock availability."""
    stock_data = {
        'iPhone': 'In stock.',
        'MacBook': 'Out of stock.',
    }
    print(f"调用工具check_stock {product_name}")
    return stock_data.get(product_name, 'Stock information unavailable.')


checkpointer = MemorySaver()

tools = [product_info, check_stock]
# ReAct agent
graph = create_react_agent(model=llm, checkpointer=checkpointer, tools=tools, debug=False)
#
config = {"configurable": {"thread_id": "thread_1"}}
inputs = {"messages": [("user", "我是小明,介绍下新的iPhone")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()

inputs = {"messages": [("user", "iPhone 有库存吗")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()

inputs = {"messages": [("user", "介绍下MacBook,另外还有库存吗")]}
messages = graph.invoke(inputs, config)
for message in messages["messages"]:
    message.pretty_print()

The output is:

text
调用工具product_info iPhone
================================ Human Message =================================

我是小明,介绍下新的iPhone
================================== Ai Message ==================================
Tool Calls:
  product_info (call_7fe4fbbe6dc6426db484df)
 Call ID: call_7fe4fbbe6dc6426db484df
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone配备了A15芯片和改进的相机。
调用工具check_stock iPhone
================================ Human Message =================================

我是小明,介绍下新的iPhone
================================== Ai Message ==================================
Tool Calls:
  product_info (call_7fe4fbbe6dc6426db484df)
 Call ID: call_7fe4fbbe6dc6426db484df
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone配备了A15芯片和改进的相机。
================================ Human Message =================================

iPhone 有库存吗
================================== Ai Message ==================================
Tool Calls:
  check_stock (call_1b24d717931a4b42807943)
 Call ID: call_1b24d717931a4b42807943
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: check_stock

In stock.
================================== Ai Message ==================================

iPhone目前有库存。


调用工具product_info MacBook
调用工具check_stock MacBook
================================ Human Message =================================

我是小明,介绍下新的iPhone
================================== Ai Message ==================================
Tool Calls:
  product_info (call_7fe4fbbe6dc6426db484df)
 Call ID: call_7fe4fbbe6dc6426db484df
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: product_info

The latest iPhone features an A15 chip and improved camera.
================================== Ai Message ==================================

最新的iPhone配备了A15芯片和改进的相机。
================================ Human Message =================================

iPhone 有库存吗
================================== Ai Message ==================================
Tool Calls:
  check_stock (call_1b24d717931a4b42807943)
 Call ID: call_1b24d717931a4b42807943
  Args:
    product_name: iPhone
================================= Tool Message =================================
Name: check_stock

In stock.
================================== Ai Message ==================================

iPhone目前有库存。
================================ Human Message =================================

介绍下MacBook,另外还有库存吗
================================== Ai Message ==================================
Tool Calls:
  product_info (call_a1a34117064840f0bda5c9)
 Call ID: call_a1a34117064840f0bda5c9
  Args:
    product_name: MacBook
================================= Tool Message =================================
Name: product_info

The new MacBook has an M2 chip and a 14-inch Retina display.
================================== Ai Message ==================================

新款MacBook配备了M2芯片和14英寸Retina显示屏。

让我查一下库存情况。
Tool Calls:
  check_stock (call_5cd17f2a9a5041caba454d)
 Call ID: call_5cd17f2a9a5041caba454d
  Args:
    product_name: MacBook
================================= Tool Message =================================
Name: check_stock

Out of stock.
================================== Ai Message ==================================

目前MacBook没有库存。

Example 4: Multi-step reasoning with dynamic actions

  • Use one parent graph plus several subgraphs to complete a complex task
python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START
from dotenv import load_dotenv
import os
from typing import TypedDict

load_dotenv()
llm = ChatOpenAI(model="qwen-max",
                 base_url=os.getenv("BASE_URL"),
                 api_key=os.getenv("OPENAI_API_KEY"),
                 streaming=True)


# Shared state passed between the parent graph and its subgraphs
class ReActAgentState(TypedDict):
    message: str
    action: str
    sub_action: str


# Reasoning node: map keywords in the user message to an action
def reasoning_node(state: ReActAgentState):
    query = state['message']
    if 'weather' in query:
        return {'action': 'fetch_weather'}
    elif 'news' in query:
        return {'action': 'fetch_news'}
    elif 'recommend' in query:
        return {'action': 'recommendation', 'sub_action': 'book'}
    else:
        return {'action': 'unknown'}


# tools
def weather_subgraph_node(state: ReActAgentState):
    # Simulating a weather tool call
    return {'message': 'The weather today is sunny.'}


def news_subgraph_node(state: ReActAgentState):
    # Simulating a news tool call
    return {'message': 'Here are the latest news headlines.'}


def recommendation_subgraph_node(state: ReActAgentState):
    if state.get('sub_action') == 'book':
        return {'message': "I recommend reading 'The Pragmatic Programmer'."}
    else:
        return {'message': 'I have no other recommendations at the moment.'}


# Weather subgraph
weather_subgraph_builder = StateGraph(ReActAgentState)
weather_subgraph_builder.add_node("weather_action", weather_subgraph_node)
weather_subgraph_builder.set_entry_point("weather_action")
weather_subgraph = weather_subgraph_builder.compile()

# News subgraph
news_subgraph_builder = StateGraph(ReActAgentState)
news_subgraph_builder.add_node("news_action", news_subgraph_node)
news_subgraph_builder.set_entry_point("news_action")
news_subgraph = news_subgraph_builder.compile()

# Recommendation subgraph
recommendation_subgraph_builder = StateGraph(ReActAgentState)
recommendation_subgraph_builder.add_node('recommendation_action', recommendation_subgraph_node)
recommendation_subgraph_builder.set_entry_point('recommendation_action')
recommendation_subgraph = recommendation_subgraph_builder.compile()


# Dispatch: return the compiled subgraph to run for the chosen action
def reasoning_state_manager(state: ReActAgentState):
    if state['action'] == "fetch_weather":
        return weather_subgraph
    elif state['action'] == "fetch_news":
        return news_subgraph
    elif state['action'] == "recommendation":
        return recommendation_subgraph
    else:
        return None


# Parent graph: reasoning -> action dispatch
parent_builder = StateGraph(ReActAgentState)
parent_builder.add_node("reasoning", reasoning_node)
parent_builder.add_node("action_dispatch", reasoning_state_manager)
parent_builder.add_edge(START, "reasoning")
parent_builder.add_edge("reasoning", "action_dispatch")
#
react_agent_graph = parent_builder.compile()
#
inputs_weather = {'message': 'What is the weather today?'}
result_weather = react_agent_graph.invoke(inputs_weather)
print(result_weather['message'])
#
inputs_news = {'message': 'Give me the latest news.'}
result_news = react_agent_graph.invoke(inputs_news)
print(result_news['message'])
#
inputs_recommendation = {'message': 'Can you recommend a good book?'}
result_recommendation = react_agent_graph.invoke(inputs_recommendation)
print(result_recommendation['message'])

The output is:

text
The weather today is sunny.
Here are the latest news headlines.
I recommend reading 'The Pragmatic Programmer'.
  • The parent graph's reasoning node processes the user input and decides which action to take
  • The parent graph's dispatch node routes to a different subgraph based on that state
  • Each subgraph handles its own task
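Stripped of LangGraph, the dispatch pattern described above reduces to: classify the message, look up a handler, run it. A minimal sketch with plain functions standing in for the compiled subgraphs (all names here are illustrative):

```python
def reasoning(message: str) -> str:
    # Mirror of reasoning_node: keyword match -> action name
    if "weather" in message:
        return "fetch_weather"
    if "news" in message:
        return "fetch_news"
    return "unknown"

# Plain functions standing in for compiled subgraphs
SUBGRAPHS = {
    "fetch_weather": lambda state: {"message": "The weather today is sunny."},
    "fetch_news": lambda state: {"message": "Here are the latest news headlines."},
}

def dispatch(state: dict) -> dict:
    action = reasoning(state["message"])
    subgraph = SUBGRAPHS.get(action)
    return subgraph(state) if subgraph else state  # unknown actions pass through

print(dispatch({"message": "What is the weather today?"})["message"])
# The weather today is sunny.
```

The LangGraph version adds state typing, checkpointing hooks, and graph visualization on top of this same lookup-and-invoke core.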

Example 5: An advanced ReAct agent with multiple subgraphs and contextual memory

python
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from dotenv import load_dotenv
import os
from typing import TypedDict

load_dotenv()
llm = ChatOpenAI(model="qwen-max",
                 base_url=os.getenv("BASE_URL"),
                 api_key=os.getenv("OPENAI_API_KEY"),
                 streaming=True)


# Shared state, extended with a memory dict of past interactions
class ReActAgentState(TypedDict):
    message: str
    action: str
    sub_action: str
    memory: dict  # Memory of past interactions


# Reasoning node: consult memory when choosing the action
def reasoning_node(state: ReActAgentState):
    query = state['message']
    past_interactions = state.get('memory', {})
    if 'weather' in query:
        return {'action': 'fetch_weather'}
    elif 'news' in query:
        return {'action': 'fetch_news'}
    elif 'recommend' in query:
        if past_interactions.get('favorite_genre') == 'science':
            return {'action': 'recommendation', 'sub_action': 'science_book'}
        else:
            return {'action': 'recommendation', 'sub_action': 'general_book'}
    else:
        return {'action': 'unknown'}


# tools
def weather_subgraph_node(state: ReActAgentState):
    # Simulating a weather tool call
    return {'message': 'The weather today is sunny.'}


def news_subgraph_node(state: ReActAgentState):
    # Simulating a news tool call
    return {'message': 'Here are the latest news headlines.'}


def general_recommendation_node(state: ReActAgentState):
    return {'message': "I recommend reading 'The Pragmatic Programmer'."}


def science_recommendation_node(state: ReActAgentState):
    return {'message': "Since you like science, I recommend 'A Brief History of Time' by Stephen Hawking."}


def update_memory_node(state: ReActAgentState):
    if "recommend" in state["message"]:
        state['memory']['favorite_genre'] = 'science'
    return state


# Weather subgraph
weather_subgraph_builder = StateGraph(ReActAgentState)
weather_subgraph_builder.add_node("weather_action", weather_subgraph_node)
weather_subgraph_builder.set_entry_point("weather_action")
weather_subgraph = weather_subgraph_builder.compile()

# News subgraph
news_subgraph_builder = StateGraph(ReActAgentState)
news_subgraph_builder.add_node("news_action", news_subgraph_node)
news_subgraph_builder.set_entry_point("news_action")
news_subgraph = news_subgraph_builder.compile()

# Recommendation subgraphs
recommendation_subgraph_builder = StateGraph(ReActAgentState)
recommendation_subgraph_builder.add_node('general_recommendation_node', general_recommendation_node)
recommendation_subgraph_builder.set_entry_point('general_recommendation_node')
general_recommendation_subgraph = recommendation_subgraph_builder.compile()
#
science_recommendation_builder = StateGraph(ReActAgentState)
science_recommendation_builder.add_node('science_recommendation_action', science_recommendation_node)
science_recommendation_builder.set_entry_point('science_recommendation_action')
science_recommendation_subgraph = science_recommendation_builder.compile()

# Memory-update subgraph
memory_update_builder = StateGraph(ReActAgentState)
memory_update_builder.add_node('update_memory_action', update_memory_node)
memory_update_builder.set_entry_point('update_memory_action')
memory_update_subgraph = memory_update_builder.compile()


# Dispatch: pick the subgraph, using sub_action to choose the recommender
def reasoning_state_manager(state: ReActAgentState):
    if state['action'] == "fetch_weather":
        return weather_subgraph
    elif state['action'] == "fetch_news":
        return news_subgraph
    elif state['action'] == "recommendation":
        if state['sub_action'] == 'science_book':
            return science_recommendation_subgraph
        else:
            return general_recommendation_subgraph
    else:
        return None


# Parent graph: reasoning -> action dispatch -> memory update
parent_builder = StateGraph(ReActAgentState)
parent_builder.add_node("reasoning", reasoning_node)
parent_builder.add_node("action_dispatch", reasoning_state_manager)
parent_builder.add_node('update_memory', memory_update_subgraph)
parent_builder.add_edge(START, "reasoning")
parent_builder.add_edge("reasoning", "action_dispatch")
parent_builder.add_edge('action_dispatch', 'update_memory')

react_agent_graph = parent_builder.compile()
checkpointer = MemorySaver()
# Note: the checkpointer is not wired in here; pass compile(checkpointer=checkpointer)
# and a thread_id config to actually persist state across invocations.

#
inputs_weather = {'message': 'What is the weather today?', 'memory': {}}
result_weather = react_agent_graph.invoke(inputs_weather)
print(result_weather['message'])
#
inputs_recommendation_first = {'message': 'Can you recommend a good book?', 'memory': {}}
result_recommendation_first = react_agent_graph.invoke(inputs_recommendation_first)
print(result_recommendation_first['message'])
#
inputs_recommendation_second = {'message': 'Can you recommend another book?',
                                'memory': {'favorite_genre': 'science'}}
result_recommendation_second = react_agent_graph.invoke(inputs_recommendation_second)
print(result_recommendation_second['message'])

The output is:

text
The weather today is sunny.
I recommend reading 'The Pragmatic Programmer'.
Since you like science, I recommend 'A Brief History of Time' by Stephen Hawking.

Example 6: A dynamic-pricing agent

python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from dotenv import load_dotenv
import os

load_dotenv()
model = ChatOpenAI(model="qwen-max",
                   base_url=os.getenv("BASE_URL"),
                   api_key=os.getenv("OPENAI_API_KEY"),
                   streaming=True)


# Mock APIs
def get_demand_data(product_id: str) -> dict:
    """Mock demand API to get demand data for a product."""
    return {"product_id": product_id, "demand_level": "high"}


def get_competitor_pricing(product_id: str) -> dict:
    """Mock competitor pricing API."""
    return {'product_id': product_id, 'competitor_price': 95.0}


tools = [get_demand_data, get_competitor_pricing]

graph = create_react_agent(model, tools=tools)

initial_messages = [
    ('system', '你是一个人工智能代理,能够根据市场需求和竞争对手的价格动态调整产品价格。'),
    ('user', '产品ID"12345"的价格应该定多少?')
]

inputs = {"messages": initial_messages}
for state in graph.stream(inputs, stream_mode="values"):
    message = state["messages"][-1]
    if isinstance(message, tuple):
        print(message)
    else:
        message.pretty_print()

The output is:

text
================================ Human Message =================================

产品ID"12345"的价格应该定多少?
================================== Ai Message ==================================
Tool Calls:
  get_competitor_pricing (call_8212a1aad1044c488a9a53)
 Call ID: call_8212a1aad1044c488a9a53
  Args:
    product_id: 12345
================================= Tool Message =================================
Name: get_competitor_pricing

{"product_id": "12345", "competitor_price": 95.0}
================================== Ai Message ==================================
Tool Calls:
  get_demand_data (call_2450404f79774e85b1904a)
 Call ID: call_2450404f79774e85b1904a
  Args:
    product_id: 12345
================================= Tool Message =================================
Name: get_demand_data

{"product_id": "12345", "demand_level": "high"}
================================== Ai Message ==================================

根据市场情况,对于产品ID为"12345"的商品,竞争对手的定价是95.0元。同时,当前市场上该产品的需求水平较高。

鉴于高需求以及竞争对手的价格,我们可以考虑将价格设定得略高于或等于95.0元以最大化利润,同时保持竞争力。确切的价格还需要考虑成本、品牌定位等因素。如果我们决定跟随竞争者的价格,那么可以将价格也设为95.0元;如果想要利用高需求来增加收益,并且确定顾客愿意为此支付更多,我们则可以适度提价。您希望如何继续?
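The agent's verbal reasoning above can be pinned down as a deterministic rule: anchor on the competitor price, then apply a demand-based adjustment. A sketch of such a rule (the 5% markup/markdown figures are assumptions for illustration; the example itself leaves the final number to the LLM):

```python
def suggest_price(competitor_price: float, demand_level: str) -> float:
    # Assumed adjustments: raise the price under high demand, undercut under low demand.
    markup = {"high": 1.05, "medium": 1.00, "low": 0.95}.get(demand_level, 1.00)
    return round(competitor_price * markup, 2)

print(suggest_price(95.0, "high"))  # 99.75
print(suggest_price(95.0, "low"))   # 90.25
```

Such a rule could also be exposed to the agent as a third tool, so the LLM gathers the inputs and the pricing math stays deterministic.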

Example 7: A sentiment-analysis agent using an LLM and a custom sentiment tool

python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langchain_core.messages import BaseMessage, ToolMessage, SystemMessage
from langchain_core.tools import tool
from langchain_core.runnables import RunnableConfig
from dotenv import load_dotenv
from typing import Annotated, Sequence, TypedDict
# pip install textblob
from textblob import TextBlob
import json
import os

load_dotenv()
model = ChatOpenAI(model="qwen-max",
                   base_url=os.getenv("BASE_URL"),
                   api_key=os.getenv("OPENAI_API_KEY"),
                   streaming=True)


class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]


@tool
def analyze_sentiment(feedback: str) -> str:
    """Analyze customer feedback sentiment with custom logic."""
    # TextBlob polarity lies in [-1.0, 1.0]
    polarity = TextBlob(feedback).sentiment.polarity
    if polarity > 0.5:
        return "positive"
    elif polarity > 0.0:
        return "neutral"
    else:
        return "negative"


@tool
def respond_based_on_sentiment(sentiment: str) -> str:
    """Respond to the customer based on the analyzed sentiment."""
    if sentiment == 'positive':
        return '感谢您的积极反馈!'
    elif sentiment == 'neutral':
        return '我们感谢您的反馈。'
    else:
        return "我们很遗憾听到你不满意。我们如何提供帮助?"


tools = [analyze_sentiment, respond_based_on_sentiment]
llm = model.bind_tools(tools)
tools_by_name = {t.name: t for t in tools}


def tool_node(state: AgentState):
    outputs = []
    for tool_call in state['messages'][-1].tool_calls:
        tool_result = tools_by_name[tool_call['name']].invoke(tool_call['args'])
        outputs.append(
            ToolMessage(content=json.dumps(tool_result),
                        name=tool_call['name'],
                        tool_call_id=tool_call['id']))
    return {'messages': outputs}


def call_model(state: AgentState, config: RunnableConfig):
    system_prompt = SystemMessage(content='你是一位乐于助人的助手,负责回复客户反馈。')
    response = llm.invoke([system_prompt] + state['messages'], config)
    return {'messages': [response]}


def should_continue(state: AgentState):
    last_message = state['messages'][-1]
    # If there is no tool call, then we finish
    if not last_message.tool_calls:
        return 'end'
    else:
        return 'continue'


#
workflow = StateGraph(AgentState)
workflow.add_node('agent', call_model)
workflow.add_node('tools', tool_node)
workflow.set_entry_point('agent')
workflow.add_conditional_edges(
    'agent',
    should_continue,
    {'continue': 'tools', 'end': END, },
)
workflow.add_edge('tools', 'agent')
graph = workflow.compile()


def print_stream(stream):
    for s in stream:
        message = s['messages'][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()


initial_state = {'messages': [('user', '产品很好,但交货很差。')]}
print_stream(graph.stream(initial_state, stream_mode='values'))

The output is:

text
================================== Ai Message ==================================

我们很遗憾听到你不满意。我们如何提供帮助?
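The tool chain above hinges on thresholding TextBlob's polarity score and mapping the label to a canned reply. A dependency-free sketch of that banding idea (the cutoffs here are illustrative assumptions, not TextBlob's; polarity lies in [-1.0, 1.0]):

```python
def classify(polarity: float) -> str:
    # Band the score: clearly positive, roughly neutral, otherwise negative.
    if polarity > 0.1:
        return "positive"
    if polarity >= -0.1:
        return "neutral"
    return "negative"

# Same canned replies as respond_based_on_sentiment above
RESPONSES = {
    "positive": "感谢您的积极反馈!",
    "neutral": "我们感谢您的反馈。",
    "negative": "我们很遗憾听到你不满意。我们如何提供帮助?",
}

print(RESPONSES[classify(-0.6)])
```

Note that TextBlob's polarity model is trained on English; for Chinese feedback like the example input, a dedicated Chinese sentiment model would score more reliably.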

Example 8: A personalized product-recommendation agent using an LLM and memory

python
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
from langchain_core.messages import BaseMessage, ToolMessage, SystemMessage
from langchain_core.tools import tool
from langchain_core.runnables import RunnableConfig
from dotenv import load_dotenv
from typing import Annotated, Sequence, TypedDict
import json
import os

load_dotenv()
model = ChatOpenAI(model="qwen-max",
                   base_url=os.getenv("BASE_URL"),
                   api_key=os.getenv("OPENAI_API_KEY"),
                   streaming=True)


class RecommendationState(TypedDict):
    user_id: str  # user identifier
    preference: str  # the user's current preference (e.g. genre or category)
    reasoning: str  # the LLM's reasoning trace
    recommendation: str  # final product recommendation
    memory: dict  # per-user memory of stored preferences
    messages: Annotated[Sequence[BaseMessage], add_messages]


@tool
def recommend_product(preference: str) -> str:
    """Recommend a product based on the user's preferences."""
    product_db = {
        'science': "I recommend 'A Brief History of Time' by Stephen Hawking.",
        'technology': "I recommend 'The Innovators' by Walter Isaacson.",
        'fiction': "I recommend 'The Alchemist' by Paulo Coelho."
    }
    return product_db.get(preference, 'I recommend exploring our latest products!')


tools = [recommend_product]
tools_by_name = {t.name: t for t in tools}

llm = model.bind_tools(tools)


def update_memory(state: RecommendationState):
    state["memory"][state['user_id']] = state["preference"]
    return state


def tool_node(state: RecommendationState):
    outputs = []
    for tool_call in state["messages"][-1].tool_calls:
        tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
        outputs.append(
            ToolMessage(
                content=json.dumps(tool_result),
                name=tool_call["name"],
                tool_call_id=tool_call["id"],
            )
        )
    return {"messages": outputs}


def call_model(state: RecommendationState, config: RunnableConfig):
    system_prompt = SystemMessage(
        content="You are a helpful assistant for recommending a product based on the user's preference."
    )
    response = llm.invoke([system_prompt] + state["messages"] + [("user", state["preference"])], config)
    return {"messages": [response]}


def should_continue(state: RecommendationState):
    last_message = state["messages"][-1]
    # If there is no tool call, then we finish
    if not last_message.tool_calls:
        return "end"
    else:
        return "continue"


workflow = StateGraph(RecommendationState)
workflow.add_node('agent', call_model)
workflow.add_node('tools', tool_node)
workflow.add_node('update_memory', update_memory)
workflow.set_entry_point('agent')
workflow.add_conditional_edges(
    'agent',
    should_continue,
    {
        'continue': 'tools',
        'end': END,
    })
workflow.add_edge('tools', 'update_memory')
workflow.add_edge('update_memory', 'agent')
graph = workflow.compile()

#
initial_state = {
    "messages": [("user", "I'm looking for a book.")],
    "user_id": "user1", "preference": "science",
    "memory": {},
    "reasoning": ""
}

result = graph.invoke(initial_state)
#
print(f"Reasoning: {result['reasoning']}")
print(f"Product Recommendation: {result['messages'][-1].content}")
print(f"Updated Memory: {result['memory']}")


# Helper function to print the conversation
def print_stream(stream):
    for s in stream:
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()


# Run the agent
print_stream(graph.stream(initial_state, stream_mode="values"))

The output is:

text
Reasoning: 
Product Recommendation: I recommend 'A Brief History of Time' by Stephen Hawking. It's a great read for anyone interested in science, particularly in the areas of cosmology and theoretical physics.
Updated Memory: {'user1': 'science'}
================================ Human Message =================================

I'm looking for a book.
================================== Ai Message ==================================
Tool Calls:
  recommend_product (call_6b6a242cc9e24557a89db8)
 Call ID: call_6b6a242cc9e24557a89db8
  Args:
    preference: science
================================= Tool Message =================================
Name: recommend_product

"I recommend 'A Brief History of Time' by Stephen Hawking."
================================= Tool Message =================================
Name: recommend_product

"I recommend 'A Brief History of Time' by Stephen Hawking."
================================== Ai Message ==================================

I recommend 'A Brief History of Time' by Stephen Hawking. It's a great book that explores complex scientific concepts in an accessible way. Would you like more recommendations or information on this book?
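The loop above boils down to: read the user's stored preference, look up a product, then write the preference back to memory. A compact, LLM-free sketch of that memory round trip (function names are illustrative):

```python
# Same catalog as recommend_product's product_db above
PRODUCT_DB = {
    "science": "I recommend 'A Brief History of Time' by Stephen Hawking.",
    "technology": "I recommend 'The Innovators' by Walter Isaacson.",
    "fiction": "I recommend 'The Alchemist' by Paulo Coelho.",
}

def recommend_and_remember(memory: dict, user_id: str, preference: str) -> str:
    recommendation = PRODUCT_DB.get(preference, "I recommend exploring our latest products!")
    memory[user_id] = preference  # what the update_memory node does in the graph
    return recommendation

memory = {}
print(recommend_and_remember(memory, "user1", "science"))
print(memory)  # {'user1': 'science'}
```

On a later turn, the stored preference can seed the lookup directly, which is exactly the behavior the `Updated Memory: {'user1': 'science'}` line in the output demonstrates.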