Hello everyone! The field of agent development is moving fast, and LangChain is evolving along with it. Traditional LangChain agents, especially those built on AgentExecutor, have served developers reliably, but LangGraph now offers a more powerful and flexible alternative.
This article walks through migrating an agent to LangGraph so it can take full advantage of LangGraph's latest capabilities.
1. Traditional LangChain vs. LangGraph
Traditional LangChain agents are built on the AgentExecutor class, which provides a structured approach to agent development on the LangChain platform along with comprehensive configuration options for agent behavior.
LangGraph represents a new era of LangChain agent development. It gives developers the power to build highly customized and controllable agents, with much finer-grained control than its predecessor.
2. Why Migrate to LangGraph
Migrating to LangGraph unlocks several benefits:
- Greater control: LangGraph gives you finer control over the agent's decision-making process, so you can tailor its responses and actions more precisely.
- Architectural flexibility: LangGraph's architecture is more flexible, letting you design an agent that fits your specific requirements (see the sketch after this list).
- Future-proofing: LangChain is actively investing in LangGraph, which signals the future direction of agent development on the platform. Migrating now keeps your agents at the forefront of the technology.
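To make the control and flexibility point concrete, here is a minimal sketch of wiring a LangGraph graph by hand instead of relying on AgentExecutor's fixed loop. The AgentState schema and the call_model placeholder node are illustrative assumptions, not part of the migration example that follows.
```python
# Minimal sketch: a hand-built LangGraph graph with a placeholder model node.
from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage, AnyMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    # add_messages appends new messages to the list instead of overwriting it
    messages: Annotated[list[AnyMessage], add_messages]


def call_model(state: AgentState) -> dict:
    # Placeholder node; a real agent would call the LLM (and tools) here.
    return {"messages": [AIMessage(content="hello from the model node")]}


builder = StateGraph(AgentState)
builder.add_node("model", call_model)
builder.add_edge(START, "model")
builder.add_edge("model", END)
graph = builder.compile()
```
Because you define the nodes, edges, and state yourself, adding a human-approval step, a custom routing condition, or extra state fields is just another node or edge rather than a workaround inside AgentExecutor.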
3. Code Walkthrough
Below are the code-level changes needed to migrate a traditional LangChain agent to LangGraph.
Step I: Install the libraries
```bash
pip install -U langgraph langchain langchain-openai
```
Step II: Basic agent usage with AgentExecutor
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.memory import ChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")
memory = ChatMessageHistory(session_id="test-session")

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        # First put the history
        ("placeholder", "{chat_history}"),
        # Then the new input
        ("human", "{input}"),
        # Finally the scratchpad
        ("placeholder", "{agent_scratchpad}"),
    ]
)


@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

agent_with_chat_history = RunnableWithMessageHistory(
    agent_executor,
    # A session ID is required in most real-world scenarios; it isn't actually
    # used here because we rely on a single simple in-memory ChatMessageHistory.
    lambda session_id: memory,
    input_messages_key="input",
    history_messages_key="chat_history",
)

config = {"configurable": {"session_id": "test-session"}}
print(
    agent_with_chat_history.invoke(
        {"input": "Hi, I'm polly! What's the output of magic_function of 3?"}, config
    )["output"]
)
print("---")
print(agent_with_chat_history.invoke({"input": "Remember my name?"}, config)["output"])
print("---")
print(
    agent_with_chat_history.invoke({"input": "what was that output again?"}, config)[
        "output"
    ]
)
```
Output:
```
Hi Polly! The output of the magic function for the input 3 is 5.
---
Yes, I remember your name, Polly! How can I assist you further?
---
The output of the magic function for the input 3 is 5.
```
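The lambda above ignores the session ID because a single in-memory history is shared. As a sketch of the more realistic pattern (the dict-backed helper below is an assumption for illustration, not part of the original example), a real application would typically map each session ID to its own history:
```python
# Sketch only: per-session histories keyed by session_id.
from langchain_core.chat_history import InMemoryChatMessageHistory

histories: dict[str, InMemoryChatMessageHistory] = {}


def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    # Create a fresh history the first time a session_id is seen.
    if session_id not in histories:
        histories[session_id] = InMemoryChatMessageHistory()
    return histories[session_id]


# Pass get_session_history instead of the lambda when constructing
# RunnableWithMessageHistory.
```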
Step III: Agent state management with LangGraph
```python
from langchain_core.messages import SystemMessage
from langgraph.checkpoint.memory import MemorySaver  # an in-memory checkpointer
from langgraph.prebuilt import create_react_agent

system_message = "You are a helpful assistant."
# This could also be a SystemMessage object:
# system_message = SystemMessage(content="You are a helpful assistant. Respond only in Spanish.")

memory = MemorySaver()
app = create_react_agent(
    # Note: newer langgraph releases may rename messages_modifier
    # (e.g. to state_modifier or prompt); check your installed version.
    model, tools, messages_modifier=system_message, checkpointer=memory
)

config = {"configurable": {"thread_id": "test-thread"}}
print(
    app.invoke(
        {
            "messages": [
                ("user", "Hi, I'm polly! What's the output of magic_function of 3?")
            ]
        },
        config,
    )["messages"][-1].content
)
print("---")
print(
    app.invoke({"messages": [("user", "Remember my name?")]}, config)["messages"][
        -1
    ].content
)
print("---")
print(
    app.invoke({"messages": [("user", "what was that output again?")]}, config)[
        "messages"
    ][-1].content
)
```
Output:
```
Hi Polly! The output of the magic_function for the input 3 is 5.
---
Yes, your name is Polly!
---
The output of the magic_function for the input 3 was 5.
```
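Because the MemorySaver checkpointer persists the conversation per thread_id, you can inspect or resume it later. A small sketch, assuming the `app` and `config` objects from the step above:
```python
# Sketch: read back the persisted state for this thread from the checkpointer.
snapshot = app.get_state(config)
for message in snapshot.values["messages"]:
    # Each entry is a HumanMessage, AIMessage, or ToolMessage from the run above.
    print(type(message).__name__, ":", message.content)

# Starting a new thread_id gives the agent a fresh, empty history.
new_config = {"configurable": {"thread_id": "another-thread"}}
```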
4. Summary
Migrating agents to LangGraph unlocks deeper capabilities and greater flexibility. Following the steps above and understanding how the system message is handled will help you make a smooth transition and get the best performance out of your agents. For a more comprehensive migration guide and advanced techniques, consult the official LangChain documentation.