When building complex AI agent systems, have you ever felt like you were operating inside a black box? Deep Agents streaming opens a window into that box, letting you watch every action of the main agent and its subagents in real time. This article walks through the core streaming features of Deep Agents and provides runnable code examples.
## What Is Deep Agents Streaming?
Deep Agents streaming is a feature of the LangChain Deep Agents framework that lets you receive events in real time as an agent executes, including:

- Subagent progress tracking
- LLM token streams
- Tool call details
- Custom update events

With these real-time streams you can build more responsive user interfaces, deliver a better user experience, or analyze executions in detail.
## Core Features
### 1. Enabling Subgraph Streaming

To receive subagent events, pass `subgraphs=True` when streaming:
```python
from deepagents import create_deep_agent

agent = create_deep_agent(
    model="google_genai:gemini-3.1-pro-preview",
    system_prompt="You are a helpful research assistant",
    subagents=[
        {
            "name": "researcher",
            "description": "Researches a topic in depth",
            "system_prompt": "You are a thorough researcher.",
        },
    ],
)

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Research quantum computing advances"}]},
    stream_mode="updates",
    subgraphs=True,  # enable subgraph streaming
    version="v2",    # use the v2 format
):
    if chunk["type"] == "updates":
        if chunk["ns"]:
            # Subagent event -- the namespace identifies the source
            print(f"[subagent: {chunk['ns']}]")
        else:
            # Main agent event
            print("[main agent]")
        print(chunk["data"])
```
### 2. Namespaces

Namespaces are the key to identifying where an event came from. Every stream event carries a namespace: a path of node names and task IDs that mirrors the agent hierarchy:

- `()` (empty): the main agent
- `("tools:abc123",)`: a subagent spawned by the main agent's `task` tool call `abc123`
- `("tools:abc123", "model_request:def456")`: the model-request node inside that subagent
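To make this concrete, here is a small standalone sketch of a labeling helper (`describe_namespace` is a hypothetical name for illustration, not part of the Deep Agents API) that turns a v2 namespace tuple into a human-readable source label:

```python
def describe_namespace(ns: tuple[str, ...]) -> str:
    """Return a human-readable label for a stream event's namespace."""
    if not ns:
        return "main agent"
    # Segments look like "tools:<tool_call_id>" or "model_request:<task_id>".
    for segment in ns:
        kind, _, task_id = segment.partition(":")
        if kind == "tools":
            return f"subagent (task call {task_id})"
    return f"nested node {ns[-1]}"

print(describe_namespace(()))                     # main agent
print(describe_namespace(("tools:abc123",)))      # subagent (task call abc123)
print(describe_namespace(("tools:abc123", "model_request:def456")))
```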
Use the namespace to route events to the right UI component:
```python
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Plan my vacation"}]},
    stream_mode="updates",
    subgraphs=True,
    version="v2",
):
    if chunk["type"] == "updates":
        # Check whether the event came from a subagent
        is_subagent = any(
            segment.startswith("tools:") for segment in chunk["ns"]
        )
        if is_subagent:
            # Extract the tool call ID from the namespace
            tool_call_id = next(
                s.split(":")[1] for s in chunk["ns"] if s.startswith("tools:")
            )
            print(f"Subagent {tool_call_id}: {chunk['data']}")
        else:
            print(f"Main agent: {chunk['data']}")
```
### 3. Tracking Subagent Progress

Use `stream_mode="updates"` to follow each subagent's progress and see what work it has completed:
```python
from deepagents import create_deep_agent

agent = create_deep_agent(
    model="google_genai:gemini-3.1-pro-preview",
    system_prompt=(
        "You are a project coordinator. Always delegate research tasks "
        "to your researcher subagent using the task tool. Keep your final response to one sentence."
    ),
    subagents=[
        {
            "name": "researcher",
            "description": "Researches topics thoroughly",
            "system_prompt": (
                "You are a thorough researcher. Research the given topic "
                "and provide a concise summary in 2-3 sentences."
            ),
        },
    ],
)

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Write a short summary about AI safety"}]},
    stream_mode="updates",
    subgraphs=True,
    version="v2",
):
    if chunk["type"] == "updates":
        # Main agent updates (empty namespace)
        if not chunk["ns"]:
            for node_name, data in chunk["data"].items():
                if node_name == "tools":
                    # Subagent results returned to the main agent
                    for msg in data.get("messages", []):
                        if msg.type == "tool":
                            print(f"\nSubagent complete: {msg.name}")
                            print(f"  Result: {str(msg.content)[:200]}...")
                else:
                    print(f"[main agent] step: {node_name}")
        # Subagent updates (non-empty namespace)
        else:
            for node_name, data in chunk["data"].items():
                print(f"  [{chunk['ns'][0]}] step: {node_name}")
```
Expected output:

```
[main agent] step: model_request
  [tools:call_abc123] step: model_request
  [tools:call_abc123] step: tools
  [tools:call_abc123] step: model_request

Subagent complete: task
  Result: ## AI Safety Report...
[main agent] step: model_request
```
### 4. Streaming LLM Tokens

Use `stream_mode="messages"` to stream individual tokens from both the main agent and its subagents:
```python
current_source = ""

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Research quantum computing advances"}]},
    stream_mode="messages",
    subgraphs=True,
    version="v2",
):
    if chunk["type"] == "messages":
        token, metadata = chunk["data"]
        # Check whether the event came from a subagent (namespace contains "tools:")
        is_subagent = any(s.startswith("tools:") for s in chunk["ns"])
        if is_subagent:
            # Token from a subagent
            subagent_ns = next(s for s in chunk["ns"] if s.startswith("tools:"))
            if subagent_ns != current_source:
                print(f"\n\n--- [subagent: {subagent_ns}] ---")
                current_source = subagent_ns
            if token.content:
                print(token.content, end="", flush=True)
        else:
            # Token from the main agent
            if current_source != "main":
                print("\n\n--- [main agent] ---")
                current_source = "main"
            if token.content:
                print(token.content, end="", flush=True)

print()
```
### 5. Streaming Tool Calls

When subagents use tools, you can stream tool call events to show what each subagent is doing:
```python
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Research recent quantum computing advances"}]},
    stream_mode="messages",
    subgraphs=True,
    version="v2",
):
    if chunk["type"] == "messages":
        token, metadata = chunk["data"]
        # Identify the source: "main" or a subagent namespace segment
        is_subagent = any(s.startswith("tools:") for s in chunk["ns"])
        source = next((s for s in chunk["ns"] if s.startswith("tools:")), "main") if is_subagent else "main"
        # Tool call chunks (streamed tool calls)
        if token.tool_call_chunks:
            for tc in token.tool_call_chunks:
                if tc.get("name"):
                    print(f"\n[{source}] Tool call: {tc['name']}")
                # Arguments stream in chunks -- write them incrementally
                if tc.get("args"):
                    print(tc["args"], end="", flush=True)
        # Tool results
        if token.type == "tool":
            print(f"\n[{source}] Tool result [{token.name}]: {str(token.content)[:150]}")
        # Regular AI content (skip tool-call messages)
        if token.type == "ai" and token.content and not token.tool_call_chunks:
            print(token.content, end="", flush=True)

print()
```
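Because tool-call arguments arrive as partial JSON strings, a UI usually wants to assemble the chunks into complete calls before rendering them. A minimal sketch, assuming the chunk dicts carry the `name`, `args`, and `index` keys seen above (`accumulate_tool_calls` is a hypothetical helper, not a library function):

```python
from collections import defaultdict

def accumulate_tool_calls(chunks):
    """Merge streamed tool-call chunks into complete calls, keyed by index."""
    calls = defaultdict(lambda: {"name": "", "args": ""})
    for tc in chunks:
        idx = tc.get("index", 0)
        if tc.get("name"):
            calls[idx]["name"] = tc["name"]
        if tc.get("args"):
            calls[idx]["args"] += tc["args"]  # args arrive as string fragments
    return dict(calls)

# Simulated chunks for one streamed tool call
chunks = [
    {"index": 0, "name": "analyze_data", "args": ""},
    {"index": 0, "name": None, "args": '{"topic": "quantum'},
    {"index": 0, "name": None, "args": ' computing"}'},
]
print(accumulate_tool_calls(chunks))
```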
### 6. Custom Updates

Use `get_stream_writer` to emit custom progress events from inside a subagent's tools:
```python
import time

from langchain.tools import tool
from langgraph.config import get_stream_writer

from deepagents import create_deep_agent


@tool
def analyze_data(topic: str) -> str:
    """Run data analysis on the given topic.

    This tool performs the actual analysis and emits progress updates.
    You MUST call this tool for any analysis request.
    """
    writer = get_stream_writer()
    writer({"status": "starting", "topic": topic, "progress": 0})
    time.sleep(0.5)
    writer({"status": "analyzing", "progress": 50})
    time.sleep(0.5)
    writer({"status": "complete", "progress": 100})
    return (
        f'Analysis of "{topic}": Customer sentiment is 85% positive, '
        "driven by product quality and support response times."
    )


agent = create_deep_agent(
    model="google_genai:gemini-3.1-pro-preview",
    system_prompt=(
        "You are a coordinator. For any analysis request, you MUST delegate "
        "to the analyst subagent using the task tool. Never try to answer directly. "
        "After receiving the result, summarize it in one sentence."
    ),
    subagents=[
        {
            "name": "analyst",
            "description": "Performs data analysis with real-time progress tracking",
            "system_prompt": (
                "You are a data analyst. You MUST call the analyze_data tool "
                "for every analysis request. Do not use any other tools. "
                "After the analysis completes, report the result."
            ),
            "tools": [analyze_data],
        },
    ],
)

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Analyze customer satisfaction trends"}]},
    stream_mode="custom",
    subgraphs=True,
    version="v2",
):
    if chunk["type"] == "custom":
        is_subagent = any(s.startswith("tools:") for s in chunk["ns"])
        if is_subagent:
            subagent_ns = next(s for s in chunk["ns"] if s.startswith("tools:"))
            print(f"[{subagent_ns}]", chunk["data"])
        else:
            print("[main]", chunk["data"])
```
Expected output:

```
[tools:call_abc123] {'status': 'starting', 'topic': 'customer satisfaction trends', 'progress': 0}
[tools:call_abc123] {'status': 'analyzing', 'progress': 50}
[tools:call_abc123] {'status': 'complete', 'progress': 100}
```
### 7. Combining Stream Modes

You can combine several stream modes to get a complete view of the agent's execution:
```python
# Skip internal middleware steps -- only show meaningful node names
INTERESTING_NODES = {"model_request", "tools"}

last_source = ""
mid_line = False  # True when we have written tokens without a trailing newline

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Analyze the impact of remote work on team productivity"}]},
    stream_mode=["updates", "messages", "custom"],
    subgraphs=True,
    version="v2",
):
    is_subagent = any(s.startswith("tools:") for s in chunk["ns"])
    source = "subagent" if is_subagent else "main"

    if chunk["type"] == "updates":
        for node_name in chunk["data"]:
            if node_name not in INTERESTING_NODES:
                continue
            if mid_line:
                print()
                mid_line = False
            print(f"[{source}] step: {node_name}")
    elif chunk["type"] == "messages":
        token, metadata = chunk["data"]
        if token.content:
            # Print a header whenever the source changes
            if source != last_source:
                if mid_line:
                    print()
                    mid_line = False
                print(f"\n[{source}] ", end="")
                last_source = source
            print(token.content, end="", flush=True)
            mid_line = True
    elif chunk["type"] == "custom":
        if mid_line:
            print()
            mid_line = False
        print(f"[{source}] custom event:", chunk["data"])

print()
```
### 8. Tracking the Subagent Lifecycle

Monitor when subagents are spawned, running, and complete:
```python
active_subagents = {}

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Research the latest AI safety developments"}]},
    stream_mode="updates",
    subgraphs=True,
    version="v2",
):
    if chunk["type"] == "updates":
        for node_name, data in chunk["data"].items():
            # --- Phase 1: detect subagent spawn ---------------------------
            # When the main agent's model_request contains a task tool call,
            # a subagent has been spawned.
            if not chunk["ns"] and node_name == "model_request":
                for msg in data.get("messages", []):
                    for tc in getattr(msg, "tool_calls", []):
                        if tc["name"] == "task":
                            active_subagents[tc["id"]] = {
                                "type": tc["args"].get("subagent_type"),
                                "description": tc["args"].get("description", "")[:80],
                                "status": "pending",
                            }
                            print(
                                f'[lifecycle] PENDING → subagent "{tc["args"].get("subagent_type")}" '
                                f'({tc["id"]})'
                            )
            # --- Phase 2: detect subagent running -------------------------
            # When we receive events from a tools:UUID namespace, that
            # subagent is actively executing.
            if chunk["ns"] and chunk["ns"][0].startswith("tools:"):
                pregel_id = chunk["ns"][0].split(":")[1]
                # Check whether any pending subagent should be marked running.
                # Note: the pregel task ID differs from the tool_call_id, so we
                # mark a pending subagent as running on its first subagent event.
                for sub_id, sub in active_subagents.items():
                    if sub["status"] == "pending":
                        sub["status"] = "running"
                        print(
                            f'[lifecycle] RUNNING → subagent "{sub["type"]}" '
                            f"(pregel: {pregel_id})"
                        )
                        break
            # --- Phase 3: detect subagent completion ----------------------
            # When the main agent's tools node returns a tool message, the
            # subagent has finished and returned its result.
            if not chunk["ns"] and node_name == "tools":
                for msg in data.get("messages", []):
                    if msg.type == "tool":
                        sub = active_subagents.get(msg.tool_call_id)
                        if sub:
                            sub["status"] = "complete"
                            print(
                                f'[lifecycle] COMPLETE → subagent "{sub["type"]}" '
                                f"({msg.tool_call_id})"
                            )
                            print(f"  Result preview: {str(msg.content)[:120]}...")

# Print the final states
print("\n--- Final subagent states ---")
for sub_id, sub in active_subagents.items():
    print(f"  {sub['type']}: {sub['status']}")
```
## The v2 Stream Format

All of the examples above use the v2 stream format (`version="v2"`), which is the recommended approach. Every chunk is a `StreamPart` dict with `type`, `ns`, and `data` keys; the shape is the same regardless of stream mode, number of modes, or subgraph settings.

The v2 format eliminates nested tuple unpacking, making subgraph streams in Deep Agents straightforward to handle. Compare the two formats:
**v2 format (recommended):**

```python
# Unified shape -- no nested tuple unpacking needed
for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Research quantum computing"}]},
    stream_mode=["updates", "messages", "custom"],
    subgraphs=True,
    version="v2",
):
    print(chunk["type"])  # "updates", "messages", or "custom"
    print(chunk["ns"])    # () for the main agent, ("tools:<id>",) for a subagent
    print(chunk["data"])  # the payload
```
**v1 format (legacy):**

```python
# Must handle the nested (namespace, (mode, data)) tuples
for namespace, chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Research quantum computing"}]},
    stream_mode=["updates", "messages", "custom"],
    subgraphs=True,
):
    mode, data = chunk[0], chunk[1]
    print(mode)       # "updates", "messages", or "custom"
    print(namespace)  # () for the main agent, ("tools:<id>",) for a subagent
    print(data)       # the payload
```
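If you have existing v1 consumers, a thin adapter can normalize v1 items into the v2 shape so downstream code only handles one format. A minimal sketch, assuming the nested `(namespace, (mode, data))` tuples shown above (`to_v2` is a hypothetical helper, not part of the library):

```python
def to_v2(namespace, chunk):
    """Normalize a v1 (namespace, (mode, data)) item into a v2-style dict."""
    mode, data = chunk
    return {"type": mode, "ns": namespace, "data": data}

# Example: a v1 subagent update becomes a v2 StreamPart-shaped dict
part = to_v2(("tools:abc123",), ("updates", {"model_request": {}}))
print(part["type"], part["ns"])
```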
## Summary

Deep Agents streaming gives you unprecedented visibility into agent execution. By combining namespaces, multiple stream modes, and custom updates, you can build applications that react to agent activity in real time, from simple progress indicators to full multi-agent monitoring dashboards.

Remember that these features require LangGraph >= 1.1, and you should always pass `version="v2"` in streaming calls for the best experience.

With the core concepts and patterns of Deep Agents streaming in hand, you are ready to start building your own real-time agent monitoring system.