[LLM] DeepAgents Hands-On Tutorial and DeepResearch Code Analysis

1. Project Background

Deep Agents is an out-of-the-box, convention-driven agent framework: it runs immediately and can be customized as needed, without hand-building prompts, tools, or context-management logic. It ships with task planning, file-system operations, sandboxed command execution, and sub-agent task delegation, paired with tuned default prompts that help the model use its tools effectively, plus context-management mechanisms such as automatic summarization and spilling large outputs to files. The result balances ease of use with engineering practicality.

This tutorial shows how to use DeepAgents to analyze the code of a real project named "deep_research" and produce a detailed research report. deep_research is a deep-research agent project built on LangGraph and DeepAgents that performs web search and information gathering. Along the way, you will learn how to use an LLM to understand code structure, identify potential issues, extract key functionality, and generate visualized analysis results.

2. Installing and Configuring DeepAgents

2.1 Install DeepAgents

First, install DeepAgents and its dependencies. Open a terminal and run:

```bash
pip install deepagents
```

2.2 Configure API keys

DeepAgents uses an LLM to perform its analysis tasks and defaults to Anthropic's Claude models, so the corresponding API key needs to be configured:

  1. Sign up for and obtain an Anthropic API key
  2. Set the environment variable:

```bash
# Windows
set ANTHROPIC_API_KEY=your_api_key_here

# Linux/macOS
export ANTHROPIC_API_KEY=your_api_key_here
```
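Before launching the agent, it can help to confirm the keys are actually visible to Python. The stdlib-only sketch below is my own illustration (not part of DeepAgents); the exact set of required variables depends on which providers you use:

```python
import os


def missing_keys(required=("ANTHROPIC_API_KEY", "TAVILY_API_KEY")):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]


if __name__ == "__main__":
    missing = missing_keys()
    if missing:
        print("Missing API keys:", ", ".join(missing))
    else:
        print("All required API keys are set.")
```

Run this once in the same shell session you will start the agent from, so you catch a missing key before the first model call fails.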

2.3 Prepare the project

Assume we already have a project named "deep_research". Make sure its code structure is clear so that DeepAgents can analyze it effectively.

3. Core Code Walkthrough

3.1 Project structure

The deep_research project uses a modular design with the following files and directories:

```
deep_research/
├── research_agent/       # Research agent module
│   ├── __init__.py
│   ├── prompts.py        # Prompt templates
│   └── tools.py          # Custom tools
├── .env.example          # Example environment variables
├── README.md             # Project documentation
├── agent.py              # Main agent implementation
├── langgraph.json        # LangGraph configuration
├── pyproject.toml        # Project dependency configuration
├── research_agent.ipynb  # Jupyter notebook example
├── utils.py              # Utility functions
└── uv.lock               # Dependency lock file
```

3.2 Main agent implementation (agent.py)

The main agent file implements a deep research agent that uses a Claude or Gemini model to perform web search and information gathering:

```python
"""Research Agent - standalone script for LangGraph deployment.

This module creates a deep research agent with custom tools and prompts for
conducting web research with strategic thinking and context management."""

from datetime import datetime

from langchain.chat_models import init_chat_model
from langchain_google_genai import ChatGoogleGenerativeAI
from deepagents import create_deep_agent

from research_agent.prompts import (
    RESEARCHER_INSTRUCTIONS,
    RESEARCH_WORKFLOW_INSTRUCTIONS,
    SUBAGENT_DELEGATION_INSTRUCTIONS,
)
from research_agent.tools import tavily_search, think_tool

# Limits
max_concurrent_research_units = 3  # Maximum number of concurrent research units
max_researcher_iterations = 3      # Maximum number of research iterations

# Current date
current_date = datetime.now().strftime("%Y-%m-%d")

# Combined instructions (RESEARCHER_INSTRUCTIONS is used only by the sub-agent)
INSTRUCTIONS = (
    RESEARCH_WORKFLOW_INSTRUCTIONS
    + "\n\n"
    + "=" * 80
    + "\n\n"
    + SUBAGENT_DELEGATION_INSTRUCTIONS.format(
        max_concurrent_research_units=max_concurrent_research_units,
        max_researcher_iterations=max_researcher_iterations,
    )
)

# Research sub-agent definition
research_sub_agent = {
    "name": "research-agent",
    "description": "Delegate research to the sub-agent researcher. Only give this researcher one topic at a time.",
    "system_prompt": RESEARCHER_INSTRUCTIONS.format(date=current_date),
    "tools": [tavily_search, think_tool],
}

# Model selection
# Optionally use Gemini 3:
# model = ChatGoogleGenerativeAI(model="gemini-3-pro-preview", temperature=0.0)

# Default: Claude 4.5
model = init_chat_model(model="anthropic:claude-sonnet-4-5-20250929", temperature=0.0)

# Create the agent
agent = create_deep_agent(
    model=model,
    tools=[tavily_search, think_tool],
    system_prompt=INSTRUCTIONS,
    subagents=[research_sub_agent],
)
```
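The `.format()`-based prompt assembly in agent.py can be exercised in isolation. The sketch below uses a stand-in template of my own (not the real `SUBAGENT_DELEGATION_INSTRUCTIONS`) to show how the concurrency limits get baked into the combined system prompt; the commented-out `agent.invoke(...)` call at the end is illustrative only and would require valid API keys:

```python
# Stand-in for SUBAGENT_DELEGATION_INSTRUCTIONS (the real template lives in
# research_agent/prompts.py and is much longer)
DELEGATION_TEMPLATE = (
    "Use at most {max_concurrent_research_units} parallel sub-agents per iteration.\n"
    "Stop after {max_researcher_iterations} delegation rounds."
)

max_concurrent_research_units = 3
max_researcher_iterations = 3

# Same concatenation pattern as agent.py: workflow text, a separator, then
# the delegation instructions with the limits substituted in.
instructions = (
    "Workflow instructions...\n\n"
    + "=" * 80
    + "\n\n"
    + DELEGATION_TEMPLATE.format(
        max_concurrent_research_units=max_concurrent_research_units,
        max_researcher_iterations=max_researcher_iterations,
    )
)

# With the agent built as in agent.py, a run would look like (requires API keys):
# result = agent.invoke({"messages": [{"role": "user", "content": "Research context engineering"}]})
```

Because the limits are interpolated once at module load, changing `max_concurrent_research_units` later has no effect on an already-built agent; rebuild the prompt and the agent together.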

3.3 Prompt templates (research_agent/prompts.py)

The prompt template file defines the research workflow, sub-agent delegation, and researcher instructions:

```python
"""Prompt templates and tool descriptions for the deep research agent."""

RESEARCH_WORKFLOW_INSTRUCTIONS = """# Research Workflow

Follow this workflow for all research requests:

1. **Plan**: Create a todo list with write_todos to break down the research into focused tasks
2. **Save the request**: Use write_file() to save the user's research question to `/research_request.md`
3. **Research**: Delegate research tasks to sub-agents using the task() tool - ALWAYS use sub-agents for research, never conduct research yourself
4. **Synthesize**: Review all sub-agent findings and consolidate citations (each unique URL gets one number across all findings)
5. **Write Report**: Write a comprehensive final report to `/final_report.md` (see Report Writing Guidelines below)
6. **Verify**: Read `/research_request.md` and confirm you've addressed all aspects with proper citations and structure

## Research Planning Guidelines
- Batch similar research tasks into a single TODO to minimize overhead
- For simple fact-finding questions, use 1 sub-agent
- For comparisons or multi-faceted topics, delegate to multiple parallel sub-agents
- Each sub-agent should research one specific aspect and return findings

## Report Writing Guidelines

When writing the final report to `/final_report.md`, follow these structure patterns:

**For comparisons:**
1. Introduction
2. Overview of topic A
3. Overview of topic B
4. Detailed comparison
5. Conclusion

**For lists/rankings:**
Simply list items with details - no introduction needed:
1. Item 1 with explanation
2. Item 2 with explanation
3. Item 3 with explanation

**For summaries/overviews:**
1. Overview of topic
2. Key concept 1
3. Key concept 2
4. Key concept 3
5. Conclusion

**General guidelines:**
- Use clear section headings (## for sections, ### for subsections)
- Write in paragraph form by default - be text-heavy, not just bullet points
- Do NOT use self-referential language ("I found...", "I researched...")
- Write as a professional report without meta-commentary
- Each section should be comprehensive and detailed
- Use bullet points only when listing is more appropriate than prose

**Citation format:**
- Cite sources inline using [1], [2], [3] format
- Assign each unique URL a single citation number across ALL sub-agent findings
- End report with ### Sources section listing each numbered source
- Number sources sequentially without gaps (1,2,3,4...)
- Format: [1] Source Title: URL (each on separate line for proper list rendering)
- Example:

  Some important finding [1]. Another key insight [2].

  ### Sources
  [1] AI Research Paper: https://example.com/paper
  [2] Industry Analysis: https://example.com/analysis
"""

RESEARCHER_INSTRUCTIONS = """You are a research assistant conducting research on the user's input topic. For context, today's date is {date}.

<Task>
Your job is to use tools to gather information about the user's input topic.
You can use any of the research tools provided to you to find resources that can help answer the research question.
You can call these tools in series or in parallel, your research is conducted in a tool-calling loop.
</Task>

<Available Research Tools>
You have access to two specific research tools:
1. **tavily_search**: For conducting web searches to gather information
2. **think_tool**: For reflection and strategic planning during research
**CRITICAL: Use think_tool after each search to reflect on results and plan next steps**
</Available Research Tools>

<Instructions>
Think like a human researcher with limited time. Follow these steps:

1. **Read the question carefully** - What specific information does the user need?
2. **Start with broader searches** - Use broad, comprehensive queries first
3. **After each search, pause and assess** - Do I have enough to answer? What's still missing?
4. **Execute narrower searches as you gather information** - Fill in the gaps
5. **Stop when you can answer confidently** - Don't keep searching for perfection
</Instructions>

<Hard Limits>
**Tool Call Budgets** (Prevent excessive searching):
- **Simple queries**: Use 2-3 search tool calls maximum
- **Complex queries**: Use up to 5 search tool calls maximum
- **Always stop**: After 5 search tool calls if you cannot find the right sources
</Hard Limits>

**Stop Immediately When**:
- You can answer the user's question comprehensively
- You have 3+ relevant examples/sources for the question
- Your last 2 searches returned similar information

<Show Your Thinking>
After each search tool call, use think_tool to analyze the results:
- What key information did I find?
- What's missing?
- Do I have enough to answer the question comprehensively?
- Should I search more or provide my answer?
</Show Your Thinking>

<Final Response Format>
When providing your findings back to the orchestrator:

1. **Structure your response**: Organize findings with clear headings and detailed explanations
2. **Cite sources inline**: Use [1], [2], [3] format when referencing information from your searches
3. **Include Sources section**: End with ### Sources listing each numbered source with title and URL

Example:

## Key Findings

Context engineering is a critical technique for AI agents [1]. Studies show that proper context management can improve performance by 40% [2].

### Sources
[1] Context Engineering Guide: https://example.com/context-guide
[2] AI Performance Study: https://example.com/study


The orchestrator will consolidate citations from all sub-agents into the final report.
</Final Response Format>
"""

SUBAGENT_DELEGATION_INSTRUCTIONS = """# Sub-Agent Research Coordination

Your role is to coordinate research by delegating tasks from your TODO list to specialized research sub-agents.

## Delegation Strategy

**DEFAULT: Start with 1 sub-agent** for most queries:
- "What is quantum computing?" → 1 sub-agent (general overview)
- "List the top 10 coffee shops in San Francisco" → 1 sub-agent
- "Summarize the history of the internet" → 1 sub-agent
- "Research context engineering for AI agents" → 1 sub-agent (covers all aspects)

**ONLY parallelize when the query EXPLICITLY requires comparison or has clearly independent aspects:**

**Explicit comparisons** → 1 sub-agent per element:
- "Compare OpenAI vs Anthropic vs DeepMind AI safety approaches" → 3 parallel sub-agents
- "Compare Python vs JavaScript for web development" → 2 parallel sub-agents

**Clearly separated aspects** → 1 sub-agent per aspect (use sparingly):
- "Research renewable energy adoption in Europe, Asia, and North America" → 3 parallel sub-agents (geographic separation)
- Only use this pattern when aspects cannot be covered efficiently by a single comprehensive search

## Key Principles
- **Bias towards single sub-agent**: One comprehensive research task is more token-efficient than multiple narrow ones
- **Avoid premature decomposition**: Don't break "research X" into "research X overview", "research X techniques", "research X applications" - just use 1 sub-agent for all of X
- **Parallelize only for clear comparisons**: Use multiple sub-agents when comparing distinct entities or geographically separated data

## Parallel Execution Limits
- Use at most {max_concurrent_research_units} parallel sub-agents per iteration
- Make multiple task() calls in a single response to enable parallel execution
- Each sub-agent returns findings independently

## Research Limits
- Stop after {max_researcher_iterations} delegation rounds if you haven't found adequate sources
- Stop when you have sufficient information to answer comprehensively
- Bias towards focused research over exhaustive exploration"""
```
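Step 4 of the workflow above (consolidating citations so each unique URL gets exactly one number across all sub-agent findings) can be sketched with plain dictionaries. This is my illustration of the described behavior, not code from the project:

```python
def consolidate_citations(findings):
    """Assign one sequential number per unique URL across all sub-agent findings.

    findings: list of lists of (title, url) pairs, one inner list per sub-agent.
    Returns an ordered list of "[n] Title: URL" source lines with no gaps.
    """
    numbering = {}  # url -> citation number
    lines = []
    for agent_sources in findings:
        for title, url in agent_sources:
            if url not in numbering:
                # First time we see this URL: give it the next number
                numbering[url] = len(numbering) + 1
                lines.append(f"[{numbering[url]}] {title}: {url}")
    return lines
```

Duplicate URLs reported by different sub-agents keep their original number, which is what produces the gap-free 1, 2, 3, ... sequence the prompt demands.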

3.4 Custom tools (research_agent/tools.py)

The tools file implements the web search and reflection tools:

```python
"""Research Tools.

This module provides search and content-processing tools for the research agent,
using Tavily for URL discovery and fetching full web page content.
"""

import httpx
from langchain_core.tools import InjectedToolArg, tool
from markdownify import markdownify
from tavily import TavilyClient
from typing_extensions import Annotated, Literal

tavily_client = TavilyClient()


def fetch_webpage_content(url: str, timeout: float = 10.0) -> str:
    """Fetch a web page and convert it to Markdown.

    Args:
        url: The URL of the page to fetch
        timeout: Request timeout in seconds

    Returns:
        The page content in Markdown format
    """
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
    }

    try:
        response = httpx.get(url, headers=headers, timeout=timeout)
        response.raise_for_status()
        return markdownify(response.text)
    except Exception as e:
        return f"Error fetching content from {url}: {str(e)}"


@tool(parse_docstring=True)
def tavily_search(
    query: str,
    max_results: Annotated[int, InjectedToolArg] = 1,
    topic: Annotated[
        Literal["general", "news", "finance"], InjectedToolArg
    ] = "general",
) -> str:
    """Search the web for information on the given query.

    Uses Tavily to discover relevant URLs, then fetches and returns the full
    page content in Markdown format.

    Args:
        query: The search query to execute
        max_results: Maximum number of results to return (default: 1)
        topic: Topic filter - 'general', 'news', or 'finance' (default: 'general')

    Returns:
        Formatted search results including full page content
    """
    # Discover URLs with Tavily
    search_results = tavily_client.search(
        query,
        max_results=max_results,
        topic=topic,
    )

    # Fetch the full content of each URL
    result_texts = []
    for result in search_results.get("results", []):
        url = result["url"]
        title = result["title"]

        # Fetch the page content
        content = fetch_webpage_content(url)

        result_text = f"""## {title}
**URL:** {url}

{content}

---
"""
        result_texts.append(result_text)

    # Format the final response
    response = f"""🔍 Found {len(result_texts)} result(s) for '{query}':

{chr(10).join(result_texts)}"""

    return response


@tool(parse_docstring=True)
def think_tool(reflection: str) -> str:
    """Tool for strategic reflection and decision-making about research progress.

    Use this tool after each search to systematically analyze the results and
    plan next steps. It creates a deliberate pause in the research workflow to
    ensure decision quality.

    When to use:
        - After receiving search results: what key information did I find?
        - Before deciding next steps: is the material sufficient for a comprehensive answer?
        - When assessing research gaps: what specific information am I still missing?
        - Before concluding research: can I provide a complete answer now?

    A reflection should cover:
        1. Analysis of current findings: what concrete information has been collected?
        2. Gap assessment: what key information is still missing?
        3. Quality check: is there enough evidence and are there enough examples to support a solid answer?
        4. Strategic decision: continue searching, or deliver the current answer?

    Args:
        reflection: Detailed reflection on research progress, findings, gaps, and next steps

    Returns:
        Confirmation that the reflection was recorded for decision-making
    """
    return f"Reflection recorded: {reflection}"
```
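The formatting loop inside `tavily_search` can be tested without network access by feeding it a fake Tavily-style response. The sketch below factors out just that loop; the function name and the injectable `fetch` parameter are mine, the project keeps this logic inline:

```python
def format_search_results(query, search_results, fetch=lambda url: "(content)"):
    """Format a Tavily-style result dict as Markdown, fetching each page via `fetch`.

    `search_results` mirrors Tavily's shape: {"results": [{"title": ..., "url": ...}, ...]}.
    Injecting `fetch` lets tests substitute a stub for real HTTP fetching.
    """
    result_texts = []
    for result in search_results.get("results", []):
        result_texts.append(
            f"## {result['title']}\n**URL:** {result['url']}\n\n{fetch(result['url'])}\n\n---\n"
        )
    return f"🔍 Found {len(result_texts)} result(s) for '{query}':\n\n" + "\n".join(result_texts)
```

Separating discovery, fetching, and formatting this way is also a reasonable refactoring direction for the real tool, since it makes the formatting unit-testable.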

3.5 Utility functions (utils.py)

The utilities file provides helpers for displaying messages and prompts in Jupyter notebooks:

```python
"""Utility functions for displaying messages and prompts in Jupyter notebooks."""

import json

from rich.console import Console
from rich.panel import Panel
from rich.text import Text

console = Console()


def format_message_content(message):
    """Convert message content to displayable string."""
    parts = []
    tool_calls_processed = False

    # Handle the main content
    if isinstance(message.content, str):
        parts.append(message.content)
    elif isinstance(message.content, list):
        # Handle complex content such as tool calls (Anthropic format)
        for item in message.content:
            if item.get("type") == "text":
                parts.append(item["text"])
            elif item.get("type") == "tool_use":
                parts.append(f"\n🔧 Tool Call: {item['name']}")
                parts.append(f"   Args: {json.dumps(item['input'], indent=2)}")
                parts.append(f"   ID: {item.get('id', 'N/A')}")
                tool_calls_processed = True
    else:
        parts.append(str(message.content))

    # Handle tool calls attached to the message (OpenAI format) - only if not already processed
    if (
        not tool_calls_processed
        and hasattr(message, "tool_calls")
        and message.tool_calls
    ):
        for tool_call in message.tool_calls:
            parts.append(f"\n🔧 Tool Call: {tool_call['name']}")
            parts.append(f"   Args: {json.dumps(tool_call['args'], indent=2)}")
            parts.append(f"   ID: {tool_call['id']}")

    return "\n".join(parts)


def format_messages(messages):
    """Format and display a list of messages with Rich formatting."""
    for m in messages:
        msg_type = m.__class__.__name__.replace("Message", "")
        content = format_message_content(m)

        if msg_type == "Human":
            console.print(Panel(content, title="🧑 Human", border_style="blue"))
        elif msg_type == "Ai":
            console.print(Panel(content, title="🤖 Assistant", border_style="green"))
        elif msg_type == "Tool":
            console.print(Panel(content, title="🔧 Tool Output", border_style="yellow"))
        else:
            console.print(Panel(content, title=f"📝 {msg_type}", border_style="white"))


def format_message(messages):
    """Alias for format_messages for backward compatibility."""
    return format_messages(messages)


def show_prompt(prompt_text: str, title: str = "Prompt", border_style: str = "blue"):
    """Display a prompt with rich formatting and XML tag highlighting.

    Args:
        prompt_text: The prompt string to display
        title: Title for the panel (default: "Prompt")
        border_style: Border color style (default: "blue")
    """
    # Build a formatted rendering of the prompt
    formatted_text = Text(prompt_text)
    formatted_text.highlight_regex(r"<[^>]+>", style="bold blue")  # Highlight XML tags
    formatted_text.highlight_regex(
        r"##[^#\n]+", style="bold magenta"
    )  # Highlight headers
    formatted_text.highlight_regex(
        r"###[^#\n]+", style="bold cyan"
    )  # Highlight subheaders

    # Display in a panel for better presentation
    console.print(
        Panel(
            formatted_text,
            title=f"[bold green]{title}[/bold green]",
            border_style=border_style,
            padding=(1, 2),
        )
    )
```

4. Generating the Research Report

4.1 Running the analysis

By design, the deep_research project can be run in two ways:

Option 1: Jupyter notebook
```bash
uv run jupyter notebook research_agent.ipynb
```

Option 2: LangGraph server
```bash
langgraph dev
```

4.2 The analysis process

Within the deep_research project, DeepAgents performs the following steps:

  1. Plan: create a todo list with write_todos to break the research into focused tasks
  2. Save the request: use write_file() to save the user's research question to /research_request.md
  3. Research: delegate research tasks to sub-agents with the task() tool - always use sub-agents for research rather than researching directly
  4. Synthesize: review all sub-agent findings and consolidate citations (each unique URL gets one number across all findings)
  5. Write the report: write a comprehensive final report to /final_report.md
  6. Verify: read /research_request.md and confirm all aspects have been addressed with proper citations and structure

4.3 Report contents

The generated research report contains the following sections:

  • Project overview: basic information about the deep_research project and its goals
  • Directory structure: file organization and module relationships
  • Core functionality: main functional modules and APIs, including the research workflow and the sub-agent delegation strategy
  • Technology stack: languages, frameworks, and libraries used, such as Python, LangChain, DeepAgents, and Tavily
  • Code quality analysis: assessment of style, comments, and maintainability
  • Potential issues: identified bugs, performance problems, and security risks
  • Optimization suggestions: recommendations for improving code quality and performance
  • Conclusion: overall assessment and summary

5. Analyzing and Visualizing the Results

5.1 Visualizing the code structure

Mermaid syntax can be used to generate a dependency diagram of the deep_research code structure. The diagram covers the project's modules (agent.py, research_agent/prompts.py, research_agent/tools.py, utils.py) and the symbols they use: deepagents, tavily_search, think_tool, TavilyClient, fetch_webpage_content, langchain's init_chat_model and ChatGoogleGenerativeAI, format_messages, and show_prompt.

5.2 Visualizing the research workflow

The core function of the deep_research project is deep research, and its workflow can be visualized as the following sequence:

  1. The user submits a research question
  2. A todo list is created
  3. The research request is saved to research_request.md
  4. Research tasks are delegated to sub-agents
  5. Sub-agents perform web searches
  6. Sub-agents reflect on the results with think_tool
  7. Sub-agents return their findings
  8. All sub-agent findings are synthesized
  9. The final report is written to final_report.md
  10. The report is verified against the original question

5.3 Visualizing the sub-agent delegation strategy

The deep_research project uses a delegation strategy that decides how many sub-agents to use based on the query type. On receiving a research query, the orchestrator classifies it: simple fact-finding queries get 1 sub-agent, while comparison queries and multi-faceted topics each get multiple parallel sub-agents. The sub-agents then execute the research and return their results.

5.4 Performance analysis

When analyzing the performance of the deep_research project, the following aspects matter most:

  1. Network request performance: the tavily_search tool performs web searches and content fetching, which is the main bottleneck
  2. Model inference performance: time spent on Claude or Gemini inference
  3. Parallel execution efficiency: how efficiently multiple sub-agents run in parallel
  4. Memory usage: memory consumed when processing large amounts of page content
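To see where time actually goes, a small timing wrapper around any tool function gives a first-order picture. This is a generic sketch of my own, not part of the project:

```python
import time
from functools import wraps


def timed(fn):
    """Record the wall-clock duration of each call on the wrapped function itself."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # Append even when the call raises, so failed fetches are measured too
            wrapper.timings.append(time.perf_counter() - start)
    wrapper.timings = []
    return wrapper
```

Applied as `tavily_search = timed(tavily_search)` (or via a LangChain callback in a real setup), `tavily_search.timings` would show how much of a run is spent on network-bound search versus model inference.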

5.5 Visualizing the optimization suggestions

Based on the performance analysis, the optimization suggestions map to the bottlenecks as follows:

  • Network requests: implement request caching; optimize the HTTP requests
  • Model inference: use a more efficient model; optimize the prompt templates
  • Memory usage: stream content processing; limit the amount of content processed at once
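The request-caching suggestion can be prototyped with `functools.lru_cache`, keying on the query so identical searches within a session are served from memory. A sketch under that assumption; production code would also need to consider result freshness:

```python
from functools import lru_cache

call_count = {"n": 0}  # instrumentation so the caching effect is observable


@lru_cache(maxsize=128)
def cached_search(query: str) -> str:
    """Stand-in for an expensive search; replace the body with tavily_client.search(...)."""
    call_count["n"] += 1
    return f"results for {query}"
```

Note that `lru_cache` requires hashable arguments, so a real wrapper around `tavily_search` would need to cache on `(query, max_results, topic)` rather than on a dict of options.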

6. Troubleshooting

6.1 API key configuration issues

Problem: running the script fails with an API key error.

Solution:

  • Make sure all required environment variables are set:

```bash
# Windows
set ANTHROPIC_API_KEY=your_anthropic_api_key_here
set GOOGLE_API_KEY=your_google_api_key_here
set TAVILY_API_KEY=your_tavily_api_key_here
set LANGSMITH_API_KEY=your_langsmith_api_key_here

# Linux/macOS
export ANTHROPIC_API_KEY=your_anthropic_api_key_here
export GOOGLE_API_KEY=your_google_api_key_here
export TAVILY_API_KEY=your_tavily_api_key_here
export LANGSMITH_API_KEY=your_langsmith_api_key_here
```

  • Check that the API keys are valid

  • Confirm the network connection is working

6.2 Dependency installation issues

Problem: errors occur while installing dependencies.

Solution:

  • Make sure the uv package manager is installed:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

  • In the deep_research directory, run:

```bash
uv sync
```

  • Check the network connection and make sure PyPI is reachable

6.3 Failed network requests

Problem: the tavily_search tool cannot fetch page content.

Solution:

  • Check that the network connection is working

  • Make sure the Tavily API key is valid

  • Check that the target site is reachable

  • Try increasing the timeout:

```python
def fetch_webpage_content(url: str, timeout: float = 15.0) -> str:
    # implementation omitted
```

6.4 Performance issues

Problem: the research process is slow.

Solution:

  • Reduce the number of parallel sub-agents:

```python
max_concurrent_research_units = 2  # reduced from 3
```

  • Limit the number of search results:

```python
search_results = tavily_client.search(
    query,
    max_results=1,  # keep at 1
    topic=topic,
)
```

  • Optimize the prompt templates to reduce token usage

6.5 Memory usage issues

Problem: memory usage spikes when processing large web pages.

Solution:

  • Stream content processing
  • Limit the amount of content processed at once
  • Process large page content in segments
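Segmenting oversized page content, as suggested above, can be as simple as a generator that yields fixed-size character chunks. This is an illustrative sketch; token-aware or paragraph-aware splitting would be better in practice:

```python
def chunk_text(text: str, size: int = 2000):
    """Yield successive chunks of at most `size` characters from `text`."""
    for start in range(0, len(text), size):
        yield text[start:start + size]
```

Because it is a generator, each chunk can be summarized or discarded before the next is materialized, which caps peak memory regardless of page size.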

6.6 LangGraph server startup issues

Problem: the langgraph dev command fails to start the server.

Solution:

  • Make sure langgraph is installed:

```bash
uv add langgraph
```

  • Check that the langgraph.json configuration file is correct

  • Check whether the port is already in use

7. Summary

The analysis of the deep_research project shows how DeepAgents provides an efficient, automated approach to understanding a codebase. This tutorial walked through using DeepAgents to analyze the deep_research project and generate a detailed research report.

Key characteristics of the deep_research project:

  1. Modular design: a clean structure that separates core logic, prompt templates, and tools
  2. Smart research workflow: a complete workflow covering planning, saving the request, researching, synthesizing, report writing, and verification
  3. Sub-agent delegation strategy: the number of sub-agents is chosen based on the query type, improving research efficiency
  4. Custom tools: tavily_search and think_tool extend the agent's research capabilities
  5. Multi-model support: works with multiple LLMs, including Claude and Gemini
  6. Visual interface: the LangGraph server provides a visual interface that improves the user experience

With what you have learned here, you should be able to apply DeepAgents to analyze your own project code and generate valuable reports. We hope this helps with your software development and maintenance work!
