A Deep Dive into the RAG Indexing Pipeline: Parsing Documents Efficiently to Build a Knowledge Base

Introduction: Why Is Document Parsing the Cornerstone of RAG?

In a RAG (Retrieval-Augmented Generation) system, document parsing is the first and most critical step in building the knowledge base. Just as a house needs a solid foundation, the quality of document parsing directly determines how well retrieval and generation work later on. In this article we take a close look at the document parsing techniques used in the RAG indexing pipeline.

1. Overall Architecture of RAG Document Parsing

First, let's get an overview of the complete parsing flow with a diagram:

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│                 │    │                 │    │                 │
│  Raw document   │───▶│  Parse & split  │───▶│  Vectorize text │
│  collection     │    │                 │    │                 │
└─────────────────┘    └─────────────────┘    └────────┬────────┘
                                                       │
┌─────────────────┐    ┌─────────────────┐             │
│                 │    │                 │             ▼
│  Metadata       │◀───│  Semantic       │    ┌─────────────────┐
│  extraction     │    │  chunking       │    │  Vector store   │
│                 │    │                 │    │  index          │
└─────────────────┘    └─────────────────┘    └─────────────────┘
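
Before diving into each stage, here is a minimal sketch of how the stages in the diagram compose end to end. It is only a sketch: each stage is injected as a callable placeholder (parse, chunk, embed, store are not a real API), and the rest of this article builds concrete implementations of each one.

python
from pathlib import Path
from typing import Callable, List

def build_knowledge_base(
    doc_dir: str,
    parse: Callable[[Path], str],
    chunk: Callable[[str], List[str]],
    embed: Callable[[List[str]], List[List[float]]],
    store: Callable[[List[str], List[List[float]]], None],
) -> None:
    """Wire the pipeline stages together; every stage is a pluggable callable."""
    for path in Path(doc_dir).rglob("*"):
        if not path.is_file():
            continue
        text = parse(path)        # document parsing & splitting
        chunks = chunk(text)      # semantic chunking
        vectors = embed(chunks)   # text vectorization
        store(chunks, vectors)    # write into the vector store index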

2. Core Steps of Document Parsing in Detail

2.1 Supporting Multiple Document Formats

In real projects, documents come in many different formats, so we need a parser that can handle all of them:

python
import os
from typing import List, Dict, Any
from langchain.document_loaders import (
    PyPDFLoader,
    Docx2txtLoader,
    UnstructuredHTMLLoader,
    UnstructuredMarkdownLoader,
    TextLoader
)
from langchain.schema import Document

class MultiFormatDocumentParser:
    """多格式文档解析器"""
    
    def __init__(self):
        self.format_handlers = {
            '.pdf': self._parse_pdf,
            '.docx': self._parse_docx,
            '.html': self._parse_html,
            '.htm': self._parse_html,
            '.md': self._parse_markdown,
            '.txt': self._parse_text,
        }
    
    def parse_document(self, file_path: str) -> List[Document]:
        """解析单个文档"""
        ext = os.path.splitext(file_path)[1].lower()
        
        if ext not in self.format_handlers:
            raise ValueError(f"不支持的文件格式: {ext}")
        
        return self.format_handlers[ext](file_path)
    
    def _parse_pdf(self, file_path: str) -> List[Document]:
        """解析PDF文档"""
        loader = PyPDFLoader(file_path)
        documents = loader.load()
        
        # Attach PDF metadata
        for doc in documents:
            doc.metadata.update({
                'source': file_path,
                'format': 'pdf',
                'total_pages': len(documents)
            })
        
        return documents
    
    def _parse_docx(self, file_path: str) -> List[Document]:
        """解析Word文档"""
        loader = Docx2txtLoader(file_path)
        documents = loader.load()
        
        # Add source and format metadata
        for doc in documents:
            doc.metadata.update({
                'source': file_path,
                'format': 'docx'
            })
        
        return documents
    
    def _parse_html(self, file_path: str) -> List[Document]:
        """解析HTML文档"""
        loader = UnstructuredHTMLLoader(file_path)
        return loader.load()
    
    def _parse_markdown(self, file_path: str) -> List[Document]:
        """解析Markdown文档"""
        loader = UnstructuredMarkdownLoader(file_path)
        return loader.load()
    
    def _parse_text(self, file_path: str) -> List[Document]:
        """解析纯文本文档"""
        loader = TextLoader(file_path, encoding='utf-8')
        return loader.load()

# Usage example
parser = MultiFormatDocumentParser()
documents = parser.parse_document("example.pdf")

2.2 Smart Document Chunking Strategies

Chunking is the heart of the parsing stage and directly affects retrieval quality:

python
from langchain.text_splitter import (
    RecursiveCharacterTextSplitter,
    TokenTextSplitter,
    MarkdownHeaderTextSplitter
)
import re
from typing import List

class SmartChunker:
    """智能文档分块器"""
    
    def __init__(self, chunk_size: int = 1000, chunk_overlap: int = 200):
        self.chunk_size = chunk_size
        self.chunk_overlap = chunk_overlap
        
        # Initialize the different splitters
        self.recursive_splitter = RecursiveCharacterTextSplitter(
            chunk_size=chunk_size,
            chunk_overlap=chunk_overlap,
            separators=["\n\n", "\n", "。", "!", "?", ";", ",", " ", ""]
        )
        
        self.token_splitter = TokenTextSplitter(
            chunk_size=chunk_size,
            chunk_overlap=chunk_overlap
        )
    
    def semantic_chunking(self, text: str, doc_type: str = None) -> List[str]:
        """语义感知的分块"""
        
        # Choose a chunking strategy based on the document type
        if doc_type == 'markdown':
            return self._markdown_chunking(text)
        elif self._is_code_document(text):
            return self._code_chunking(text)
        else:
            return self._semantic_paragraph_chunking(text)
    
    def _markdown_chunking(self, text: str) -> List[str]:
        """Markdown文档分块"""
        headers_to_split_on = [
            ("#", "标题1"),
            ("##", "标题2"),
            ("###", "标题3"),
        ]
        
        markdown_splitter = MarkdownHeaderTextSplitter(
            headers_to_split_on=headers_to_split_on
        )
        
        chunks = markdown_splitter.split_text(text)
        return [chunk.page_content for chunk in chunks]
    
    def _code_chunking(self, text: str) -> List[str]:
        """代码文档分块"""
        # 按函数、类、方法进行分块
        patterns = [
            r'def\s+\w+\(.*?\):.*?(?=\n\s*def|\Z)',      # functions
            r'class\s+\w+.*?:.*?(?=\n\s*class|\Z)',      # classes
            r'//\s*===.*?===',                           # comment blocks
        ]
        
        chunks = []
        for pattern in patterns:
            chunks.extend(re.findall(pattern, text, re.DOTALL))
        
        return chunks if chunks else self.recursive_splitter.split_text(text)
    
    def _semantic_paragraph_chunking(self, text: str) -> List[str]:
        """语义段落分块"""
        # 先按段落分割
        paragraphs = re.split(r'\n\s*\n', text)
        
        chunks = []
        current_chunk = ""
        
        for para in paragraphs:
            para = para.strip()
            if not para:
                continue
                
            # If the current chunk plus the new paragraph still fits, merge them
            if len(current_chunk) + len(para) + 1 <= self.chunk_size:
                if current_chunk:
                    current_chunk += "\n\n" + para
                else:
                    current_chunk = para
            else:
                # Save the current chunk and start a new one
                if current_chunk:
                    chunks.append(current_chunk)
                current_chunk = para
        
        # Append the final chunk
        if current_chunk:
            chunks.append(current_chunk)
        
        return chunks
    
    def _is_code_document(self, text: str) -> bool:
        """判断是否为代码文档"""
        code_keywords = ['def ', 'class ', 'import ', 'function ', 'var ', 'let ', 'const ']
        return any(keyword in text[:500] for keyword in code_keywords)

# Usage example (large_text_document stands in for your loaded document text)
chunker = SmartChunker(chunk_size=1000, chunk_overlap=200)
chunks = chunker.semantic_chunking(large_text_document, doc_type='markdown')

2.3 Advanced Metadata Extraction

Rich metadata can noticeably improve retrieval precision:

python
import hashlib
import os
import re
from datetime import datetime
from typing import Any, Dict, List

import pytz
from langchain.schema import Document

class MetadataExtractor:
    """元数据提取器"""
    
    def __init__(self):
        self.zh_timezone = pytz.timezone('Asia/Shanghai')
    
    def extract_document_metadata(self, 
                                 content: str, 
                                 file_path: str, 
                                 doc_type: str) -> Dict[str, Any]:
        """提取文档元数据"""
        
        metadata = {
            'source': file_path,
            'doc_type': doc_type,
            'file_name': os.path.basename(file_path),
            'file_size': os.path.getsize(file_path),
            'last_modified': self._get_file_mtime(file_path),
            'content_hash': self._calculate_content_hash(content),
            'chunk_count': 0,
            'total_length': len(content),
            'indexing_time': datetime.now(self.zh_timezone).isoformat(),
        }
        
        # Extract metadata derived from the content itself
        content_metadata = self._extract_content_metadata(content)
        metadata.update(content_metadata)
        
        return metadata
    
    def _extract_content_metadata(self, content: str) -> Dict[str, Any]:
        """从内容中提取元数据"""
        
        # Extract the title
        title = self._extract_title(content)
        
        # Extract keywords (simple implementation)
        keywords = self._extract_keywords(content)
        
        # Extract document structure information
        structure_info = self._analyze_structure(content)
        
        # Extract date/time mentions
        time_info = self._extract_time_info(content)
        
        return {
            'title': title,
            'keywords': keywords,
            'sections': structure_info.get('sections', []),
            'paragraph_count': structure_info.get('paragraph_count', 0),
            'mentioned_dates': time_info,
            'language': self._detect_language(content),
            'has_tables': self._has_tables(content),
            'has_code_blocks': self._has_code_blocks(content),
        }
    
    def _extract_title(self, content: str) -> str:
        """提取文档标题"""
        # 尝试从开头或Markdown标题中提取
        lines = content.strip().split('\n')
        
        for line in lines[:10]:  # check the first 10 lines
            line = line.strip()
            # Markdown heading
            if line.startswith('# '):
                return line[2:].strip()
            # HTML <h1> tag
            if '<h1>' in line.lower():
                match = re.search(r'<h1[^>]*>(.*?)</h1>', line, re.IGNORECASE)
                if match:
                    return match.group(1).strip()
        
        # If there is no explicit title, use the first sufficiently long non-empty line
        for line in lines:
            if line.strip() and len(line.strip()) > 10:
                return line.strip()[:100]
        
        return "未命名文档"
    
    def _extract_keywords(self, content: str, top_n: int = 10) -> List[str]:
        """提取关键词(简化版)"""
        # 移除常见停用词
        stop_words = {'的', '了', '在', '是', '我', '有', '和', '就', '不', '人', '都', '一', '一个'}
        
        # Simplified Chinese tokenization: runs of two or more CJK characters
        words = re.findall(r'[\u4e00-\u9fff]{2,}', content)
        
        # Count word frequencies
        word_freq = {}
        for word in words:
            if word not in stop_words:
                word_freq[word] = word_freq.get(word, 0) + 1
        
        # Return the most frequent words
        sorted_words = sorted(word_freq.items(), key=lambda x: x[1], reverse=True)
        return [word for word, freq in sorted_words[:top_n]]
    
    def _analyze_structure(self, content: str) -> Dict[str, Any]:
        """分析文档结构"""
        sections = []
        
        # Extract Markdown-style headings
        headings = re.findall(r'^(#{1,6})\s+(.+)$', content, re.MULTILINE)
        
        for level, title in headings:
            sections.append({
                'level': len(level),
                'title': title.strip(),
                'type': 'heading'
            })
        
        # Count paragraphs
        paragraphs = [p for p in re.split(r'\n\s*\n', content) if p.strip()]
        
        return {
            'sections': sections,
            'paragraph_count': len(paragraphs),
            'has_headings': len(headings) > 0
        }
    
    def _extract_time_info(self, content: str) -> List[str]:
        """提取时间信息"""
        # 匹配常见日期格式
        date_patterns = [
            r'\d{4}年\d{1,2}月\d{1,2}日',
            r'\d{4}-\d{1,2}-\d{1,2}',
            r'\d{4}/\d{1,2}/\d{1,2}',
        ]
        
        dates = []
        for pattern in date_patterns:
            dates.extend(re.findall(pattern, content))
        
        return list(set(dates))  # deduplicate
    
    def _detect_language(self, content: str) -> str:
        """检测语言"""
        # 简单基于字符判断
        zh_chars = len(re.findall(r'[\u4e00-\u9fff]', content))
        en_chars = len(re.findall(r'[a-zA-Z]', content))
        
        if zh_chars > en_chars:
            return 'zh'
        elif en_chars > zh_chars:
            return 'en'
        else:
            return 'mixed'
    
    def _has_tables(self, content: str) -> bool:
        """判断是否包含表格"""
        # Markdown表格
        if re.search(r'|.*|.*\n|[-:\s|]+|', content):
            return True
        # HTML表格
        if re.search(r'<table[^>]*>', content, re.IGNORECASE):
            return True
        return False
    
    def _has_code_blocks(self, content: str) -> bool:
        """判断是否包含代码块"""
        return bool(re.search(r'```[\s\S]*?```', content))
    
    def _get_file_mtime(self, file_path: str) -> str:
        """获取文件修改时间"""
        mtime = os.path.getmtime(file_path)
        return datetime.fromtimestamp(mtime, self.zh_timezone).isoformat()
    
    def _calculate_content_hash(self, content: str) -> str:
        """计算内容哈希值"""
        return hashlib.md5(content.encode('utf-8')).hexdigest()
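
A short usage sketch ("example.md" is a hypothetical path; the extractor reads file size and modification time from disk, so the file must exist):

python
# Usage example ("example.md" is a placeholder path)
extractor = MetadataExtractor()
with open("example.md", "r", encoding="utf-8") as f:
    content = f.read()

metadata = extractor.extract_document_metadata(content, "example.md", "md")
print(metadata["title"], metadata["keywords"], metadata["language"])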

2.4 The Complete Document Processing Pipeline

python
class DocumentProcessingPipeline:
    """文档处理流水线"""
    
    def __init__(self, 
                 chunk_size: int = 1000,
                 chunk_overlap: int = 200):
        
        self.parser = MultiFormatDocumentParser()
        self.chunker = SmartChunker(chunk_size, chunk_overlap)
        self.metadata_extractor = MetadataExtractor()
    
    def process_document(self, file_path: str) -> Dict[str, Any]:
        """处理单个文档"""
        
        print(f"开始处理文档: {file_path}")
        
        # 1. Parse the document
        raw_documents = self.parser.parse_document(file_path)
        
        if not raw_documents:
            raise ValueError(f"无法解析文档: {file_path}")
        
        # 2. Merge the content of all pages/sections
        full_content = "\n\n".join([doc.page_content for doc in raw_documents])
        
        # 3. Extract document-level metadata
        doc_type = os.path.splitext(file_path)[1][1:].lower()
        document_metadata = self.metadata_extractor.extract_document_metadata(
            full_content, file_path, doc_type
        )
        
        # 4. Smart chunking
        chunks = self.chunker.semantic_chunking(full_content, doc_type)
        
        # 5. Attach metadata to each chunk
        processed_chunks = []
        for i, chunk_content in enumerate(chunks):
            chunk_metadata = document_metadata.copy()
            chunk_metadata.update({
                'chunk_id': i + 1,
                'chunk_index': i,
                'chunk_length': len(chunk_content),
                'is_first_chunk': i == 0,
                'is_last_chunk': i == len(chunks) - 1,
            })
            
            # Create the Document object
            chunk_doc = Document(
                page_content=chunk_content,
                metadata=chunk_metadata
            )
            processed_chunks.append(chunk_doc)
        
        # Update the document-level metadata
        document_metadata['chunk_count'] = len(processed_chunks)
        
        print(f"文档处理完成: {file_path}, 生成 {len(processed_chunks)} 个块")
        
        return {
            'document_metadata': document_metadata,
            'chunks': processed_chunks,
            'original_path': file_path
        }
    
    def process_directory(self, 
                         directory_path: str, 
                         extensions: List[str] = None) -> List[Dict[str, Any]]:
        """批量处理目录中的文档"""
        
        if extensions is None:
            extensions = ['.pdf', '.docx', '.txt', '.md', '.html']
        
        all_results = []
        
        for root, _, files in os.walk(directory_path):
            for file in files:
                file_path = os.path.join(root, file)
                ext = os.path.splitext(file)[1].lower()
                
                if extensions and ext not in extensions:
                    continue
                
                try:
                    result = self.process_document(file_path)
                    all_results.append(result)
                except Exception as e:
                    print(f"处理文件失败 {file_path}: {str(e)}")
                    continue
        
        print(f"目录处理完成,共处理 {len(all_results)} 个文档")
        return all_results

# Usage example
pipeline = DocumentProcessingPipeline(
    chunk_size=1000,
    chunk_overlap=200
)

# Process a single document
result = pipeline.process_document("document.pdf")

# Process an entire directory
results = pipeline.process_directory(
    "./knowledge_base",
    extensions=['.pdf', '.docx', '.md', '.txt']
)

# Collect all chunks
all_chunks = []
for result in results:
    all_chunks.extend(result['chunks'])

print(f"总共生成 {len(all_chunks)} 个文本块")

3. Performance Optimization and Best Practices

3.1 Speeding Things Up with Parallel Processing

python
import concurrent.futures
from tqdm import tqdm

class ParallelDocumentProcessor:
    """并行文档处理器"""
    
    def __init__(self, max_workers: int = 4):
        self.pipeline = DocumentProcessingPipeline()
        self.max_workers = max_workers
    
    def process_batch(self, file_paths: List[str]) -> List[Dict[str, Any]]:
        """批量并行处理文档"""
        
        results = []
        
        with concurrent.futures.ProcessPoolExecutor(
            max_workers=self.max_workers
        ) as executor:
            # Submit tasks (the callable and its arguments must be picklable)
            future_to_file = {
                executor.submit(self.pipeline.process_document, fp): fp 
                for fp in file_paths
            }
            
            # Collect results as they complete
            for future in tqdm(
                concurrent.futures.as_completed(future_to_file),
                total=len(file_paths),
                desc="处理文档"
            ):
                file_path = future_to_file[future]
                try:
                    result = future.result()
                    results.append(result)
                except Exception as e:
                    print(f"处理失败 {file_path}: {str(e)}")
        
        return results
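
For completeness, a short usage sketch with placeholder file paths. Because ProcessPoolExecutor pickles the task callable and its arguments to ship them to worker processes, the call is wrapped in a __main__ guard:

python
if __name__ == "__main__":
    processor = ParallelDocumentProcessor(max_workers=4)
    batch_results = processor.process_batch([
        "./knowledge_base/report.pdf",   # placeholder paths
        "./knowledge_base/notes.md",
    ])
    print(f"Processed {len(batch_results)} documents in parallel")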

3.2 Incremental Index Updates

python
import json

class IncrementalIndexUpdater:
    """Incremental index updater"""
    
    def __init__(self, index_storage_path: str):
        self.index_storage_path = index_storage_path
        self.processed_files = self._load_processed_files()
    
    def _load_processed_files(self) -> Dict[str, str]:
        """加载已处理文件记录"""
        record_file = os.path.join(self.index_storage_path, "processed_files.json")
        
        if os.path.exists(record_file):
            with open(record_file, 'r', encoding='utf-8') as f:
                return json.load(f)
        return {}
    
    def _save_processed_files(self):
        """保存已处理文件记录"""
        record_file = os.path.join(self.index_storage_path, "processed_files.json")
        with open(record_file, 'w', encoding='utf-8') as f:
            json.dump(self.processed_files, f, ensure_ascii=False, indent=2)
    
    def get_changed_files(self, 
                         directory_path: str, 
                         extensions: List[str] = None) -> List[str]:
        """获取有变化的文件"""
        
        changed_files = []
        
        for root, _, files in os.walk(directory_path):
            for file in files:
                file_path = os.path.join(root, file)
                ext = os.path.splitext(file)[1].lower()
                
                if extensions and ext not in extensions:
                    continue
                
                # Check whether the file is new or has changed since last indexing
                file_hash = self._calculate_file_hash(file_path)
                
                if (file_path not in self.processed_files or 
                    self.processed_files[file_path] != file_hash):
                    changed_files.append(file_path)
                    self.processed_files[file_path] = file_hash
        
        return changed_files
    
    def _calculate_file_hash(self, file_path: str) -> str:
        """计算文件哈希"""
        hasher = hashlib.md5()
        
        with open(file_path, 'rb') as f:
            buf = f.read(65536)
            while len(buf) > 0:
                hasher.update(buf)
                buf = f.read(65536)
        
        return hasher.hexdigest()
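
A usage sketch with placeholder directory names (the record directory is created if it does not exist yet):

python
# Detect changed files and persist the processing record
os.makedirs("./vector_store", exist_ok=True)
updater = IncrementalIndexUpdater("./vector_store")
changed = updater.get_changed_files("./knowledge_base", extensions=['.pdf', '.md'])
print(f"{len(changed)} files need (re-)indexing")
updater._save_processed_files()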

4. Common Problems and Solutions

Problem 1: Document format compatibility

Solution: use several parsing libraries and fall back gracefully between them

python
from langchain.document_loaders import PyPDFLoader, UnstructuredPDFLoader

def robust_pdf_parsing(file_path: str):
    """Robust PDF parsing with fallback loaders"""
    parsers = [PyPDFLoader, UnstructuredPDFLoader]
    
    for parser_class in parsers:
        try:
            loader = parser_class(file_path)
            return loader.load()
        except Exception:
            continue
    
    # Fall back to plain-text extraction via textract (separate package)
    try:
        import textract
        text = textract.process(file_path).decode('utf-8')
        return [Document(page_content=text, metadata={'source': file_path})]
    except Exception:
        raise ValueError(f"Unable to parse PDF file: {file_path}")

Problem 2: Poor chunking quality for Chinese documents

Solution: use a dedicated Chinese text splitter

python
from typing import List

class ChineseTextSplitter:
    """Chinese text splitter"""
    
    def __init__(self, chunk_size=500, chunk_overlap=50):
        self.chunk_size = chunk_size
        self.chunk_overlap = chunk_overlap
    
    def split_text(self, text: str) -> List[str]:
        # Split on Chinese punctuation: first into sentences, then pack sentences into chunks
        
        chunks = []
        current_chunk = ""
        
        # Split into sentences
        sentences = self._split_into_sentences(text)
        
        for sentence in sentences:
            if len(current_chunk) + len(sentence) <= self.chunk_size:
                current_chunk += sentence
            else:
                if current_chunk:
                    chunks.append(current_chunk)
                current_chunk = sentence
        
        if current_chunk:
            chunks.append(current_chunk)
        
        return chunks
    
    def _split_into_sentences(self, text: str) -> List[str]:
        """将中文文本分割成句子"""
        import re
        # Sentence boundary rule based on Chinese punctuation
        pattern = r'([。!?;...]+\s*)'
        parts = re.split(pattern, text)
        
        sentences = []
        # Walk (text, separator) pairs; the final piece may lack a trailing separator
        for i in range(0, len(parts), 2):
            sentence = parts[i] + (parts[i+1] if i+1 < len(parts) else '')
            if sentence.strip():
                sentences.append(sentence)
        
        return sentences
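
A quick usage example on a short Chinese passage:

python
# Usage example
splitter = ChineseTextSplitter(chunk_size=30, chunk_overlap=0)
sample = "检索增强生成结合了检索与生成。它先从知识库中检索相关内容!然后交给大模型生成答案。"
for i, piece in enumerate(splitter.split_text(sample)):
    print(i, piece)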

5. In Practice: Building a Complete RAG Indexing System

python
class CompleteRAGIndexer:
    """完整的RAG索引构建系统"""
    
    def __init__(self, 
                 embedding_model=None,
                 vector_store=None):
        
        self.pipeline = DocumentProcessingPipeline()
        self.parallel_processor = ParallelDocumentProcessor()
        self.updater = IncrementalIndexUpdater("./vector_store")
        
        self.embedding_model = embedding_model
        self.vector_store = vector_store
    
    def build_index(self, 
                   knowledge_base_path: str,
                   incremental: bool = True) -> Dict[str, Any]:
        """构建知识库索引"""
        
        print("开始构建RAG索引...")
        
        # 1. Work out which files need processing
        if incremental:
            files_to_process = self.updater.get_changed_files(
                knowledge_base_path,
                extensions=['.pdf', '.docx', '.md', '.txt', '.html']
            )
            print(f"增量更新: 发现 {len(files_to_process)} 个变更文件")
        else:
            # Collect all files
            all_files = []
            for root, _, files in os.walk(knowledge_base_path):
                for file in files:
                    if file.lower().endswith(('.pdf', '.docx', '.md', '.txt', '.html')):
                        all_files.append(os.path.join(root, file))
            files_to_process = all_files
        
        if not files_to_process:
            print("没有需要处理的文件")
            return {'status': 'no_changes', 'chunks': []}
        
        # 2. Process documents in parallel
        processing_results = self.parallel_processor.process_batch(files_to_process)
        
        # 3. Collect all text chunks
        all_chunks = []
        for result in processing_results:
            all_chunks.extend(result['chunks'])
        
        print(f"总共生成 {len(all_chunks)} 个文本块")
        
        # 4. Generate vector embeddings
        if self.embedding_model and self.vector_store:
            print("开始生成向量嵌入...")
            
            texts = [chunk.page_content for chunk in all_chunks]
            metadatas = [chunk.metadata for chunk in all_chunks]
            
            # Store in the vector database. LangChain's Chroma computes the
            # embeddings itself through the embedding_function it was created
            # with, so add_texts() is sufficient here; stores that accept
            # precomputed vectors expose their own insertion methods instead.
            self.vector_store.add_texts(
                texts=texts,
                metadatas=metadatas
            )
            
            print("向量索引构建完成")
        
        # 5. Persist the processing record
        self.updater._save_processed_files()
        
        return {
            'status': 'success',
            'total_chunks': len(all_chunks),
            'processed_files': len(files_to_process),
            'chunks': all_chunks
        }

# Usage example
if __name__ == "__main__":
    # Initialize components
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import Chroma
    
    embedding_model = OpenAIEmbeddings()
    vector_store = Chroma(
        persist_directory="./chroma_db",
        embedding_function=embedding_model
    )
    
    # Build the index
    indexer = CompleteRAGIndexer(
        embedding_model=embedding_model,
        vector_store=vector_store
    )
    
    # Initial full build
    result = indexer.build_index(
        knowledge_base_path="./knowledge_base",
        incremental=False
    )
    
    # Subsequent incremental updates
    result = indexer.build_index(
        knowledge_base_path="./knowledge_base",
        incremental=True
    )

6. Summary and Outlook

Document parsing is the cornerstone of a RAG system. A good parsing system should provide:

    1. Multi-format support: handle all common document formats
    2. Smart chunking: understand document structure and preserve semantic integrity
    3. Rich metadata: extract useful information to support retrieval
    4. Efficient processing: support parallel execution and incremental updates
    5. Robustness: cope gracefully with malformed or unusual inputs

As large language model technology advances, document parsing will become increasingly intelligent, for example:

  • Using vision models to parse complex layouts
  • Using LLMs to understand a document's semantic structure
  • Automatically detecting document type and domain
  • Intelligently extracting tables, charts, and other non-text content

I hope this article helps you build your own RAG system. Remember: a good start is half the battle, and a carefully designed document parsing pipeline will make your RAG system far more effective.
