Retrieval-Augmented Generation (RAG) with LangChain, OpenAI, and FAISS

Reference: RAG with LangChain --- BGE documentation

Install dependencies

bash
pip install langchain_community langchain_openai langchain_huggingface faiss-cpu pymupdf

Register for an OpenAI API key

API keys - OpenAI API: https://platform.openai.com/api-keys
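
The demo below hard-codes the key in the script, which is fine for a quick local test. For anything you share or commit, it is safer to read the key from the environment instead; here is a minimal sketch using only the standard library (the interactive prompt fallback is an extra suggestion, not part of the original demo):

python
import os
from getpass import getpass

# Prefer a key already exported in the shell (export OPENAI_API_KEY=...);
# fall back to an interactive prompt so the key never lands in the source file.
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")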

Full code with comments

LangChainDemo.py

python
# For openai key
import os
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

# 1. Initialize the OpenAI chat model
from langchain_openai.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4o-mini")

# Test a plain OpenAI call (no retrieval yet)
response = llm.invoke("What does M3-Embedding stand for?")
print(response.content)

# 2. Load the PDF document
from langchain_community.document_loaders import PyPDFLoader

# Or download the paper and put a path to the local file instead
loader = PyPDFLoader("https://arxiv.org/pdf/2402.03216")
docs = loader.load()
print(docs[0].metadata)

# 3. Split the text into chunks
from langchain.text_splitter import RecursiveCharacterTextSplitter

# initialize a splitter
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # Maximum size of chunks to return
    chunk_overlap=150,  # number of overlap characters between chunks
)

# use the splitter to split our paper
corpus = splitter.split_documents(docs)
print("分割后文档片段数:", len(corpus))

# 4. Initialize the embedding model
from langchain_huggingface.embeddings import HuggingFaceEmbeddings

embedding_model = HuggingFaceEmbeddings(
    model_name="BAAI/bge-base-en-v1.5",
    encode_kwargs={"normalize_embeddings": True},
)
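
# (Optional) sanity-check the embedding model before indexing. This is an
# illustrative extra step, not part of the referenced BGE demo:
# bge-base-en-v1.5 returns 768-dimensional vectors; uncomment to verify.
# sample_vec = embedding_model.embed_query("hello world")
# print("Embedding dimension:", len(sample_vec))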

# 5. Build the FAISS vector store
from langchain_community.vectorstores import FAISS

vectordb = FAISS.from_documents(corpus, embedding_model)

# (optional) save the vector store to a local directory (make sure it is writable)
if not os.path.exists("vectorstore.db"):
    vectordb.save_local("vectorstore.db")
print("向量数据库已保存")

# 6. Create the retrieval chain
from langchain_core.prompts import ChatPromptTemplate

template = """
You are a Q&A chat bot.
Use the given context only, answer the question.

<context>
{context}
</context>

Question: {input}
"""

# Create a prompt template
prompt = ChatPromptTemplate.from_template(template)

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

doc_chain = create_stuff_documents_chain(llm, prompt)
# Create a retriever over the vector store
retriever = vectordb.as_retriever(search_kwargs={"k": 3})  # number of chunks to retrieve
chain = create_retrieval_chain(retriever, doc_chain)

# 7. Run a query through the retrieval chain
response = chain.invoke({"input": "What does M3-Embedding stand for?"})

# print the answer only
print("\n答案:", response['answer'])

Run

bash
python LangChainDemo.py

Output

text
M3-Embedding refers to "Multimodal, Multi-Task, and Multi-Lingual" embedding techniques that integrate information from multiple modalities (such as text, images, and audio), support multiple tasks (like classification, generation, or translation), and can operate across multiple languages. This approach helps in building versatile models capable of understanding and generating information across various contexts and formats.

If you are looking for a specific context or application of M3-Embedding, please provide more details!
{'producer': 'pdfTeX-1.40.25', 'creator': 'LaTeX with hyperref', 'creationdate': '2024-07-01T00:26:51+00:00', 'author': '', 'keywords': '', 'moddate': '2024-07-01T00:26:51+00:00', 'ptex.fullbanner': 'This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5', 'subject': '', 'title': '', 'trapped': '/False', 'source': 'https://arxiv.org/pdf/2402.03216', 'total_pages': 18, 'page': 0, 'page_label': '1'}
Number of chunks after splitting: 87
Vector store saved

Answer: M3-Embedding stands for Multi-Linguality, Multi-Functionality, and Multi-Granularity.
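
Because the index was saved with save_local, a later run can skip the PDF download and re-embedding and load the index directly. A minimal sketch, assuming the same vectorstore.db directory and embedding model as above; the allow_dangerous_deserialization flag is required by recent langchain_community versions because part of the saved index is pickled, so only load files you created yourself:

python
from langchain_community.vectorstores import FAISS
from langchain_huggingface.embeddings import HuggingFaceEmbeddings

# Must be the same embedding model that built the index
embedding_model = HuggingFaceEmbeddings(
    model_name="BAAI/bge-base-en-v1.5",
    encode_kwargs={"normalize_embeddings": True},
)

# Reload the FAISS index written by vectordb.save_local("vectorstore.db")
vectordb = FAISS.load_local(
    "vectorstore.db",
    embedding_model,
    allow_dangerous_deserialization=True,
)
retriever = vectordb.as_retriever(search_kwargs={"k": 3})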