LangChain Learning Notes
Overview
Facebook AI Similarity Search (FAISS) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that can search sets of vectors of any size, including ones that may not fit in RAM, and it also includes supporting code for evaluation and parameter tuning.
This article shows how to use the functionality related to the FAISS vector store.
Prerequisites
python
pip install faiss-gpu # For CUDA 7.5+ supported GPUs.
# OR
pip install faiss-cpu # For CPU Installation
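If you want to confirm that the install worked, a quick check (assuming the wheel exposes a version string, as current faiss-cpu and faiss-gpu packages do):
python
import faiss

# Print the installed FAISS version to confirm the package imports correctly.
print(faiss.__version__)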
Contents
We want to use OpenAIEmbeddings, so we have to obtain an OpenAI API key.
python
import os
import getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
# Uncomment the following line if you need to initialize FAISS with no AVX2 optimization
# os.environ['FAISS_NO_AVX2'] = '1'
python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
Related API references:
OpenAIEmbeddings from langchain.embeddings.openai
CharacterTextSplitter from langchain.text_splitter
FAISS from langchain.vectorstores
TextLoader from langchain.document_loaders
python
from langchain.document_loaders import TextLoader
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
Related API reference:
TextLoader from langchain.document_loaders
python
db = FAISS.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
python
print(docs[0].page_content)
Output:
python
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you're at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I'd like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer---an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation's top legal minds, who will continue Justice Breyer's legacy of excellence.
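By default, similarity_search returns the top 4 matches. You can ask for a different number with the k parameter; a small illustrative call, reusing db and query from above:
python
# Return only the two most similar chunks instead of the default four.
top_two = db.similarity_search(query, k=2)
for doc in top_two:
    print(doc.page_content[:80])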
Similarity search with score
There are some FAISS-specific methods. One of them is similarity_search_with_score, which allows you to return not only the documents but also the distance score of the query to them. The returned distance score is the L2 distance, so a lower score is better.
python
docs_and_scores = db.similarity_search_with_score(query)
docs_and_scores[0]
Output:
python
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you're at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I'd like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer---an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation's top legal minds, who will continue Justice Breyer's legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}),
0.36913747)
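If you want to sanity-check the score, you can recompute the distance between the query embedding and the document embedding yourself. A minimal sketch, assuming numpy is installed; note that re-embedding the text may not reproduce the stored vector exactly, so the two values will only match approximately:
python
import numpy as np

# Recompute the (squared) L2 distance between the query embedding and the
# top document's embedding, then compare it with the returned score.
top_doc, top_score = db.similarity_search_with_score(query)[0]
query_vec = np.array(embeddings.embed_query(query))
doc_vec = np.array(embeddings.embed_query(top_doc.page_content))
print(float(np.sum((query_vec - doc_vec) ** 2)), top_score)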
It is also possible to search for documents similar to a given embedding vector with similarity_search_by_vector, which accepts an embedding vector as a parameter instead of a string.
python
# Embed the query text into a vector
embedding_vector = embeddings.embed_query(query)
# Search with the embedding vector as the argument; this returns documents only, without scores
docs = db.similarity_search_by_vector(embedding_vector)
Saving and loading
You can also save and load a FAISS index. This is useful so you don't have to recreate it every time you want to use it.
python
db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)
docs = new_db.similarity_search(query)
docs[0]
Output:
python
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you're at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I'd like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer---an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation's top legal minds, who will continue Justice Breyer's legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
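A quick, purely illustrative way to confirm the round trip: the reloaded index should return the same top document as the original one.
python
# The original index and the reloaded index should agree on the top hit.
assert db.similarity_search(query)[0].page_content == new_db.similarity_search(query)[0].page_content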
Merging
You can also merge two FAISS vector stores.
python
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
# Inspect the first store's docstore
db1.docstore._dict
Output:
python
{'068c473b-d420-487a-806b-fb0ccea7f711': Document(page_content='foo', metadata={})}
python
# Inspect the second store's docstore
db2.docstore._dict
Output:
python
{'807e0c63-13f6-4070-9774-5c6f0fbb9866': Document(page_content='bar', metadata={})}
python
# Merge db2 into db1
db1.merge_from(db2)
# Inspect the merged docstore
db1.docstore._dict
Output:
python
{'068c473b-d420-487a-806b-fb0ccea7f711': Document(page_content='foo', metadata={}),
'807e0c63-13f6-4070-9774-5c6f0fbb9866': Document(page_content='bar', metadata={})}
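After merge_from, searching db1 also covers the documents that originally came from db2; a small illustrative check:
python
# "bar" was added to db2, but after the merge it is searchable through db1.
print(db1.similarity_search("bar", k=1)[0].page_content)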
Similarity search with filtering
The FAISS vectorstore can also support filtering. Since FAISS does not natively support it, the filtering has to be done manually: more than k results are fetched first, and then they are filtered. You can filter documents based on metadata. You can also set the fetch_k parameter when calling any search method to set how many documents you want to fetch before filtering. Here is a small example:
python
from langchain.schema import Document
# Build some sample documents to test with
list_of_documents = [
Document(page_content="foo", metadata=dict(page=1)),
Document(page_content="bar", metadata=dict(page=1)),
Document(page_content="foo", metadata=dict(page=2)),
Document(page_content="barbar", metadata=dict(page=2)),
Document(page_content="foo", metadata=dict(page=3)),
Document(page_content="bar burr", metadata=dict(page=3)),
Document(page_content="foo", metadata=dict(page=4)),
Document(page_content="bar bruh", metadata=dict(page=4)),
]
# Build the vector store
db = FAISS.from_documents(list_of_documents, embeddings)
# Run a plain (unfiltered) search for later comparison
results_with_scores = db.similarity_search_with_score("foo")
# Print content, metadata, and score for each hit
for doc, score in results_with_scores:
print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}")
Related API reference: Document from langchain.schema
Output:
python
Content: foo, Metadata: {'page': 1}, Score: 5.159960813797904e-15
Content: foo, Metadata: {'page': 2}, Score: 5.159960813797904e-15
Content: foo, Metadata: {'page': 3}, Score: 5.159960813797904e-15
Content: foo, Metadata: {'page': 4}, Score: 5.159960813797904e-15
Now we make the same query call, but we filter for only page = 1:
python
# Use filtering: the filter argument keeps only documents whose metadata has page == 1
results_with_scores = db.similarity_search_with_score("foo", filter=dict(page=1))
for doc, score in results_with_scores:
print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}")
Output:
python
Content: foo, Metadata: {'page': 1}, Score: 5.159960813797904e-15
Content: bar, Metadata: {'page': 1}, Score: 0.3131446838378906
The same thing can also be done with max_marginal_relevance_search.
python
# Max marginal relevance (MMR) search with the same metadata filter
results = db.max_marginal_relevance_search("foo", filter=dict(page=1))
for doc in results:
print(f"Content: {doc.page_content}, Metadata: {doc.metadata}")
Output:
python
# Compared with the results above, no score is returned
Content: foo, Metadata: {'page': 1}
Content: bar, Metadata: {'page': 1}
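max_marginal_relevance_search also accepts k and fetch_k, so the same over-fetch-then-filter idea applies there; an illustrative call using the documents above:
python
# Fetch 10 candidates before filtering and MMR re-ranking, then keep the top 2.
results = db.max_marginal_relevance_search("foo", filter=dict(page=1), k=2, fetch_k=10)
for doc in results:
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}")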
Here is an example of how to set the fetch_k parameter when calling similarity_search. Usually you want fetch_k to be much larger than k, because fetch_k is the number of documents that will be fetched before filtering. If you set fetch_k to a small number, you might not get enough documents to filter from.
python
# k sets the number of documents returned after filtering; fetch_k sets how many are fetched before filtering
results = db.similarity_search("foo", filter=dict(page=1), k=1, fetch_k=4)
for doc in results:
print(f"Content: {doc.page_content}, Metadata: {doc.metadata}")
Output:
python
Content: foo, Metadata: {'page': 1}
Summary
This article covers how to use FAISS with LangChain.
The basic workflow:
- Load the documents and split them into chunks
- Build a vector store from the embeddings: db = FAISS.from_documents(docs, embeddings)
- On top of that, run similarity searches, filtered searches, and other operations (a consolidated sketch follows below)
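Putting it together, here is a minimal end-to-end sketch of the workflow above (same imports and file path as earlier; adjust the path and query to your own data):
python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader

# Load and split the source document, then build a FAISS store from embeddings.
documents = TextLoader("../../../state_of_the_union.txt").load()
docs = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)
db = FAISS.from_documents(docs, OpenAIEmbeddings())

# Similarity search (optionally with metadata filtering, scores, or MMR).
query = "What did the president say about Ketanji Brown Jackson"
print(db.similarity_search(query)[0].page_content)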
Reference:
https://python.langchain.com/docs/integrations/vectorstores/faiss