Author: Andre Luiz, from Elastic
A step-by-step guide to ingesting data and searching it using RAG and LlamaIndex.
In this article, we will use LlamaIndex to index data and build an FAQ search engine. Elasticsearch will act as our vector database, enabling vector search, while RAG (Retrieval-Augmented Generation) will enrich the context to provide more accurate responses.

What is LlamaIndex?
LlamaIndex is a framework that helps build agents and workflows powered by large language models (LLMs) to interact with specific or private data. It makes it possible to integrate data from a variety of sources (APIs, PDFs, databases) with LLMs, enabling tasks such as research, information extraction, and generating contextualized responses.
Key concepts:
- Agents: intelligent assistants that use LLMs to perform tasks, from simple responses to complex actions.
- Workflows: multi-step processes that combine agents, data connectors, and tools to carry out advanced tasks.
- Context augmentation: a technique that enriches an LLM with external data, overcoming the limitations of its training.
Integrating LlamaIndex with Elasticsearch:
Elasticsearch can be used with LlamaIndex in several ways:
- Data source: use the Elasticsearch Reader to extract documents (see the sketch after this list).
- Embedding model: encode data into vectors for semantic search.
- Vector store: use Elasticsearch as a repository for storing and searching vectorized documents.
- Advanced stores: configure structures such as document summaries or knowledge graphs.
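For example, the "data source" option corresponds to the Elasticsearch Reader. A minimal sketch of what that looks like, assuming the llama-index-readers-elasticsearch integration is installed and an index already exists (the index and field names below are placeholders):

from llama_index.readers.elasticsearch import ElasticsearchReader

# Read documents out of an existing Elasticsearch index so they can feed an LLM workflow.
reader = ElasticsearchReader(endpoint="http://localhost:9200", index="my-index")
documents = reader.load_data(field="content")  # the field whose text becomes the Document body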
Building an FAQ search with LlamaIndex and Elasticsearch
Data preparation
We will use the Elasticsearch Service FAQ as our example. Each question was extracted from the website and saved in its own text file. You can organize the data however you like; in this example, we chose to keep the files locally.
Example file:
File Name: what-is-elasticsearch-service.txt
Content: Elasticsearch Service is hosted and managed Elasticsearch and Kibana brought to you by the creators of Elasticsearch. Elasticsearch Service is part of Elastic Cloud and ships with features that you can only get from the company behind Elasticsearch, Kibana, Beats, and Logstash. Elasticsearch is a full text search engine that suits a range of uses, from search on websites to big data analytics and more.
Note: in the example above, the file name is what-is-elasticsearch-service.txt, and the content of the file is "Elasticsearch Service is ....".
Once all the questions are saved, the directory looks like this:

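If you prefer to script this preparation step, here is a minimal sketch using only the Python standard library (the directory name matches the ./faq folder used later; the answer text is abbreviated):

from pathlib import Path

faq_dir = Path("./faq")
faq_dir.mkdir(exist_ok=True)

# One FAQ entry per text file; the file name encodes the question.
(faq_dir / "what-is-elasticsearch-service.txt").write_text(
    "Elasticsearch Service is hosted and managed Elasticsearch and Kibana "
    "brought to you by the creators of Elasticsearch. ..."  # full answer text goes here
)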
Installing dependencies
We will implement the ingestion and search in Python; the version used here is 3.9. As prerequisites, the following dependencies need to be installed:
llama-index-vector-stores-elasticsearch
llama-index
openai
Elasticsearch and Kibana will be created with Docker, configured through a docker-compose.yml to run version 8.16.2. This makes it easy to spin up a local environment.
docker-compose.yml
version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.16.2
    container_name: elasticsearch-8.16.2
    environment:
      - node.name=elasticsearch
      - xpack.security.enabled=false
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ports:
      - 9200:9200
    networks:
      - shared_network

  kibana:
    image: docker.elastic.co/kibana/kibana:8.16.2
    container_name: kibana-8.16.2
    restart: always
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    networks:
      - shared_network

networks:
  shared_network:
Note: you can also follow the article "Run Elasticsearch locally using the start-local script" for a local deployment.
Document ingestion
The documents will be indexed into Elasticsearch using LlamaIndex. First, we load the files with SimpleDirectoryReader, which reads files from a local directory. Once the documents are loaded, we index them with VectorStoreIndex.
documents = SimpleDirectoryReader("./faq").load_data()
storage_context = StorageContext.from_defaults(vector_store=es)
index = VectorStoreIndex(documents, storage_context=storage_context, embed_model=embed_model)
The vector store in LlamaIndex is responsible for storing and managing document embeddings. LlamaIndex supports several types of vector stores; in this case, we use Elasticsearch. In the StorageContext we configure the Elasticsearch instance. Since this is a local setup, no extra parameters are needed. For other environments, check the documentation for the required parameters: ElasticsearchStore configuration.
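For reference, when connecting to a non-local deployment such as Elastic Cloud, ElasticsearchStore accepts credentials instead of a plain URL. A hedged sketch (the Cloud ID and API key values are placeholders; see the ElasticsearchStore documentation for the full parameter list):

from llama_index.vector_stores.elasticsearch import ElasticsearchStore

es = ElasticsearchStore(
    index_name="faq",
    es_cloud_id="<your-cloud-id>",  # placeholder
    es_api_key="<your-api-key>",    # placeholder
)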
By default, LlamaIndex uses the OpenAI text-embedding-ada-002 model to generate embeddings. In this example, however, we use the text-embedding-3-small model. Note that an OpenAI API key is required to use this model.
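As a side note, not part of the original walkthrough: the embedding model can also be set once globally through LlamaIndex's Settings object, instead of passing embed_model to each index:

from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding

# Any index or query built afterwards picks this model up by default.
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")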
The complete ingestion code is shown below.
import os

import openai
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.vector_stores.elasticsearch import ElasticsearchStore

openai.api_key = os.environ["OPENAI_API_KEY"]

# Elasticsearch as the vector store backing the "faq" index
es = ElasticsearchStore(
    index_name="faq",
    es_url="http://localhost:9200"
)

def format_title(filename):
    # "what-is-elasticsearch-service.txt" -> "What Is Elasticsearch Service"
    filename_without_ext = filename.replace('.txt', '')
    text_with_spaces = filename_without_ext.replace('-', ' ')
    formatted_text = text_with_spaces.title()
    return formatted_text

embed_model = OpenAIEmbedding(model="text-embedding-3-small")

# Load the FAQ files and derive a human-readable title from each file name
documents = SimpleDirectoryReader("./faq").load_data()
for doc in documents:
    doc.metadata['title'] = format_title(doc.metadata['file_name'])

# Index the documents into Elasticsearch, embedding them with the chosen model
storage_context = StorageContext.from_defaults(vector_store=es)
index = VectorStoreIndex(documents, storage_context=storage_context, embed_model=embed_model)
After running the script, the documents are indexed into the faq index, as shown below:

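To confirm the ingestion from code rather than from Kibana, a quick sanity check with the official Elasticsearch Python client works as well (the client is pulled in as a dependency of the vector store integration; this check is an addition, not part of the original walkthrough):

from elasticsearch import Elasticsearch

# Count how many FAQ chunks landed in the index.
client = Elasticsearch("http://localhost:9200")
print(client.count(index="faq"))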
Searching with RAG
To run the search, we configure the ElasticsearchStore client, setting the index_name and es_url fields with the Elasticsearch URL. In retrieval_strategy we define AsyncDenseVectorStrategy for vector search. Other strategies are also available, such as AsyncBM25Strategy (keyword search) and AsyncSparseVectorStrategy (sparse vectors). See the official documentation for more details.
es = ElasticsearchStore(
    index_name="faq",
    es_url="http://localhost:9200",
    retrieval_strategy=AsyncDenseVectorStrategy()
)
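Switching to one of the other strategies only changes the retrieval_strategy argument. For example, a hedged sketch of keyword (BM25) search, assuming AsyncBM25Strategy is exported by the same module as AsyncDenseVectorStrategy:

from llama_index.vector_stores.elasticsearch import ElasticsearchStore, AsyncBM25Strategy

bm25_store = ElasticsearchStore(
    index_name="faq",
    es_url="http://localhost:9200",
    retrieval_strategy=AsyncBM25Strategy(),  # keyword search, no query-time embeddings
)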
Next, we create a VectorStoreIndex object, configuring vector_store with the ElasticsearchStore object. Using the as_retriever method, we search for the documents most relevant to the query, setting the number of results returned to 5 through the similarity_top_k parameter.
index = VectorStoreIndex.from_vector_store(vector_store=es)
retriever = index.as_retriever(similarity_top_k=5)
results = retriever.retrieve(query)
The next step is RAG. The results of the vector search are incorporated into a formatted prompt for the LLM, enabling a contextualized response based on the retrieved information.
In the PromptTemplate we define the prompt format, which includes:
- Context ({context_str}): the documents retrieved by the retriever.
- Query ({query_str}): the user's question.
- Instructions: directions telling the model to answer based on the context, without relying on external knowledge.
Finally, the LLM processes the prompt and returns a precise, contextualized response.
llm = OpenAI(model="gpt-4o")

context_str = "\n\n".join([n.node.get_content() for n in results])
response = llm.complete(
    qa_prompt.format(context_str=context_str, query_str=query)
)

print("Answer:")
print(response)
The complete code is shown below:
from llama_index.core import VectorStoreIndex, PromptTemplate, QueryBundle
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.vector_stores.elasticsearch import ElasticsearchStore, AsyncDenseVectorStrategy

# Elasticsearch vector store configured for dense vector (semantic) search
es = ElasticsearchStore(
    index_name="faq",
    es_url="http://localhost:9200",
    retrieval_strategy=AsyncDenseVectorStrategy()
)

qa_prompt = PromptTemplate(
    "You are a helpful and knowledgeable assistant."
    "Your task is to answer the user's query based solely on the context provided below."
    "Do not use any prior knowledge or external information.\n"
    "---------------------\n"
    "Context:\n"
    "{context_str}\n"
    "---------------------\n"
    "Query: {query_str}\n"
    "Instructions:\n"
    "1. Carefully read and understand the context provided.\n"
    "2. If the context contains enough information to answer the query, provide a clear and concise answer.\n"
    "3. Do not make up or guess any information.\n"
    "Answer: "
)

llm = OpenAI(model="gpt-4o")

def print_results(results):
    for rank, result in enumerate(results, start=1):
        title = result.metadata.get("title")
        score = result.get_score()
        text = result.get_text()
        print(f"{rank}. title={title} \nscore={score} \ncontent={text}")

def search(query: str):
    # Retrieve the documents most similar to the query, embedding it
    # with the same model used at ingestion time
    index = VectorStoreIndex.from_vector_store(
        vector_store=es,
        embed_model=OpenAIEmbedding(model="text-embedding-3-small"),
    )
    retriever = index.as_retriever(similarity_top_k=10)
    results = retriever.retrieve(QueryBundle(query_str=query))
    print_results(results)

    # Build the RAG prompt from the retrieved nodes and ask the LLM
    context_str = "\n\n".join([n.node.get_content() for n in results])
    response = llm.complete(
        qa_prompt.format(context_str=context_str, query_str=query)
    )
    print("Answer:")
    print(response)

question = "Elastic services are free?"
print(f"Question: {question}")
search(question)
We can now run a search such as "Elastic services are free?" and get a contextualized answer based on the FAQ data itself.
Question: Elastic services are free?
Answer:
Elastic services are not entirely free. However, there is a 14-day free trial available for exploring Elastic solutions. After the trial, access to features and services depends on the subscription level.
The following documents were used to generate this response:
1. title=Can I Try Elasticsearch Service For Free
score=1.0
content=Yes, sign up for a 14-day free trial. The trial starts the moment a cluster is created.
During the free trial period get access to a deployment to explore Elastic solutions for Enterprise Search, Observability, Security, or the latest version of the Elastic Stack.
2. title=Do You Offer Elastic S Commercial Products
score=0.9941274512218439
content=Yes, all Elasticsearch Service customers have access to basic authentication, role-based access control, and monitoring.
Elasticsearch Service Gold, Platinum and Enterprise customers get complete access to all the capabilities in X-Pack: Security, Alerting, Monitoring, Reporting, Graph Analysis & Visualization. Contact us to learn more.
3. title=What Is Elasticsearch Service
score=0.9896776845746571
content=Elasticsearch Service is hosted and managed Elasticsearch and Kibana brought to you by the creators of Elasticsearch. Elasticsearch Service is part of Elastic Cloud and ships with features that you can only get from the company behind Elasticsearch, Kibana, Beats, and Logstash. Elasticsearch is a full text search engine that suits a range of uses, from search on websites to big data analytics and more.
4. title=Can I Run The Full Elastic Stack In Elasticsearch Service
score=0.9880631561979476
content=Many of the products that are part of the Elastic Stack are readily available in Elasticsearch Service, including Elasticsearch, Kibana, plugins, and features such as monitoring and security. Use other Elastic Stack products directly with Elasticsearch Service. For example, both Logstash and Beats can send their data to Elasticsearch Service. What is run is determined by the subscription level.
5. title=What Is The Difference Between Elasticsearch Service And The Amazon Elasticsearch Service
score=0.9835054890793161
content=Elasticsearch Service is the only hosted and managed Elasticsearch service built, managed, and supported by the company behind Elasticsearch, Kibana, Beats, and Logstash. With Elasticsearch Service, you always get the latest versions of the software. Our service is built on best practices and years of experience hosting and managing thousands of Elasticsearch clusters in the Cloud and on premise. For more information, check the following Amazon and Elastic Elasticsearch Service comparison page.
Please note that there is no formal partnership between Elastic and Amazon Web Services (AWS), and Elastic does not provide any support on the AWS Elasticsearch Service.
Conclusion
Using LlamaIndex, we demonstrated how to build an efficient FAQ search system with Elasticsearch as the vector database. The documents were ingested and indexed with embeddings, enabling vector search. Through the PromptTemplate, the search results were incorporated into the context and sent to the LLM, which generated precise, contextualized responses based on the retrieved documents.
This workflow combines information retrieval with contextualized response generation to deliver accurate and relevant results.
References
- https://www.elastic.co/guide/en/cloud/current/ec-faq-getting-started.html
- https://docs.llamaindex.ai/en/stable/api_reference/readers/elasticsearch/
- https://docs.llamaindex.ai/en/stable/module_guides/indexing/vector_store_index/
- https://docs.llamaindex.ai/en/stable/examples/query_engine/custom_query_engine/
- https://www.elastic.co/search-labs/integrations/llama-index
Original article: How to ingest data to Elasticsearch through LlamaIndex - Elasticsearch Labs