By: Han Xiang Choong (Elastic)
A discussion and implementation of an agentic flow for Elastic RAG, in which the LLM chooses whether to call the Elastic knowledge base.
Further reading: Elasticsearch: LangChain-based Elasticsearch agent for document search.
Introduction
Agents are the logical next step in applying LLMs to real-world use cases. This article introduces the concept of agents and how to use them in a RAG workflow. All in all, agents are an extremely exciting area, with the potential for many ambitious applications and use cases.
I hope to cover more of those ideas in future articles. For now, let's see how to implement Agentic RAG, using Elasticsearch as our knowledge base and LangChain as our agent framework.
Background
The use of LLMs began with simply prompting them to perform tasks such as answering questions and doing simple calculations.
However, gaps in an existing model's knowledge meant that LLMs could not be applied to domains requiring specialized expertise, such as enterprise customer service and business intelligence.
Prompting soon gave way to retrieval-augmented generation (RAG), a natural strength of Elasticsearch. RAG is an effective and simple way to supply the LLM with context and factual information at query time. The alternative is a long and expensive retraining process whose success is far from guaranteed.
The main operational advantage of RAG is that it allows near-real-time information to be fed to an LLM application.
Implementation involves provisioning a vector database such as Elasticsearch, deploying an embedding model such as ELSER, and calling the search API to retrieve relevant documents.
Once retrieved, the documents can be inserted into the LLM's prompt, and answers are generated based on their content. This provides the context and factuality that the LLM itself may lack.
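To make this concrete, here is a minimal sketch of the pattern: retrieve a few documents, then paste them into the prompt. The endpoint, index name, and field names below are placeholders for illustration, not the setup used later in this article.
python
from elasticsearch import Elasticsearch

# Placeholder deployment details -- substitute your own
es_client = Elasticsearch("https://my-deployment.es.io:443", api_key="...")

question = "What did the federal government do about the 2020 wildfires?"

# Retrieve a handful of relevant documents (a plain match query here;
# the right query depends on how your index is mapped and embedded)
response = es_client.search(
    index="my-knowledge-base",
    query={"match": {"content": question}},
    size=3,
)
context = "\n\n".join(hit["_source"]["content"] for hit in response["hits"]["hits"])

# Insert the retrieved context into the prompt before calling the LLM
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
# 'prompt' is now ready to be passed to the LLM of your choice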
The difference between calling an LLM directly, using RAG, and using an agent
However, the standard RAG deployment model has a drawback: it is rigid. The LLM cannot choose which knowledge base to pull information from. It also cannot choose to use other tools, such as web search engine APIs like Google or Bing. It cannot check the current weather, cannot use a calculator, and cannot take into account the output of any tool other than the knowledge base it was given.
What sets the agentic model apart is choice.
A note on terminology
Tool usage, the term used in the LangChain context, is also known as function calling. For all intents and purposes the two are interchangeable: both refer to an LLM being given a set of functions or tools that it can use to supplement its capabilities or act on the world. Bear with me, as I use "tool usage" throughout the rest of this article.
Choice
Give the LLM the ability to make decisions, and hand it a set of tools. Based on the state and history of the conversation, the LLM will choose whether to use each tool, and will incorporate the tool's output into its responses.
These tools might be knowledge bases, calculators, web search engines, and crawlers; the variety is endless, with no fixed limit. The LLM becomes capable of performing complex operations and tasks, rather than just generating text.
An example agentic flow for researching a specific topic
Let's implement a simple agent example. Elastic's core strength is our knowledge bases, so this example will focus on using a relatively large and complex knowledge base, by crafting queries that go beyond a simple vector search.
Setup
First, define a .env file in your project directory and fill in these fields. I'm using an Azure OpenAI deployment with GPT-4o for my LLM and an Elastic Cloud deployment for my knowledge base. My Python version is 3.12.4, and I'm working on a MacBook.
ini
ELASTIC_ENDPOINT="YOUR ELASTIC ENDPOINT"
ELASTIC_API_KEY="YOUR ELASTIC API KEY"

OPENAI_API_TYPE="azure"
AZURE_OPENAI_ENDPOINT="YOUR AZURE ENDPOINT"
AZURE_OPENAI_API_VERSION="2024-06-01"
AZURE_OPENAI_API_KEY="YOUR AZURE API KEY"
AZURE_OPENAI_GPT4O_MODEL_NAME="YOUR MODEL NAME"
AZURE_OPENAI_GPT4O_DEPLOYMENT_NAME="YOUR DEPLOYMENT NAME"
You may have to install the following dependencies in your terminal.
pip install langchain elasticsearch python-dotenv
Create a Python file named chat.py in your project directory, and paste in this snippet to initialize your LLM and your connection to Elastic Cloud:
python
import os
from dotenv import load_dotenv
load_dotenv()

from langchain.chat_models import AzureChatOpenAI
from langchain.agents import initialize_agent, AgentType, Tool
from langchain.tools import StructuredTool  # Import StructuredTool
from langchain.memory import ConversationBufferMemory
from typing import Optional
from pydantic import BaseModel, Field

# LLM setup
llm = AzureChatOpenAI(
    openai_api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
    azure_deployment=os.getenv("AZURE_OPENAI_GPT4O_DEPLOYMENT_NAME"),
    temperature=0.5,
    max_tokens=4096
)

from elasticsearch import Elasticsearch

# Elasticsearch Setup
try:
    # Elasticsearch setup
    es_endpoint = os.environ.get("ELASTIC_ENDPOINT")
    es_client = Elasticsearch(
        es_endpoint,
        api_key=os.environ.get("ELASTIC_API_KEY")
    )
except Exception as e:
    es_client = None
Hello World! Our first tool
With our LLM and Elastic client initialized and defined, let's do an Elastic flavor of Hello World. We'll define a function that checks the status of the connection to Elastic Cloud, and a simple agent conversation chain that calls it.
Define the following function as a LangChain Tool. The name and description are a critical part of the prompt engineering: the LLM relies on them to decide whether to use the tool during the conversation.
python
# Define a function to check ES status
def es_ping(*args, **kwargs):
    if es_client is None:
        return "ES client is not initialized."
    else:
        try:
            if es_client.ping():
                return "ES ping returning True, ES is connected."
            else:
                return "ES is not connected."
        except Exception as e:
            return f"Error pinging ES: {e}"

es_status_tool = Tool(
    name="ES Status",
    func=es_ping,
    description="Checks if Elasticsearch is connected.",
)

tools = [es_status_tool]
Now, let's initialize a conversation memory component to keep track of the conversation, as well as the agent itself.
python
# Initialize memory to keep track of the conversation
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Initialize agent
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)
Finally, let's run the conversation loop with this snippet:
python
# Interactive conversation with the agent
def main():
    print("Welcome to the chat agent. Type 'exit' to quit.")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ['exit', 'quit']:
            print("Goodbye!")
            break
        response = agent_chain.run(input=user_input)
        print("Assistant:", response)

if __name__ == "__main__":
    main()
In your terminal, run python chat.py to start the conversation.
python chat.py
Here's how that went for me:
text
You: Hello
Assistant: Hello! How can I assist you today?
You: Is Elastic search connected?

> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: ES Status
Action Input:

Observation: ES ping returning True, ES is connected.
Thought: Do I need to use a tool? No
AI: Yes, Elasticsearch is connected. How can I assist you further?
When I asked whether Elasticsearch was connected, the LLM used the ES Status tool, pinged my Elastic Cloud deployment, got back True, and then confirmed that Elastic Cloud is indeed connected.
Congratulations! That's a successful Hello World :)
Note that the Observation is the output of the es_ping function. The format and content of this observation are a key part of our prompt engineering, because this is what the LLM uses to decide its next step.
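If you want the observation to carry more signal, the function can return richer text. As a hypothetical variant (not part of the example above), es_client.info() could be used to report the cluster name and version alongside the ping result:
python
# Hypothetical richer observation: report cluster details, not just the ping result
def es_ping_verbose(*args, **kwargs):
    if es_client is None:
        return "ES client is not initialized."
    try:
        if not es_client.ping():
            return "ES is not connected."
        info = es_client.info()
        return (
            "ES ping returning True, ES is connected. "
            f"Cluster name: {info['cluster_name']}, version: {info['version']['number']}."
        )
    except Exception as e:
        return f"Error pinging ES: {e}"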
Let's see how to modify this tool for RAG.
Agentic RAG
I recently built a large and complex knowledge base in my Elastic Cloud deployment using the POLITICS dataset, which contains roughly 2.46 million political articles scraped from US news sources. I ingested it into Elastic Cloud and embedded it with an elser_v2 inference endpoint, following the process defined in a previous blog.
To deploy an elser_v2 inference endpoint, make sure ML node autoscaling is enabled, then run the following command in the Elastic Cloud console.
bash
PUT _inference/sparse_embedding/elser_v2
{
    "service": "elser",
    "service_settings": {
        "num_allocations": 4,
        "num_threads": 8
    }
}
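If you want to confirm the endpoint exists before indexing, a quick optional check (assuming the same endpoint name as above) is:
bash
GET _inference/sparse_embedding/elser_v2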
Now, let's define a new tool that runs a simple semantic search against our politics knowledge base index, which I've called bignews_embedded. The function takes a search query, adds it to a standard semantic search query template, and runs the query against Elasticsearch. Once it has the search results, it concatenates the article contents into a single block of text and returns it as the observation for the LLM.
We limit the number of search results to 3. One advantage of this style of Agentic RAG is that we can build up an answer over multiple conversational steps. In other words, leading questions can be used to set the stage and context for answering more complex queries. Question answering becomes a fact-grounded conversation, rather than one-shot answer generation.
Dates
To highlight a major advantage of using agents, the RAG search function includes a dates parameter in addition to the query. When searching news articles, we may want to restrict the results to a specific time range, such as "In 2020" or "Between 2008 and 2012". By adding dates, along with a parser, we allow the LLM to specify a date range for the search.
In short, if I ask for something like "California wildfires in 2020", I don't want to see news from 2017 or any other year.
This rag_search function is a date parser (which extracts dates from the input and adds them to the query) plus an Elastic semantic_search query.
python
# Define the RAG search function
def rag_search(query: str, dates: str):
    if es_client is None:
        return "ES client is not initialized."
    else:
        try:
            # Build the Elasticsearch query
            must_clauses = []

            # If dates are provided, parse and include in query
            if dates:
                # Dates must be in format 'YYYY-MM-DD' or 'YYYY-MM-DD to YYYY-MM-DD'
                date_parts = dates.strip().split(' to ')
                if len(date_parts) == 1:
                    # Single date
                    start_date = date_parts[0]
                    end_date = date_parts[0]
                elif len(date_parts) == 2:
                    start_date = date_parts[0]
                    end_date = date_parts[1]
                else:
                    return "Invalid date format. Please use YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."

                date_range = {
                    "range": {
                        "date": {
                            "gte": start_date,
                            "lte": end_date
                        }
                    }
                }
                must_clauses.append(date_range)

            # Add the main query clause
            main_query = {
                "nested": {
                    "path": "text.inference.chunks",
                    "query": {
                        "sparse_vector": {
                            "inference_id": "elser_v2",
                            "field": "text.inference.chunks.embeddings",
                            "query": query
                        }
                    },
                    "inner_hits": {
                        "size": 2,
                        "name": "bignews_embedded.text",
                        "_source": False
                    }
                }
            }
            must_clauses.append(main_query)

            es_query = {
                "_source": ["text.text", "title", "date"],
                "query": {
                    "bool": {
                        "must": must_clauses
                    }
                },
                "size": 3
            }

            response = es_client.search(index="bignews_embedded", body=es_query)
            hits = response["hits"]["hits"]
            if not hits:
                return "No articles found for your query."
            result_docs = []
            for hit in hits:
                source = hit["_source"]
                title = source.get("title", "No Title")
                text_content = source.get("text", {}).get("text", "")
                date = source.get("date", "No Date")
                doc = f"Title: {title}\nDate: {date}\n{text_content}\n"
                result_docs.append(doc)
            return "\n".join(result_docs)
        except Exception as e:
            return f"Error during RAG search: {e}"
After the full search query has run, the results are concatenated into a single block of text and returned as the "observation" for the LLM to use.
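Before wiring the function into the agent, it can be worth calling it directly once to see exactly what the LLM's observation will look like. This is a quick, optional check, assuming the index described above is populated:
python
# Print the first part of the observation string the agent would receive
print(rag_search("California wildfires", "2020-01-01 to 2020-12-31")[:500])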
To account for multiple possible arguments, define a valid input format using pydantic's BaseModel:
python
class RagSearchInput(BaseModel):
    query: str = Field(..., description="The search query for the knowledge base.")
    dates: str = Field(
        ...,
        description="Date or date range for filtering results. Specify in format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."
    )
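As a small illustration of what this schema enforces (not part of chat.py itself), both fields are required, and a missing one is rejected before the function is ever called:
python
from pydantic import ValidationError

# Valid input: both query and dates are provided
RagSearchInput(query="California wildfires", dates="2020-01-01 to 2020-12-31")

# Invalid input: 'dates' is missing, so pydantic raises a ValidationError
try:
    RagSearchInput(query="California wildfires")
except ValidationError as e:
    print(e)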
We also need to use StructuredTool to define a multi-input function, using the input schema defined above:
python
# Define the RAG search tool using StructuredTool
rag_search_tool = StructuredTool(
    name="RAG_Search",
    func=rag_search,
    description=(
        "Use this tool to search for information about American politics from the knowledge base. "
        "**Input must include a search query and a date or date range.** "
        "Dates must be specified in this format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."
    ),
    args_schema=RagSearchInput
)
The description is a key element of the tool definition, and part of your prompt engineering. It should be comprehensive and detailed, giving the LLM enough context to know when to use the tool and for what purpose.
The description should also include the kind of input the LLM must provide to use the tool correctly. Specifying formats and expectations makes a huge difference here.
An uninformative description can seriously impair the LLM's ability to use the tool!
Remember to add the new tool to the list of tools for the agent to use:
python
tools = [es_status_tool, rag_search_tool]
We also need to modify the agent with a system prompt, which gives us additional control over the agent's behavior. The system prompt is critical for avoiding errors related to malformed outputs and function inputs. We need to state explicitly what each function expects and what the model should output, because LangChain will throw an error if it sees an incorrectly formatted LLM response.
We also need to set agent=AgentType.OPENAI_FUNCTIONS to use OpenAI's function calling capability. This lets the LLM interact with the functions according to the structured templates we specify.
Note that the system prompt includes a specification of the exact format of the inputs the LLM should generate, along with concrete examples.
The LLM must detect not only which tool to use, but also the input the tool expects! LangChain only handles the function calling/tool usage; whether the tool is used correctly is up to the LLM.
python
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    memory=memory,
    verbose=True,
    handle_parsing_errors=True,
    system_message="""
    You are an AI assistant that helps with questions about American politics using a knowledge base. Be concise, sharp, to the point, and respond in one paragraph.
    You have access to the following tools:
    - **ES_Status**: Checks if Elasticsearch is connected.
    - **RAG_Search**: Use this to search for information in the knowledge base. **Input must include a search query and a date or date range.** Dates must be specified in this format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD.
    **Important Instructions:**
    - **Extract dates or date ranges from the user's question.**
    - **If the user does not provide a date or date range, politely ask them to provide one before proceeding.**
    When you decide to use a tool, use the following format *exactly*:
    Thought: [Your thought process about what you need to do next]
    Action: [The action to take, should be one of [ES_Status, RAG_Search]]
    Action Input: {"query": "the search query", "dates": "the date or date range"}
    If you receive an observation after an action, you should consider it and then decide your next step. If you have enough information to answer the user's question, respond with:
    Thought: [Your thought process]
    Assistant: [Your final answer to the user]
    **Examples:**
    - **User's Question:** "Tell me about the 2020 California wildfires."
      Thought: I need to search for information about the 2020 California wildfires.
      Action: RAG_Search
      Action Input: {"query": "California wildfires", "dates": "2020-01-01 to 2020-12-31"}
    - **User's Question:** "What happened during the presidential election?"
      Thought: The user didn't specify a date. I should ask for a date range.
      Assistant: Could you please specify the date or date range for the presidential election you're interested in?
    Always ensure that your output strictly follows one of the above formats, and do not include any additional text or formatting.
    Remember:
    - **Do not** include any text before or after the specified format.
    - **Do not** add extra explanations.
    - **Do not** include markdown, bullet points, or numbered lists unless it is part of the Assistant's final answer.
    Your goal is to assist the user by effectively using the tools when necessary and providing clear and concise answers.
    """
)
Now run python chat.py in your terminal, and let's test it!
Testing Agentic RAG
Let's test it with the following query:
text
Summarize the federal response to california wildfires in 2020
Helpfully, LangChain will print the intermediate steps, including the input to the RAG_Search function, the search results, and the final output.
text
> Entering new AgentExecutor chain...

Invoking: `RAG_Search` with `{'query': 'federal response to California wildfires', 'dates': '2020-01-01 to 2020-12-31'}`

Title: California federal wildfire aid: Newsom says Trump changed mind
Date: 2020-10-16T00:00:00.000Z
President Donald Trump abruptly changed course...

Title: California governor declares state of emergency in multiple counties as wildfires rage
Date: 2020-09-07T00:00:00.000Z
In a statement, Newsom noted that tens...

Title: Trump approves disaster declaration for California, reversing course
Date: 2020-10-16T00:00:00.000Z
California secured a Major Disaster Declaration ...

> Finished chain.

Assistant: The federal response to the California wildfires in 2020 included several key actions and moments:

1. **Initial Denial and Reversal of Federal Aid**: President Trump initially denied California's request for federal aid for the wildfires. However, after Governor Gavin Newsom vowed to appeal the decision, Trump reversed course and approved the Major Disaster Declaration. This declaration allowed California to receive hundreds of millions of dollars in federal aid.

2. **Statewide Emergency Declarations**: Governor Newsom declared a statewide emergency due to the widespread fires and extreme weather conditions. The state worked closely with federal authorities to secure assistance, including Fire Management Assistance Grants from FEMA.

3. **Presidential Major Disaster Declarations**: Newsom secured a Presidential Major Disaster Declaration to bolster the state's emergency response to the Northern California wildfires. This declaration made people in impacted counties eligible for various forms of federal support, including crisis counseling, housing, unemployment assistance, and legal services.

4. **Federal and State Collaboration**: Despite ongoing tensions and disagreements between the state and the Trump administration, particularly over forest management and climate change, federal agencies such as the National Park Service, U.S. Forest Service, and Bureau of Land Management were involved in managing and supporting firefighting efforts in California.

5. **Impact and Scale of Wildfires**: The 2020 wildfire season in California was historically devastating, with more than 8,500 blazes scorching 6,400 square miles, destroying thousands of structures, and claiming lives. The federal aid and disaster declarations were crucial in supporting the state's response and recovery efforts.

Overall, the federal response involved a combination of initial resistance followed by critical support and collaboration to address the unprecedented wildfire crisis in California.
Most notably, the LLM composed a search query and then added a date range spanning from the start to the end of 2020. By restricting the search results to the specified year, we ensure that only relevant documents are passed to the LLM.
There is much more we could do with this, such as constraining results by category, by the presence of certain entities, or by their relationship to other events; a rough sketch of one such constraint follows.
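For instance, an extra clause could be appended to must_clauses inside rag_search, assuming the index had a keyword field named category. The dataset used here does not have such a field; this is purely illustrative:
python
# Hypothetical extension to rag_search: filter by an assumed 'category' keyword field
def add_category_filter(must_clauses, category):
    if category:
        must_clauses.append({"term": {"category": category}})
    return must_clauses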
The possibilities are endless, and I think that's pretty cool!
A note on error handling
In some cases, the LLM may fail to use the right tool or function when it needs to. For example, it may choose to answer a question about current events from its own knowledge instead of using the available knowledge base.
The system prompt and the tool/function descriptions have to be carefully tested and tuned.
Another option may be to increase the variety of available tools, to raise the likelihood that answers are generated from the knowledge base content rather than the LLM's built-in knowledge.
Note that LLMs will occasionally fail; that is a natural consequence of their probabilistic nature. Helpful error messages or disclaimers can also be an important part of the user experience, as in the sketch below.
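For example, the invocation in the conversation loop (using the invoke form from the appendix) could be wrapped so that a failure produces a friendly message rather than a stack trace. This is one possible approach, not the only way to handle it:
python
# One option: catch failures around the agent call and fall back gracefully
try:
    response = agent_chain.invoke(input=user_input)
    print("Assistant:", response["output"])
except Exception as e:
    print(
        "Assistant: Sorry, something went wrong while answering that "
        f"(error: {e}). Please try rephrasing or narrowing your question."
    )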
Conclusion and future prospects
For me, the main takeaway is the possibility of creating more advanced search applications. The LLM may be able to craft very complex search queries on the fly, in the context of a natural language conversation. This opens a path to dramatically improving the accuracy and relevance of search applications, and it's an area I'm excited to explore.
The interaction of knowledge bases with other tools, such as web search engines and monitoring tool APIs, mediated by the LLM, also enables some exciting and complex use cases. Search results from the KB could be supplemented with real-time information, allowing the LLM to perform effective, time-sensitive reasoning on the spot.
There is also the possibility of multi-agent workflows. In an Elastic context, this might mean multiple agents exploring different sets of knowledge bases and collaboratively building a solution to a complex problem. Perhaps a federated search model, where multiple organizations build a collaborative, shared application, similar to the idea of federated learning?
An example multi-agent flow
There are use cases for Elasticsearch here that I want to explore, and I hope you will too.
See you next time!
Appendix: Full code for chat.py
python
import os
from dotenv import load_dotenv
load_dotenv()

from langchain.chat_models import AzureChatOpenAI
from langchain.agents import initialize_agent, AgentType, Tool
from langchain.tools import StructuredTool  # Import StructuredTool
from langchain.memory import ConversationBufferMemory
from typing import Optional
from pydantic import BaseModel, Field

llm = AzureChatOpenAI(
    openai_api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
    azure_deployment=os.getenv("AZURE_OPENAI_GPT4O_DEPLOYMENT_NAME"),
    temperature=0.5,
    max_tokens=4096
)

from elasticsearch import Elasticsearch

try:
    # Elasticsearch setup
    es_endpoint = os.environ.get("ELASTIC_ENDPOINT")
    es_client = Elasticsearch(
        es_endpoint,
        api_key=os.environ.get("ELASTIC_API_KEY")
    )
except Exception as e:
    es_client = None

# Define a function to check ES status
def es_ping(_input):
    if es_client is None:
        return "ES client is not initialized."
    else:
        try:
            if es_client.ping():
                return "ES is connected."
            else:
                return "ES is not connected."
        except Exception as e:
            return f"Error pinging ES: {e}"

# Define the ES status tool
es_status_tool = Tool(
    name="ES_Status",
    func=es_ping,
    description="Checks if Elasticsearch is connected.",
)
# Define the RAG search function
def rag_search(query: str, dates: str):
    if es_client is None:
        return "ES client is not initialized."
    else:
        try:
            # Build the Elasticsearch query
            must_clauses = []

            # If dates are provided, parse and include in query
            if dates:
                # Dates must be in format 'YYYY-MM-DD' or 'YYYY-MM-DD to YYYY-MM-DD'
                date_parts = dates.strip().split(' to ')
                if len(date_parts) == 1:
                    # Single date
                    start_date = date_parts[0]
                    end_date = date_parts[0]
                elif len(date_parts) == 2:
                    start_date = date_parts[0]
                    end_date = date_parts[1]
                else:
                    return "Invalid date format. Please use YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."

                date_range = {
                    "range": {
                        "date": {
                            "gte": start_date,
                            "lte": end_date
                        }
                    }
                }
                must_clauses.append(date_range)

            # Add the main query clause
            main_query = {
                "nested": {
                    "path": "text.inference.chunks",
                    "query": {
                        "sparse_vector": {
                            "inference_id": "elser_v2",
                            "field": "text.inference.chunks.embeddings",
                            "query": query
                        }
                    },
                    "inner_hits": {
                        "size": 2,
                        "name": "bignews_embedded.text",
                        "_source": False
                    }
                }
            }
            must_clauses.append(main_query)

            es_query = {
                "_source": ["text.text", "title", "date"],
                "query": {
                    "bool": {
                        "must": must_clauses
                    }
                },
                "size": 3
            }

            response = es_client.search(index="bignews_embedded", body=es_query)
            hits = response["hits"]["hits"]
            if not hits:
                return "No articles found for your query."
            result_docs = []
            for hit in hits:
                source = hit["_source"]
                title = source.get("title", "No Title")
                text_content = source.get("text", {}).get("text", "")
                date = source.get("date", "No Date")
                doc = f"Title: {title}\nDate: {date}\n{text_content}\n"
                result_docs.append(doc)
            return "\n".join(result_docs)
        except Exception as e:
            return f"Error during RAG search: {e}"

class RagSearchInput(BaseModel):
    query: str = Field(..., description="The search query for the knowledge base.")
    dates: str = Field(
        ...,
        description="Date or date range for filtering results. Specify in format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."
    )
# Define the RAG search tool using StructuredTool
rag_search_tool = StructuredTool(
    name="RAG_Search",
    func=rag_search,
    description=(
        "Use this tool to search for information about American politics from the knowledge base. "
        "**Input must include a search query and a date or date range.** "
        "Dates must be specified in this format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."
    ),
    args_schema=RagSearchInput
)
# List of tools
tools = [es_status_tool, rag_search_tool]

# Initialize memory to keep track of the conversation
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    memory=memory,
    verbose=True,
    handle_parsing_errors=True,
    system_message="""
    You are an AI assistant that helps with questions about American politics using a knowledge base. Be concise, sharp, to the point, and respond in one paragraph.
    You have access to the following tools:

    - **ES_Status**: Checks if Elasticsearch is connected.
    - **RAG_Search**: Use this to search for information in the knowledge base. **Input must include a search query and a date or date range.** Dates must be specified in this format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD.

    **Important Instructions:**

    - **Extract dates or date ranges from the user's question.**
    - **If the user does not provide a date or date range, politely ask them to provide one before proceeding.**

    When you decide to use a tool, use the following format *exactly*:
    Thought: [Your thought process about what you need to do next]
    Action: [The action to take, should be one of [ES_Status, RAG_Search]]
    Action Input: {"query": "the search query", "dates": "the date or date range"}

    If you receive an observation after an action, you should consider it and then decide your next step. If you have enough information to answer the user's question, respond with:
    Thought: [Your thought process]
    Assistant: [Your final answer to the user]

    **Examples:**

    - **User's Question:** "Tell me about the 2020 California wildfires."
      Thought: I need to search for information about the 2020 California wildfires.
      Action: RAG_Search
      Action Input: {"query": "California wildfires", "dates": "2020-01-01 to 2020-12-31"}

    - **User's Question:** "What happened during the presidential election?"
      Thought: The user didn't specify a date. I should ask for a date range.
      Assistant: Could you please specify the date or date range for the presidential election you're interested in?

    Always ensure that your output strictly follows one of the above formats, and do not include any additional text or formatting.

    Remember:

    - **Do not** include any text before or after the specified format.
    - **Do not** add extra explanations.
    - **Do not** include markdown, bullet points, or numbered lists unless it is part of the Assistant's final answer.

    Your goal is to assist the user by effectively using the tools when necessary and providing clear and concise answers.
    """
)

# Interactive conversation with the agent
def main():
    print("Welcome to the chat agent. Type 'exit' to quit.")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ['exit', 'quit']:
            print("Goodbye!")
            break
        # Update method call to address deprecation warning
        response = agent_chain.invoke(input=user_input)
        print("Assistant:", response['output'])

if __name__ == "__main__":
    main()
Elasticsearch is packed with new features to help you build the best search solution for your use case. Dive into our sample notebooks to learn more, start a free cloud trial, or try Elastic on your local machine today.