Conversation Knowledge Graph Memory
This type of memory uses a knowledge graph to recreate its memory:
```python
from langchain.memory import ConversationKGMemory
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
memory = ConversationKGMemory(llm=llm)
memory.save_context({"input": "say hi to sam"}, {"output": "who is sam"})
memory.save_context({"input": "sam is a friend"}, {"output": "okay"})

memory.load_memory_variables({"input": "who is sam"})
```
Output:
```
{'history': 'On Sam: Sam is friend.'}
```
We can also get the history as a list of messages, which is useful when using this memory with a chat model:
```python
memory = ConversationKGMemory(llm=llm, return_messages=True)
memory.save_context({"input": "say hi to sam"}, {"output": "who is sam"})
memory.save_context({"input": "sam is a friend"}, {"output": "okay"})

memory.load_memory_variables({"input": "who is sam"})
```
Output:
```
{'history': [SystemMessage(content='On Sam: Sam is friend.', additional_kwargs={})]}
```
More modularly, we can also extract the current entities from a new message, using the previous messages as context:
```python
memory.get_current_entities("what's Sams favorite color?")
```
Output:
```
['Sam']
```
Similarly, we can extract knowledge triplets from a new message, again using the previous messages as context:
```python
memory.get_knowledge_triplets("her favorite color is red")
```
Output:
```
[KnowledgeTriple(subject='Sam', predicate='favorite color', object_='red')]
```
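Under the hood, ConversationKGMemory accumulates these triplets in a small in-memory graph. As a minimal sketch, assuming the memory object exposes that graph as a `kg` attribute with a `get_triples()` method (true for recent langchain 0.0.x releases, but worth verifying against your version), you can inspect everything it has learned so far:
```python
# Sketch: inspect the accumulated knowledge graph.
# Assumption: ConversationKGMemory stores its graph in `memory.kg`
# (a NetworkxEntityGraph) exposing `get_triples()`; the attribute name
# and tuple ordering may differ across langchain versions.
print(memory.kg.get_triples())
```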
Using it in a chain
Now let's use this memory in a chain:
```python
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)

template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

{history}

Conversation:
Human: {input}
AI:"""
prompt = PromptTemplate(
    input_variables=["history", "input"], template=template
)
conversation_with_kg = ConversationChain(
    llm=llm,
    verbose=True,
    prompt=prompt,
    memory=ConversationKGMemory(llm=llm)
)

conversation_with_kg.predict(input="Hi, what's up?")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context.
If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:


Conversation:
Human: Hi, what's up?
AI:

> Finished chain.
```
Output:
```
" Hi there! I'm doing great. I'm currently in the process of learning about the world around me. I'm learning about different cultures, languages, and customs. It's really fascinating! How about you?"
```
Input:
```python
conversation_with_kg.predict(input="My name is James and I'm helping Will. He's an engineer.")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context.
If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:


Conversation:
Human: My name is James and I'm helping Will. He's an engineer.
AI:

> Finished chain.
```
Output:
```
" Hi James, it's nice to meet you. I'm an AI and I understand you're helping Will, the engineer. What kind of engineering does he do?"
```
Input:
```python
conversation_with_kg.predict(input="What do you know about Will?")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context.
If the AI does not know the answer to a question, it truthfully says it does not know. The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

On Will: Will is an engineer.

Conversation:
Human: What do you know about Will?
AI:

> Finished chain.
```
Output:
```
' Will is an engineer.'
```
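To see where the "Relevant Information" section comes from, you can query the chain's memory directly, just as in the standalone examples above: the entity is extracted from the new input and any stored triplets about it are returned as the history string. A short illustrative sketch:
```python
# Query the chain's memory directly (illustrative; the exact history string
# depends on what has been stored in the knowledge graph so far).
conversation_with_kg.memory.load_memory_variables({"input": "What do you know about Will?"})
# e.g. {'history': 'On Will: Will is an engineer.'}
```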
Conversation Summary Memory (ConversationSummaryMemory)
Now let's look at a slightly more complex memory type, ConversationSummaryMemory. This type of memory creates a summary of the conversation over time, which is useful for condensing information from the conversation. Let's first explore its basic functionality:
```python
from langchain.memory import ConversationSummaryMemory, ChatMessageHistory
from langchain.llms import OpenAI

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})
```
Output:
```
{'history': '\nThe human greets the AI, to which the AI responds.'}
```
We can also get the history as a list of messages, which is useful when using this memory with a chat model:
```python
memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})
```
Output:
```
{'history': [SystemMessage(content='\nThe human greets the AI, to which the AI responds.', additional_kwargs={})]}
```
We can also use the predict_new_summary method directly:
```python
messages = memory.chat_memory.messages
previous_summary = ""
memory.predict_new_summary(messages, previous_summary)
```
Output:
```
'\nThe human greets the AI, to which the AI responds.'
```
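The wording of the summary is controlled by the memory's summarization prompt, and a custom prompt can be passed in if the default style doesn't suit your use case. The sketch below is only an illustration: it assumes ConversationSummaryMemory accepts a `prompt` argument whose template uses the variables `summary` and `new_lines`, matching the built-in summary prompt; check this against your langchain version.
```python
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationSummaryMemory
from langchain.llms import OpenAI

# Assumption: the summarizer prompt takes "summary" and "new_lines",
# matching the default SUMMARY_PROMPT; adjust if your version differs.
concise_summary_prompt = PromptTemplate(
    input_variables=["summary", "new_lines"],
    template=(
        "Current summary:\n{summary}\n\n"
        "New lines of conversation:\n{new_lines}\n\n"
        "Condense everything above into a single short sentence:"
    ),
)

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), prompt=concise_summary_prompt)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})
```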
Initializing with messages
If you already have messages outside this class, you can easily initialize it with ChatMessageHistory; a summary will be calculated during loading.
```python
history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hi there!")

memory = ConversationSummaryMemory.from_messages(
    llm=OpenAI(temperature=0),
    chat_memory=history,
    return_messages=True
)
memory.buffer
```
Output:
```
'\nThe human greets the AI, to which the AI responds with a friendly greeting.'
```
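If you already have a summary from a previous session, you can seed the memory with it instead of re-summarizing from scratch. Continuing from the ChatMessageHistory above, this sketch assumes ConversationSummaryMemory accepts an initial summary through its `buffer` parameter; confirm the parameter name for the langchain version you are using.
```python
memory = ConversationSummaryMemory(
    llm=OpenAI(temperature=0),
    # Assumption: `buffer` seeds the running summary with a previously saved one.
    buffer="The human and the AI have exchanged friendly greetings.",
    chat_memory=history,
    return_messages=True
)
memory.buffer
```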
Using it in a chain
Let's walk through an example of using this in a chain, again setting verbose=True so we can see the prompt.
```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)
conversation_with_summary = ConversationChain(
    llm=llm,
    memory=ConversationSummaryMemory(llm=OpenAI()),
    verbose=True
)
conversation_with_summary.predict(input="Hi, what's up?")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi, what's up?
AI:

> Finished chain.
```
Output:
```
" Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?"
```
Input:
```python
conversation_with_summary.predict(input="Tell me more about it!")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue.
Human: Tell me more about it!
AI:

> Finished chain.
```
Output:
```
" Sure! The customer is having trouble with their computer not connecting to the internet. I'm helping them troubleshoot the issue and figure out what the problem is. So far, we've tried resetting the router and checking the network settings, but the issue still persists. We're currently looking into other possible solutions."
```
Input:
```python
conversation_with_summary.predict(input="Very cool -- what is the scope of the project?")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue where their computer was not connecting to the internet. The AI was troubleshooting the issue and had already tried resetting the router and checking the network settings, but the issue still persisted and they were looking into other possible solutions.
Human: Very cool -- what is the scope of the project?
AI:

> Finished chain.
```
Output:
```
" The scope of the project is to troubleshoot the customer's computer issue and find a solution that will allow them to connect to the internet. We are currently exploring different possibilities and have already tried resetting the router and checking the network settings, but the issue still persists."
```
Conversation Summary Buffer Memory (ConversationSummaryBufferMemory)
ConversationSummaryBufferMemory combines the ideas of ConversationBufferMemory and ConversationSummaryMemory. It keeps a buffer of recent interactions in memory and, rather than discarding older interactions outright, compiles them into a summary. Unlike the previous implementation, it uses token length rather than the number of interactions to determine when to flush interactions.
```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain.llms import OpenAI

llm = OpenAI()
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})

memory.load_memory_variables({})
```
Output:
```
{'history': 'System: \nThe human says "hi", and the AI responds with "whats up".\nHuman: not much you\nAI: not much'}
```
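Notice that the oldest turns have already been folded into a summary (the `System:` part) while the most recent exchange is kept verbatim. As a rough sketch, you can look at the two parts separately; this assumes the memory exposes the running summary as `moving_summary_buffer` and the retained turns via `chat_memory.messages`, which holds for langchain 0.0.x but may have been renamed in later releases:
```python
# Sketch: peek at the summarized and verbatim parts separately.
# Assumption: `moving_summary_buffer` holds the running summary and
# `chat_memory.messages` holds the turns still kept verbatim.
print(memory.moving_summary_buffer)   # summary of the pruned turns
print(memory.chat_memory.messages)    # recent turns kept as-is
```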
We can also get the history as a list of messages, which is useful when using this memory with a chat model:
```python
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10, return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
```
We can also use the predict_new_summary method directly:
```python
messages = memory.chat_memory.messages
previous_summary = ""
memory.predict_new_summary(messages, previous_summary)
```
Output:
```
'\nThe human and AI state that they are not doing much.'
```
Using it in a chain
Let's walk through an example of using ConversationSummaryBufferMemory in a chain, again setting verbose=True so we can see the prompts:
```python
from langchain.chains import ConversationChain

conversation_with_summary = ConversationChain(
    llm=llm,
    # We set a very low max_token_limit for the purposes of testing.
    memory=ConversationSummaryBufferMemory(llm=OpenAI(), max_token_limit=40),
    verbose=True
)
conversation_with_summary.predict(input="Hi, what's up?")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi, what's up?
AI:

> Finished chain.
```
Output:
```
" Hi there! I'm doing great. I'm learning about the latest advances in artificial intelligence. What about you?"
```
Input:
```python
conversation_with_summary.predict(input="Just working on writing some documentation!")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI: Hi there! I'm doing great. I'm spending some time learning about the latest developments in AI technology. How about you?
Human: Just working on writing some documentation!
AI:

> Finished chain.
```
Output:
```
' That sounds like a great use of your time. Do you have experience with writing documentation?'
```
Input:
```python
# We can see here that there is a summary of the conversation and then some previous interactions
conversation_with_summary.predict(input="For LangChain! Have you heard of it?")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
System:
The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology.
Human: Just working on writing some documentation!
AI: That sounds like a great use of your time. Do you have experience with writing documentation?
Human: For LangChain! Have you heard of it?
AI:

> Finished chain.
```
Output:
```
" No, I haven't heard of LangChain. Can you tell me more about it?"
```
Input:
```python
# We can see here that the summary and the buffer are updated
conversation_with_summary.predict(input="Haha nope, although a lot of people confuse it for that")
```
Log output:
```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
System:
The human asked the AI what it was up to and the AI responded that it was learning about the latest developments in AI technology. The human then mentioned they were writing documentation, to which the AI responded that it sounded like a great use of their time and asked if they had experience with writing documentation.
Human: For LangChain! Have you heard of it?
AI: No, I haven't heard of LangChain. Can you tell me more about it?
Human: Haha nope, although a lot of people confuse it for that
AI:

> Finished chain.
```
Output:
```
' Oh, okay. What is LangChain?'
```