Table of Contents
- 1. About Chat Models
- Getting Started
  - 1) Setup
  - 2) Messages
  - 3) `__call__`
    - Messages in -> message out
  - 4) `generate`
- 2. LLMChain
- 3. Prompts
- 4. Streaming
- 5. Caching
  - 1) In Memory Cache
  - 2) SQLite Cache
- 6. Integration Examples
  - 1) Getting started with Anthropic chat models
  - 2) OpenAI
This article is adapted from:
https://python.langchain.com.cn/docs/modules/model_io/models/chat/
1. About Chat Models
Chat models are a variant of language models. While chat models use language models under the hood, they expose a different interface: rather than a "text in, text out" API, they use "chat messages" as inputs and outputs.
Chat model APIs are fairly new, so the right abstractions are still being worked out.
Getting Started
1) Setup
First, we need to install the OpenAI Python package:
```bash
pip install openai
```
Accessing the API requires an API key, which you can get by creating an account and visiting this page (https://platform.openai.com/account/api-keys). Once we have a key, we'll want to set it as an environment variable by running:
```bash
export OPENAI_API_KEY="..."
```
If you'd rather not set an environment variable, you can pass the key in directly via the `openai_api_key` named parameter when initializing the OpenAI LLM class:
```python
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(openai_api_key="...")
```
Otherwise, you can initialize it without any parameters:
```python
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI()
```
2) Messages
The chat model interface is based on messages rather than raw text.
The message types currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and `ChatMessage` (`ChatMessage` takes an arbitrary role parameter). Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.
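Each of these message types corresponds to a role in the underlying chat API. As a rough sketch of that mapping in plain Python (this is the role-tagged wire format the providers use, not LangChain code):

```python
def to_wire(role, content):
    """Build the role/content dict that chat APIs expect."""
    return {"role": role, "content": content}

# SystemMessage, HumanMessage, and AIMessage map to fixed roles:
system = to_wire("system", "You are a helpful assistant.")
human = to_wire("user", "I love programming.")
ai = to_wire("assistant", "That's great to hear!")
# ChatMessage's arbitrary role parameter corresponds to any role string:
custom = to_wire("example_role", "...")

print([m["role"] for m in (system, human, ai, custom)])
# prints: ['system', 'user', 'assistant', 'example_role']
```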
3) `__call__`
Messages in -> message out
You can get chat completions by passing one or more messages to the chat model. The response will be a message.
```python
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat([HumanMessage(content="Translate this sentence from English to French: I love programming.")])
```
```text
AIMessage(content="J'aime programmer.", additional_kwargs={})
```
OpenAI's chat model supports multiple messages as input. See here for more information. Here is an example of sending a system and user message to the chat model:
```python
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)
```
```text
AIMessage(content="J'aime programmer.", additional_kwargs={})
```
4) `generate`
Batch calls, richer outputs
You can go one step further and use `generate` to produce completions for multiple sets of messages. This returns an `LLMResult` with an additional `message` parameter.
```python
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
```
```text
LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})
```
You can recover things like token usage from this `LLMResult`:
```python
result.llm_output
```
```text
{'token_usage': {'prompt_tokens': 57,
  'completion_tokens': 20,
  'total_tokens': 77}}
```
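Token usage like this is the input for cost estimation. A minimal sketch, where the per-1K-token prices are purely illustrative assumptions (not current OpenAI pricing):

```python
def estimate_cost(token_usage, prompt_price_per_1k, completion_price_per_1k):
    """Estimate the dollar cost of a call from a token_usage dict,
    pricing prompt and completion tokens separately."""
    return (token_usage["prompt_tokens"] / 1000 * prompt_price_per_1k
            + token_usage["completion_tokens"] / 1000 * completion_price_per_1k)

usage = {"prompt_tokens": 57, "completion_tokens": 20, "total_tokens": 77}
# With illustrative prices of $0.0015/1K prompt and $0.002/1K completion tokens:
print(estimate_cost(usage, 0.0015, 0.002))
```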
2. LLMChain
You can use the existing LLMChain in a very similar manner: provide a prompt and a model.
```python
from langchain.chains import LLMChain

# chat_prompt is the ChatPromptTemplate built in the prompt-template example
# later in this article
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
```
```text
"J'adore la programmation."
```
3. Prompts
Prompts for chat models are built around messages rather than plain text. You can use `MessagePromptTemplate` to take advantage of templating.
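The idea behind message templating can be sketched in plain Python: a role plus a template string, formatted into a message at call time. This is only a sketch of the concept, not the LangChain implementation (the real classes are shown in the OpenAI integration example below):

```python
class MessageTemplate:
    """Sketch of a message prompt template: a role plus a template string."""

    def __init__(self, role, template):
        self.role = role
        self.template = template

    def format(self, **kwargs):
        # str.format ignores unused keyword arguments, so every template
        # can safely receive the full set of variables.
        return {"role": self.role, "content": self.template.format(**kwargs)}

system = MessageTemplate(
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
)
human = MessageTemplate("user", "{text}")

variables = {"input_language": "English", "output_language": "French",
             "text": "I love programming."}
messages = [tmpl.format(**variables) for tmpl in (system, human)]
print(messages[1])  # prints {'role': 'user', 'content': 'I love programming.'}
```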
4. Streaming
Some chat models provide a streaming response. This means that instead of waiting for the entire response to come back, you can start processing it as soon as it becomes available. This is useful if you want to display the response to the user as it is generated, or if you want to process the response as it is generated.
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage,
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])
```
```text
Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite
...
Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
```
5. Caching
LangChain provides an optional caching layer for chat models. This is useful for two reasons: it can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times, and it can speed up your application for the same reason.
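The mechanism behind both benefits is the same: identical prompts are served from the cache instead of triggering a provider call. A plain-Python sketch of this (not LangChain internals):

```python
# Counter standing in for billable provider calls.
calls = 0

def fake_llm(prompt):
    """Stand-in for a paid, slow provider API call."""
    global calls
    calls += 1
    return f"completion for: {prompt}"

cache = {}

def cached_llm(prompt):
    if prompt not in cache:       # miss: pay for one API call
        cache[prompt] = fake_llm(prompt)
    return cache[prompt]          # hit: free and fast

cached_llm("Tell me a joke")
cached_llm("Tell me a joke")      # second call served from the cache
print(calls)  # prints 1
```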
```python
import langchain
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()
```
1) In Memory Cache
```python
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()

# The first time, it is not yet in cache, so it should take longer
llm.predict("Tell me a joke")
```
```text
CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms
Wall time: 4.83 s
"\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"
```
```python
# The second time it is, so it goes faster
llm.predict("Tell me a joke")
```
```text
CPU times: user 238 µs, sys: 143 µs, total: 381 µs
Wall time: 1.76 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
```
2) SQLite Cache
```bash
rm .langchain.db
```
```python
# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
```
```python
# The first time, it is not yet in cache, so it should take longer
llm.predict("Tell me a joke")
```
```text
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
```
```python
# The second time it is, so it goes faster
llm.predict("Tell me a joke")
```
```text
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
```
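Unlike the in-memory cache, the SQLite cache survives process restarts because entries live in a database file. A minimal sketch of a SQLite-backed prompt cache, using only the standard library (the class and schema here are illustrative, not LangChain's implementation):

```python
import sqlite3

class SQLiteLLMCache:
    """Sketch of a prompt -> completion cache backed by SQLite."""

    def __init__(self, database_path=":memory:"):
        self.conn = sqlite3.connect(database_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS cache (prompt TEXT PRIMARY KEY, completion TEXT)"
        )

    def lookup(self, prompt):
        # Return the cached completion, or None on a miss.
        row = self.conn.execute(
            "SELECT completion FROM cache WHERE prompt = ?", (prompt,)
        ).fetchone()
        return row[0] if row else None

    def update(self, prompt, completion):
        self.conn.execute(
            "INSERT OR REPLACE INTO cache (prompt, completion) VALUES (?, ?)",
            (prompt, completion),
        )
        self.conn.commit()

cache = SQLiteLLMCache()
print(cache.lookup("Tell me a joke"))  # miss: would fall through to the API
cache.update("Tell me a joke", "Why did the chicken cross the road? ...")
print(cache.lookup("Tell me a joke"))  # hit: served from the database
```

Pointing `database_path` at a file instead of `":memory:"` is what makes the cache persistent across runs.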
6. Integration Examples
1) Getting started with Anthropic chat models
This notebook covers how to get started with Anthropic chat models.
```python
from langchain.chat_models import ChatAnthropic
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import AIMessage, HumanMessage, SystemMessage
```
```python
chat = ChatAnthropic()
```
```python
messages = [
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    )
]
chat(messages)
```
```text
AIMessage(content=" J'aime programmer. ", additional_kwargs={})
```
`ChatAnthropic` also supports async and streaming functionality:
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
```
```python
await chat.agenerate([messages])
```
```text
LLMResult(generations=[[ChatGeneration(text=" J'aime la programmation.", generation_info=None, message=AIMessage(content=" J'aime la programmation.", additional_kwargs={}))]], llm_output={})
```
```python
chat = ChatAnthropic(
    streaming=True,
    verbose=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)
chat(messages)
```
```text
J'adore programmer.
AIMessage(content=" J'adore programmer.", additional_kwargs={})
```
2) OpenAI
This notebook covers how to get started with OpenAI chat models.
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import AIMessage, HumanMessage, SystemMessage
```
```python
chat = ChatOpenAI(temperature=0)
```
```python
messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    ),
]
chat(messages)
```
```text
AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)
```
You can make use of templating with `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplate`s, then use the `ChatPromptTemplate`'s `format_prompt` method. This returns a `PromptValue`, which you can convert to a string or a message object, depending on whether you want to use the formatted value as input to an LLM or a chat model.
For convenience, there is a `from_template` method exposed on the template. If you were to use this template, here is what it would look like:
```python
template = (
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
```
```python
chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)
# get a chat completion from the formatted messages
chat(
    chat_prompt.format_prompt(
        input_language="English", output_language="French", text="I love programming."
    ).to_messages()
)
```
```text
AIMessage(content="J'adore la programmation.", additional_kwargs={})
```