Getting Started with LangChain

LangChain is a framework for developing applications powered by large language models (LLMs).

LangChain simplifies every stage of the LLM application lifecycle:

  • Development: build your applications using LangChain's open-source building blocks and components. Hit the ground running with third-party integrations and templates.
  • Productionization: use LangSmith to inspect, monitor, and evaluate your chains, so that you can continuously optimize and deploy with confidence.
  • Deployment: turn any chain into an API with LangServe.

Concretely, the framework consists of the following open-source libraries (an install sketch follows the list):

  • langchain-core: base abstractions and the LangChain Expression Language.
  • langchain-community: third-party integrations.
    • Partner packages (e.g. langchain-openai, langchain-anthropic): some integrations have been further split into their own lightweight packages that depend only on langchain-core.
  • langchain: the chains, agents, and retrieval strategies that make up an application's cognitive architecture.
  • langgraph: build robust, stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
  • langserve: deploy LangChain chains as REST APIs.
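If you'd rather pull these pieces in individually, here is a minimal install sketch (package names as published on PyPI; the `langserve[all]` extra pulls in both client and server dependencies):

```python
# Install the individual LangChain packages (run in a Jupyter cell)
!pip install --quiet --upgrade langchain-core langchain-community langchain langgraph "langserve[all]"
```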

Quick Start

In this quickstart we'll show you how to:

  • Get set up with LangChain, LangSmith, and LangServe
  • Use the most basic and common components of LangChain: prompt templates, models, and output parsers
  • Use the LangChain Expression Language, the protocol that LangChain is built on and which facilitates chaining components together
  • Build a simple application with LangChain
  • Trace your application with LangSmith

Jupyter

This guide (and most of the other guides in the documentation) uses Jupyter notebooks and assumes the reader does as well. Jupyter notebooks are great for learning how to work with LLM systems, because things can often go wrong (unexpected output, the API is down, etc.), and walking through guides in an interactive environment is a great way to understand them better.

```python
# Install langchain with the following command
!pip install --quiet --upgrade langchain
```

Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.

Note that LangSmith is not required, but it is helpful. If you do want to use LangSmith, sign up at smith.langchain.com/ and make sure to set the environment variables below to start logging traces:

```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
```
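If you are working in a notebook rather than a shell, the same variables can be set from Python. A minimal sketch; `getpass` simply avoids echoing the key:

```python
import getpass
import os

# Equivalent to the shell exports above, for notebook users
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("LangSmith API key: ")
```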

Building with LangChain

LangChain enables building applications that connect external sources of data and computation to LLMs. In this quickstart, we will walk through a few different ways of doing that.

  • We will start with a simple LLM chain, which just relies on information in the prompt template to respond.
  • Next, we will build a retrieval chain, which fetches data from a separate database and passes that into the prompt template.
  • We will then add chat history, to create a conversational retrieval chain. This allows you to interact in a chat manner with the LLM, so it remembers previous questions.
  • Finally, we will build an agent, which utilizes an LLM to determine whether or not it needs to fetch data to answer questions.

Large Language Models

A large language model (LLM) is a deep-learning-based AI model designed to understand and generate natural language text. These models are trained on massive corpora (often covering billions or tens of billions of words, or more), learning to capture the complex patterns, grammatical rules, contextual relationships, and deeper semantics of language.

The core characteristics of LLMs include:

  1. Massive parameter counts: these models typically have billions to hundreds of billions of parameters, a scale that lets them learn richer and more nuanced linguistic structure.

  2. Unsupervised pre-training: the model is first trained on a large unlabeled text corpus, learning the basic regularities and representations of language.

  3. Broad applicability: after pre-training, an LLM can be fine-tuned for many natural language processing tasks, such as text generation, summarization, question answering, machine translation, and sentiment analysis.

  4. Deep learning architectures: LLMs are usually built on architectures such as the Transformer, which is effective at capturing long-range dependencies.

  5. Strong generalization: although trained only on unlabeled data, LLMs show excellent generalization and transfer learning across a wide variety of NLP tasks.

Representative large language models include Google's BERT, OpenAI's GPT series (e.g. GPT-3, GPT-4), and Alibaba Cloud's Tongyi Qianwen. As the technology has advanced, LLMs have made enormous progress in recent years, demonstrating unprecedented language understanding and generation capabilities across many domains.

We will show how to use models available via an API (such as MistralAI) as well as local open-source models, using integrations like Ollama.

```python
# First we need to install the LangChain x MistralAI integration package
# (plus dashscope for the Tongyi model used below)
!pip install --quiet --upgrade langchain-mistralai dashscope
```

Accessing the API requires an API key, which you can get by creating an account. Once we have a key, we'll want to set it as an environment variable by running:

```bash
export MISTRAL_API_KEY="..."
```

Then we can initialize the model:

```python
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI()

import os
# Workaround for the "duplicate OpenMP runtime" error some local environments hit
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
```

```python
# Alternatively, use Alibaba Cloud's Tongyi (Qwen) models via dashscope
from langchain_community.llms import Tongyi

llm = Tongyi(model="qwen-max", temperature=0)
```

Once you've installed and initialized the LLM of your choice, we can try using it! Let's ask it what LangSmith is. This is something that wasn't present in the training data, so it shouldn't have a very good response.

```python
llm.invoke("how can langsmith help with testing?")
```

Output:
"Langsmith, as a language model, can assist with testing in several ways:\n\n1. **Automated Test Case Generation**: Langsmith can be used to generate test cases by analyzing the requirements or code and suggesting input combinations that cover a wide range of scenarios. This helps in finding edge cases and potential bugs that might not have been considered during development.\n\n2. **Natural Language Processing (NLP) for Test Documentation**: Langsmith can assist in creating clear and concise test documentation, such as test plans, test cases, and test reports, by providing well-structured text based on the input provided.\n\n."

We can guide its response with a prompt template. Prompt templates convert raw user input into better input for the LLM.

In the LangChain framework, a "chain" is the key mechanism for organizing and stringing together multiple natural language processing (NLP) tasks or modules. Specifically, Chain is an important module in LangChain that links different processing steps, or "links," into a complete business flow or workflow, in order to accomplish complex, multi-step NLP tasks.

In practice, a chain can be viewed as an execution unit containing a predefined sequence of operations: for example, first using a language model to translate text, then summarizing it, then classifying the content or performing other related operations. Each chain can encapsulate an independent piece of task logic, and different chains can be wired together with dependencies and data passing between them, forming the eponymous "chain."

For example, in a real LangChain use case you might create a SequentialChain containing several LLM chains, where the output of one chain becomes the input of the next. This lets multiple models and capabilities work together, greatly simplifying the development of complex NLP applications and making the overall solution more capable and flexible. In this way, developers can easily assemble and configure components to solve customized NLP problems.

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are world class technical documentation writer."),
    ("user", "{input}")
])

# Compose them into a simple chain
chain = prompt | llm

# Invoke it and ask the same question
chain.invoke({"input": "how can langsmith help with testing?"})
```

Output:
"Langsmith, as a hypothetical tool or service, could assist with testing in several ways, particularly in the domain of natural language processing (NLP) and text-based applications. Here's how Langsmith might be useful:\n\n1. **Automated Testing Framework**: Langsmith could provide an automated testing framework specifically designed for NLP tasks, such as sentiment analysis, named entity recognition, or machine translation. This would allow developers to create test cases, input various texts, and validate the expected outputs against actual results.\n\n2. **Test Case Generation**: Langsmith could generate diverse and challenging test cases to stress-test an NLP model's performance. It might use techniques like adversarial examples, synthetic data, or perturbed inputs to identify edge cases and potential weaknesses.\n\n3. **Natural Language Understanding (NLU) Evaluation**: For conversational AI or chatbot development, Langsmith could assess the system's ability to understand user intent and respond appropriately by providing a range of conversational scenarios and evaluating the responses.\n\n"

"ChatModel"这个词可能是指代一类聊天机器人或对话式人工智能模型,尤其在特定上下文中可能指的是某个聊天模型产品或框架。然而,这里并没有明确指向某一个具体的技术或项目。在AI领域,像阿里云的通义千问、OpenAI的GPT-3、DeepMind的ChatterBot、阿里云的天池社区提及的langchain-ChatGLM等都是不同类型的聊天模型或平台。

如果是在langchain的上下文中提到"ChatModel",可能是指使用langchain框架构建的一个聊天模型应用,该模型能通过学习大量的对话数据集进行训练,以实现与人类用户进行流畅、有意义的对话交互。ChatModel在这种情况下代表了一种具备对话理解与生成能力的AI模型。

The output of a ChatModel (and therefore of this chain) is a message. However, it's often much more convenient to work with strings. Let's add a simple output parser to convert the chat message to a string.

```python
from langchain_core.output_parsers import StrOutputParser

output_parser = StrOutputParser()

# Add it to the previous chain:
chain = prompt | llm | output_parser

# Invoke it and ask the same question. The answer will now be a string (rather than a ChatMessage).
chain.invoke({"input": "how can langsmith help with testing?"})
```

Output:
"LangSmith, as a hypothetical tool or service, could assist with testing in several ways, particularly in the realm of natural language processing (NLP) and text-based applications. Here's how LangSmith might contribute to different testing scenarios:\n\n1. **Automated Testing**: LangSmith could provide APIs or SDKs for integrating natural language understanding into your test automation frameworks. This would allow you to create tests that validate the functionality and accuracy of text inputs and outputs, such as chatbots, voice assistants, or language translation systems.\n\n2. **Test Case Generation**: LangSmith could generate diverse and realistic test cases by creating various linguistic scenarios, edge cases, and corner cases. This helps ensure that your application handles a wide range of user inputs effectively.\n\n3. **Syntax and Grammar Checking**: For applications that rely on proper language usage, LangSmith could be used to verify that user-generated content or system responses conform to grammatical rules, helping identify issues before they reach end-users.\n\n"

Diving Deeper

We've now successfully set up a basic LLM chain. We've only touched on the basics of prompts, models, and output parsers.
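For reference, here is the whole basic chain in one place. A recap sketch, assuming the Tongyi model from earlier (so a DASHSCOPE_API_KEY environment variable must be set; any chat model would work the same way):

```python
from langchain_community.llms import Tongyi
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Prompt -> model -> output parser, composed with the LangChain Expression Language
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are world class technical documentation writer."),
    ("user", "{input}"),
])
llm = Tongyi(model="qwen-max", temperature=0)
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"input": "how can langsmith help with testing?"}))
```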

Retrieval Chain

In the LangChain framework, a "retrieval chain" is an ordered set of cooperating components that together implement an information retrieval and processing pipeline. The goal is to efficiently find the pieces of information or documents in a large dataset that are most relevant to a user's query, and pass them as input to subsequent processing stages.

In LangChain, a retrieval chain mainly involves the following:

  1. Retrievers: retrievers are key components of a retrieval chain. They fetch relevant information from a vector database or some other kind of data store. Given an input query, a retriever can score the similarity between the query and each candidate document and select the most relevant ones.

  2. Vector stores: a vector database holds vector representations of documents, usually produced by a pre-trained language model so that they capture the documents' semantics. Retrievers use these vectors for efficient similarity search.

  3. Intermediate processing steps: a retrieval chain may include additional steps, such as re-ranking, filtering, merging, or pruning retrieved results, to improve the quality and relevance of the final output.

  4. Downstream tasks: the results of the retrieval chain are usually fed into a next stage, often another model (such as a large language model), for reasoning, answer generation, supplying extra context, and so on.

In short, building a retrieval chain in LangChain implements a multi-stage retrieval-and-integration pipeline that supplies accurate, rich context to LLMs and other NLP applications, improving answer quality and user experience. The design lets developers flexibly combine and configure different retrieval techniques and models for a variety of scenarios.
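The retriever interface itself is small. A quick sketch of the usage pattern (here `vectorstore` is a stand-in for any LangChain vector store, such as the FAISS index we build below):

```python
# Hypothetical usage: "vectorstore" stands in for any LangChain vector store,
# e.g. the FAISS index built later in this guide
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})  # fetch the top-4 chunks
docs = retriever.invoke("how can langsmith help with testing?")
for d in docs:
    print(d.page_content[:100])
```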

To properly answer the original question ("how can langsmith help with testing?"), we need to provide additional context to the LLM. We can do this via retrieval. Retrieval is useful when you have too much data to pass to the LLM directly. You can then use a retriever to fetch only the most relevant pieces and pass those in.

In this process, we will look up relevant documents from a retriever and then pass them into the prompt. A retriever can be backed by anything (a SQL table, the internet, etc.), but in this instance we will populate a vector store and use that as a retriever.

First, we need to load the data that we want to index. To do this, we will use the WebBaseLoader, which requires installing BeautifulSoup:

```python
pip install --quiet --upgrade beautifulsoup4
```

Output:

Note: you may need to restart the kernel to use updated packages.
```python
# Import and use WebBaseLoader
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide")

docs = loader.load()
```

Next, we need to index the loaded documents into a vector store. This requires a few components, namely an embedding model and a vector store.

For the embedding model we use mistral-embed (via MistralAIEmbeddings); for the vector store we use FAISS.

```python
pip install --quiet --upgrade faiss-cpu icecream
```

Output:

Note: you may need to restart the kernel to use updated packages.
```python
# Make sure the "langchain_mistralai" package is installed and the appropriate
# environment variables are set (the same ones the LLM requires).
from langchain_mistralai import MistralAIEmbeddings

embeddings = MistralAIEmbeddings()
# We'll use this embedding model to ingest documents into a vector store
```
```python
# Build the index
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)
```
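The splitter defaults are fine for this guide, but chunking is usually worth tuning. A hedged example of the main knobs (the values are illustrative, not recommendations):

```python
# Illustrative values only; tune for your documents
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # maximum characters per chunk
    chunk_overlap=200,  # characters shared between adjacent chunks
)
documents = text_splitter.split_documents(docs)
```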

Now that we have this data indexed in a vector store, we will create a retrieval chain. This chain will take an incoming question, look up relevant documents, then pass those documents along with the original question into an LLM and ask it to answer the original question.

First, let's set up the chain that takes a question and the retrieved documents and generates an answer.

```python
from langchain.chains.combine_documents import create_stuff_documents_chain

prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}""")

document_chain = create_stuff_documents_chain(llm, prompt)
print(document_chain)
```

Output:
bound=RunnableBinding(bound=RunnableAssign(mapper={
  context: RunnableLambda(format_docs)
}), config={'run_name': 'format_inputs'})
| ChatPromptTemplate(input_variables=['context', 'input'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'input'], template='Answer the following question based only on the provided context:\n\n<context>\n{context}\n</context>\n\nQuestion: {input}'))])
| Tongyi(client=<class 'dashscope.aigc.generation.Generation'>, dashscope_api_key='sk-...')
| StrOutputParser() config={'run_name': 'stuff_documents_chain'}
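If we wanted to, we could also run this document chain directly by supplying documents ourselves (a small sketch; the document content here is made up for illustration):

```python
from langchain_core.documents import Document

# Supply the context ourselves instead of retrieving it
document_chain.invoke({
    "input": "how can langsmith help with testing?",
    "context": [Document(page_content="langsmith can let you visualize test results")]
})
```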

However, we want the documents to first come from the retriever we just set up. That way, for a given question we can use the retriever to dynamically select the most relevant documents and pass those in.

```python
from langchain.chains import create_retrieval_chain

retriever = vector.as_retriever()
retrieval_chain = create_retrieval_chain(retriever, document_chain)

# Invoke this chain. It returns a dictionary; the LLM's response is under the "answer" key
response = retrieval_chain.invoke({"input": "how can langsmith help with testing?"})
print(response["answer"])

# LangSmith offers several features that can help with testing:...
```

Output:
LangSmith helps with testing in several ways:

1. **Initial Test Set**: Developers can create datasets consisting of input-output pairs for running tests on their LLM applications. These test cases can be uploaded, created on the fly, or exported from application traces.

2. **Custom Evaluations**: LangSmith enables users to run custom evaluations, both LLM-based and heuristic-based, to score test results.

3. **Comparison View**: This feature allows comparing different configurations of an application side-by-side on the same dataset, helping identify performance improvements or regressions across multiple revisions.

4. **Playground**: The playground environment facilitates rapid iteration and experimentation with different prompts and models, and logs these runs for future reference or testing.

Conversational Retrieval Chain

The chain we've created so far can only answer single questions. One of the main types of LLM applications that people are building is chatbots. So how do we turn this chain into one that can answer follow-up questions?

In the LangChain context, a conversational retrieval chain is an information-retrieval pipeline designed specifically for conversational applications. It combines natural language processing, machine learning, and potentially distributed storage to optimize the responsiveness of a dialogue system, so the system can better understand and respond to continuous, multi-turn requests.

The core capabilities of a conversational retrieval chain include:

  1. Conversation-history understanding: the chain tracks and understands previous turns, folding that historical context into the current query so it can produce coherent, relevant replies.

  2. Dynamic retrieval strategy: retrieval adapts in real time as the conversation progresses, finding the information in large document collections or other data sources that best matches the current topic.

  3. Multimodal retrieval: it can support retrieval over text, speech, and other forms of information, suiting multimodal dialogue scenarios.

  4. Interactive feedback: the system can update retrieval results based on user feedback, iteratively improving the quality of the conversation.

  5. Deep LLM integration: it works closely with large language models; the retrieved information helps the LLM generate more precise answers or further steer the conversation.

In short, in a conversational retrieval chain the system does not rely solely on the LLM to generate answers. It first runs a series of retrieval steps against an existing knowledge base or web resources, then folds that material into the final answer generation, producing a conversational experience that is both responsive and capable of deeper discussion.

We can still use the create_retrieval_chain function, but we need to change two things:

  • The retrieval method should now not just work on the most recent input, but rather should take the whole history into account.

  • The final LLM chain should likewise take the whole history into account.

Updating Retrieval

To update retrieval, we will create a new chain. This chain will take in the most recent input (input) and the conversation history (chat_history) and use an LLM to generate a search query.

```python
from langchain.chains import create_history_aware_retriever
from langchain_core.prompts import MessagesPlaceholder

# First we need a prompt that we can pass into an LLM to generate this search query
prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}"),
    ("user", "Given the above conversation, generate a search query to look up to get information relevant to the conversation")
])
retriever_chain = create_history_aware_retriever(llm, retriever, prompt)

from langchain_core.messages import HumanMessage, AIMessage

# Test with a follow-up question that only makes sense given the history
chat_history = [HumanMessage(content="Can LangSmith help test my LLM applications?"), AIMessage(content="Yes!")]
retriever_chain.invoke({
    "chat_history": chat_history,
    "input": "Tell me how"
})
```

Output:
[Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsLangChain Python DocsLangChain JS/TS DocsLangSmith API DocsSearchGo to AppQuick StartUser GuidePricingSelf-HostingTracingEvaluationProduction Monitoring & AutomationsPrompt HubProxyCookbookUser GuideOn this pageLangSmith User GuideLangSmith is a platform for LLM application development, monitoring, and testing. In this guide, we'll highlight the breadth of workflows LangSmith supports and how they fit into each stage of the application development lifecycle. We hope this will inform users how to best utilize this powerful platform or give them something to consider if they're just starting their journey.Prototyping\u200bPrototyping LLM applications often involves quick experimentation between prompts, model types, retrieval strategy and other parameters.\nThe ability to rapidly understand how the model is performing --- and debug where it is failing --- is incredibly important for this phase.Debugging\u200bWhen developing new LLM applications, we suggest having LangSmith tracing enabled by default.\nOftentimes, it isn't necessary to look at every single trace. However, when things go wrong (an unexpected end result, infinite agent loop, slower than expected execution, higher than expected token usage), it's extremely helpful to debug by looking through the application traces. LangSmith gives clear visibility and debugging information at each step of an LLM sequence, making it much easier to identify and root-cause issues.\n

Now we can create a new chain to continue the conversation with these retrieved documents in mind.

```python
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the user's questions based on the below context:\n\n{context}"),
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}"),
])
document_chain = create_stuff_documents_chain(llm, prompt)

retrieval_chain = create_retrieval_chain(retriever_chain, document_chain)

# Test this out end-to-end:
chat_history = [HumanMessage(content="Can LangSmith help test my LLM applications?"), AIMessage(content="Yes!")]
retrieval_chain.invoke({
    "chat_history": chat_history,
    "input": "Tell me how"
})
```

Output:
{'chat_history': [HumanMessage(content='Can LangSmith help test my LLM applications?'),
  AIMessage(content='Yes!')],
 'input': 'Tell me how',
 'context': [Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsLangChain Python DocsLangChain JS/TS DocsLangSmith API DocsSearchGo to AppQuick StartUser GuidePricingSelf-HostingTracingEvaluationProduction Monitoring & AutomationsPrompt HubProxyCookbookUser GuideOn this pageLangSmith User GuideLangSmith is a platform for LLM application development, monitoring, and testing. In this guide, we'll highlight the breadth of workflows LangSmith supports and how they fit into each stage of the application development lifecycle. We hope this will inform users how to best utilize this powerful platform or give them something to consider if they're just starting their journey.Prototyping\u200bPrototyping LLM applications often involves quick experimentation between prompts, model types, retrieval strategy and other parameters.\n
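To keep the conversation going past a single follow-up, append each turn to chat_history before the next call. A usage sketch (the second question is hypothetical):

```python
# Fold the previous turn into the history before asking the next question
response = retrieval_chain.invoke({
    "chat_history": chat_history,
    "input": "Tell me how",
})
chat_history.append(HumanMessage(content="Tell me how"))
chat_history.append(AIMessage(content=response["answer"]))

# A hypothetical follow-up question
retrieval_chain.invoke({
    "chat_history": chat_history,
    "input": "What about monitoring in production?",
})
```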

Agent

The first thing to do when building an agent is to decide which tools it should have access to. For this example, we will give the agent access to two tools:

  • The retriever we just created. This will let it easily answer questions about LangSmith.
  • A search tool. This will let it easily answer questions that require up-to-date information.

First, let's set up a tool for the retriever we just created:

```python
from langchain.tools.retriever import create_retriever_tool

retriever_tool = create_retriever_tool(
    retriever,
    "langsmith_search",
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
)
```

The search tool we will use is Tavily. This requires an API key (they have a generous free tier). After creating a key on their platform, you need to set it as an environment variable:

```python
pip install -U --quiet langchain-community tavily-python
```

Output:

Note: you may need to restart the kernel to use updated packages.
```python
import getpass
import os

os.environ["TAVILY_API_KEY"] = getpass.getpass()
```
```python
from langchain_community.tools.tavily_search import TavilySearchResults

search = TavilySearchResults()
```
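The tool can be tried on its own before wiring it into an agent. A quick sketch (the query is arbitrary; the call returns a list of result dicts with url and content fields):

```python
# Try the search tool directly with a plain-string query
search.invoke("what is the weather in SF?")
```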
```python
# Create a list of the tools we want to use
tools = [retriever_tool, search]
```

```python
# Silence the huggingface/tokenizers fork warning that shows up in the agent logs below
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```
```python
pip install -U --quiet langchainhub langchain-openai
```

Output:

Note: you may need to restart the kernel to use updated packages.

```python
pip install -U certifi cryptography pyOpenSSL
```

Output:

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: certifi in /Users/xuefeng/anaconda3/envs/es/lib/python3.11/site-packages (2024.2.2)
Requirement already satisfied: cryptography in /Users/xuefeng/anaconda3/envs/es/lib/python3.11/site-packages (42.0.5)
Requirement already satisfied: pyOpenSSL in /Users/xuefeng/anaconda3/envs/es/lib/python3.11/site-packages (24.1.0)
Requirement already satisfied: cffi>=1.12 in /Users/xuefeng/anaconda3/envs/es/lib/python3.11/site-packages (from cryptography) (1.16.0)
Requirement already satisfied: pycparser in /Users/xuefeng/anaconda3/envs/es/lib/python3.11/site-packages (from cffi>=1.12->cryptography) (2.21)
Note: you may need to restart the kernel to use updated packages.
```python
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")

# You need to set the OPENAI_API_KEY environment variable or pass it via the `api_key` argument.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, base_url="https://api.chatanywhere.tech/v1")
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
python 复制代码
#调用代理并查看它如何响应! 
#向它询问有关 LangSmith 的问题:
agent_executor.invoke({"input": "how can langsmith help with testing?"})

# 询问天气情况:
agent_executor.invoke({"input": "what is the weather in SF?"})

# 和它进行对话:
chat_history = [HumanMessage(content="Can LangSmith help test my LLM applications?"), AIMessage(content="Yes!")]
agent_executor.invoke({
    "chat_history": chat_history,
    "input": "Tell me how"
})
vbnet 复制代码
> Entering new AgentExecutor chain...


huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using `tokenizers` before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)



Invoking: `langsmith_search` with `{'query': 'how can LangSmith help with testing'}`


Skip to main content🦜️🛠️ LangSmith DocsLangChain Python DocsLangChain JS/TS DocsLangSmith API DocsSearchGo to AppQuick StartUser GuidePricingSelf-HostingTracingEvaluationProduction Monitoring & AutomationsPrompt HubProxyCookbookUser GuideOn this pageLangSmith User GuideLangSmith is a platform for LLM application development, monitoring, and testing. In this guide, we'll highlight the breadth of workflows LangSmith supports and how they fit into each stage of the application development lifecycle. We hope this will inform users how to best utilize this powerful platform or give them something to consider if they're just starting their journey.Prototyping​Prototyping LLM applications often involves quick experimentation between prompts, model types, retrieval strategy and other parameters.
The ability to rapidly understand how the model is performing --- and debug where it is failing --- is incredibly important for this phase.Debugging​When developing new LLM applications, we suggest having LangSmith tracing enabled by default.
Oftentimes, it isn't necessary to look at every single trace. However, when things go wrong (an unexpected end result, infinite agent loop, slower than expected execution, higher than expected token usage), it's extremely helpful to debug by looking through the application traces. LangSmith gives clear visibility and debugging information at each step of an LLM sequence, making it much easier to identify and root-cause issues.


> Finished chain.


> Entering new AgentExecutor chain...

Invoking: `langsmith_search` with `{'query': 'LangSmith LLM application testing services'}`


Skip to main content🦜️🛠️ LangSmith DocsLangChain Python DocsLangChain JS/TS DocsLangSmith API DocsSearchGo to AppQuick StartUser GuidePricingSelf-HostingTracingEvaluationProduction Monitoring & AutomationsPrompt HubProxyCookbookUser GuideOn this pageLangSmith User GuideLangSmith is a platform for LLM application development, monitoring, and testing. In this guide, we'll highlight the breadth of workflows LangSmith supports and how they fit into each stage of the application development lifecycle. We hope this will inform users how to best utilize this powerful platform or give them something to consider if they're just starting their journey.Prototyping​Prototyping LLM applications often involves quick experimentation between prompts, model types, retrieval strategy and other parameters.
The ability to rapidly understand how the model is performing --- and debug where it is failing --- is incredibly important for this phase.Debugging​When developing new LLM applications, we suggest having LangSmith tracing enabled by default.
Oftentimes, it isn't necessary to look at every single trace. However, when things go wrong (an unexpected end result, infinite agent loop, slower than expected execution, higher than expected token usage), it's extremely helpful to debug by looking through the application traces. LangSmith gives clear visibility and debugging information at each step of an LLM sequence, making it much easier to identify and root-cause issues.


> Finished chain.





{'chat_history': [HumanMessage(content='Can LangSmith help test my LLM applications?'),
  AIMessage(content='Yes!')],
 'input': 'Tell me how',
 'output': 'LangSmith is a platform for LLM application development, monitoring, and testing. It supports various workflows that can help in testing LLM applications effectively. Here are some key features and workflows supported by LangSmith for testing LLM applications:\n\n1. Prototyping: Allows quick experimentation between prompts, model types, retrieval strategy, and other parameters.\n2. Debugging: Provides clear visibility and debugging information at each step of an LLM sequence.\n3. Initial Test Set: Enables developers to create datasets of inputs and reference outputs to run tests on LLM applications.\n4. Comparison View: Helps in comparing results for different configurations on the same data points side-by-side.\n5. Playground: Provides a playground environment for rapid iteration and experimentation with different prompts and models.\n6. Beta Testing: Collects data on how LLM applications perform in real-world scenarios and gathers feedback for improvements.\n7. Capturing Feedback: Allows gathering human feedback on the responses produced by the application.\n8. Annotating Traces: Supports sending runs to annotation queues for closer inspection and annotation.\n9. Adding Runs to a Dataset: Enables adding runs as examples to datasets to expand test coverage on real-world scenarios.\n10. Production Monitoring: Provides monitoring charts to track key metrics over time and ensure desirable results at scale.\n11. A/B Testing: Allows for testing changes in prompt, model, or retrieval strategy side-by-side.\n12. Automations: Enables performing actions on traces in near real-time, such as scoring traces or sending them to datasets.\n13. Threads: Groups traces from a single conversation together for easier tracking and annotation across multiple turns.\n\nThese features and workflows in LangSmith can be utilized to effectively test and improve LLM applications.'}
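The introduction also mentioned LangServe for deployment. As a closing sketch (not run in this notebook; it assumes the chain objects above are importable from a script and that `langserve[all]` and `fastapi` are installed), any chain can be exposed as a REST API:

```python
# serve.py: a minimal LangServe sketch
from fastapi import FastAPI
from langserve import add_routes

app = FastAPI(title="LangChain Server", version="1.0")

# Expose the retrieval chain built above at /chain
add_routes(app, retrieval_chain, path="/chain")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="localhost", port=8000)
```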
