LangChain Basics (04)

LangChain Core Modules: Data Connection - Document Loaders

Use document loaders to load data from a source as Documents. A Document is a piece of text together with its associated metadata.

For example, there are document loaders for loading a simple .txt file, for loading ArXiv papers, and for the text content of any web page.

The Document Class

The following code defines a class named Document, which lets users interact with a document's content: viewing its paragraphs and summary, and using a lookup function to search for specific strings within the document.

from typing import List

from pydantic import BaseModel, Field


# Document class built on BaseModel.
class Document(BaseModel):
    """Interface for interacting with a document."""

    # The main content of the document.
    page_content: str
    # The string currently being looked up.
    lookup_str: str = ""
    # The lookup index, initially 0.
    lookup_index = 0
    # Stores any metadata associated with the document.
    metadata: dict = Field(default_factory=dict)

    @property
    def paragraphs(self) -> List[str]:
        """List of the page's paragraphs."""
        # Split the content into paragraphs on "\n\n".
        return self.page_content.split("\n\n")

    @property
    def summary(self) -> str:
        """Summary of the page (the first paragraph)."""
        # Return the first paragraph as the summary.
        return self.paragraphs[0]

    # This method mimics a command-line find feature.
    def lookup(self, string: str) -> str:
        """Look up a term on the page, imitating cmd-F functionality."""
        # If the input string differs from the current lookup string,
        # reset the lookup string and index.
        if string.lower() != self.lookup_str:
            self.lookup_str = string.lower()
            self.lookup_index = 0
        else:
            # If the input string matches the current lookup string,
            # advance the lookup index by 1.
            self.lookup_index += 1
        # Collect all paragraphs that contain the lookup string.
        lookups = [p for p in self.paragraphs if self.lookup_str in p.lower()]
        # Return a message based on the lookup results.
        if len(lookups) == 0:
            return "No Results"
        elif self.lookup_index >= len(lookups):
            return "No More Results"
        else:
            result_prefix = f"(Result {self.lookup_index + 1}/{len(lookups)})"
            return f"{result_prefix} {lookups[self.lookup_index]}"

The BaseLoader Class

The BaseLoader class defines how documents are loaded from different data sources and provides an optional method for splitting the loaded documents. Using this class as a base, developers can create custom loaders for specific data sources while guaranteeing that every loader offers a load method. The load_and_split method adds the ability to split loaded documents into smaller chunks when needed.

from abc import ABC, abstractmethod
from typing import List, Optional

from langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter


# Base loader class.
class BaseLoader(ABC):
    """Base loader class definition."""

    # Abstract method; every subclass must implement it.
    @abstractmethod
    def load(self) -> List[Document]:
        """Load data and convert it into Document objects."""

    # Loads documents and splits them into smaller chunks.
    def load_and_split(
        self, text_splitter: Optional[TextSplitter] = None
    ) -> List[Document]:
        """Load documents and split them into chunks."""
        # If no text splitter is provided, fall back to the default
        # recursive character splitter.
        if text_splitter is None:
            _text_splitter: TextSplitter = RecursiveCharacterTextSplitter()
        else:
            _text_splitter = text_splitter
        # Load the documents first.
        docs = self.load()
        # Then split each document with _text_splitter.
        return _text_splitter.split_documents(docs)
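
To illustrate the subclassing pattern, here is a minimal hypothetical loader that wraps an in-memory list of strings (StringListLoader is not part of LangChain; it is only a sketch):

class StringListLoader(BaseLoader):
    """Hypothetical loader that turns in-memory strings into Documents."""

    def __init__(self, texts: List[str]):
        self.texts = texts

    def load(self) -> List[Document]:
        # Wrap each string in a Document, recording its position as metadata.
        return [
            Document(page_content=text, metadata={"index": i})
            for i, text in enumerate(self.texts)
        ]

Because load is implemented, the inherited load_and_split method works on it unchanged.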

Loading .txt Files with TextLoader

from langchain.document_loaders import TextLoader

docs = TextLoader('../tests/state_of_the_union.txt', encoding='utf-8').load()
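
TextLoader also inherits load_and_split from BaseLoader, so loading and chunking can be done in one step (a minimal sketch; the chunk_size and chunk_overlap values are arbitrary):

from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = TextLoader('../tests/state_of_the_union.txt', encoding='utf-8').load_and_split(
    text_splitter=splitter
)
print(len(chunks), chunks[0].page_content[:100])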

Loading ArXiv Papers with ArxivLoader

The ArxivLoader Class

The ArxivLoader class retrieves documents from the Arxiv platform. The user supplies a search query, the loader calls the Arxiv API to fetch the documents matching that query, and these documents are then returned in the standard Document format.

from typing import List, Optional

from langchain.utilities.arxiv import ArxivAPIWrapper


# Loader class for the Arxiv platform.
class ArxivLoader(BaseLoader):
    """Load documents from `Arxiv` based on a search query.

    This loader converts Arxiv's raw PDF documents into plain text
    so they are easier to process.
    """

    # Initializer.
    def __init__(
        self,
        query: str,
        load_max_docs: Optional[int] = 100,
        load_all_available_meta: Optional[bool] = False,
    ):
        self.query = query
        """The specific query or keywords passed to the Arxiv API for searching."""
        self.load_max_docs = load_max_docs
        """Upper bound on the number of documents retrieved from the search."""
        self.load_all_available_meta = load_all_available_meta
        """Flag that decides whether to load all metadata associated with the documents."""

    # Load method that fetches documents based on the query.
    def load(self) -> List[Document]:
        arxiv_client = ArxivAPIWrapper(
            load_max_docs=self.load_max_docs,
            load_all_available_meta=self.load_all_available_meta,
        )
        docs = arxiv_client.load(self.query)
        return docs

ArxivLoader takes the following parameters:

  • query: the text used to find documents on ArXiv
  • load_max_docs: defaults to 100. Use it to cap the number of documents downloaded. Downloading all 100 documents takes time, so use a small number when experimenting.
  • load_all_available_meta: defaults to False. By default only the most important fields are downloaded: Published (the date the document was published or last updated), Title, Authors, and Summary. If set to True, the remaining fields are downloaded as well.

Taking the GPT-3 paper (Language Models are Few-Shot Learners) as an example, here is how to use ArxivLoader.

Arxiv link for the GPT-3 paper: https://arxiv.org/abs/2005.14165

from langchain.document_loaders import ArxivLoader
query = "2005.14165"
docs = ArxivLoader(query=query, load_max_docs=5).load()
len(docs)
docs[0].metadata  # meta-information of the Document

{'Published': '2020-07-22',
 'Title': 'Language Models are Few-Shot Learners',
 'Authors': 'Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei',
 'Summary': "Recent work has demonstrated substantial gains on many NLP tasks and\nbenchmarks by pre-training on a large corpus of text followed by fine-tuning on\na specific task. While typically task-agnostic in architecture, this method\nstill requires task-specific fine-tuning datasets of thousands or tens of\nthousands of examples. By contrast, humans can generally perform a new language\ntask from only a few examples or from simple instructions - something which\ncurrent NLP systems still largely struggle to do. Here we show that scaling up\nlanguage models greatly improves task-agnostic, few-shot performance, sometimes\neven reaching competitiveness with prior state-of-the-art fine-tuning\napproaches. Specifically, we train GPT-3, an autoregressive language model with\n175 billion parameters, 10x more than any previous non-sparse language model,\nand test its performance in the few-shot setting. For all tasks, GPT-3 is\napplied without any gradient updates or fine-tuning, with tasks and few-shot\ndemonstrations specified purely via text interaction with the model. GPT-3\nachieves strong performance on many NLP datasets, including translation,\nquestion-answering, and cloze tasks, as well as several tasks that require\non-the-fly reasoning or domain adaptation, such as unscrambling words, using a\nnovel word in a sentence, or performing 3-digit arithmetic. At the same time,\nwe also identify some datasets where GPT-3's few-shot learning still struggles,\nas well as some datasets where GPT-3 faces methodological issues related to\ntraining on large web corpora. Finally, we find that GPT-3 can generate samples\nof news articles which human evaluators have difficulty distinguishing from\narticles written by humans. We discuss broader societal impacts of this finding\nand of GPT-3 in general."}

Loading Web Page Content with UnstructuredURLLoader

It uses the Unstructured library's partitioning functions to detect the MIME type and route the file to the appropriate partitioner.

The loader runs in one of two modes: "single" and "elements". In "single" mode, the document is returned as a single langchain Document object. In "elements" mode, the unstructured library splits the document into elements such as titles and narrative text. You can pass additional unstructured kwargs after mode to apply different unstructured settings.

UnstructuredURLLoader's main parameters:

  • urls: list of web page URLs to load
  • continue_on_failure: defaults to True; whether to continue after a URL fails to load
  • mode: defaults to "single"

Using the ReAct page (https://react-lm.github.io/) as an example:

from langchain.document_loaders import UnstructuredURLLoader
urls = [
    "https://react-lm.github.io/",
]
loader = UnstructuredURLLoader(urls=urls)
data = loader.load()
data[0].metadata
print(data[0].page_content)

Output:

{'source': 'https://react-lm.github.io/'}
ReAct: Synergizing Reasoning and Acting in Language Models

Shunyu Yao,

Jeffrey Zhao,

Dian Yu,

Nan Du,

Izhak Shafran,

Karthik Narasimhan,

Yuan Cao

[Paper]

[Code]

[Blogpost]

[BibTex]

Language models are getting better at reasoning (e.g. chain-of-thought prompting) and acting (e.g. WebGPT, SayCan, ACT-1), but these two directions have remained separate. 
                ReAct asks, what if these two fundamental capabilities are combined?

Abstract

While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics. In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information. We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines, as well as improved human interpretability and trustworthiness over methods without reasoning or acting components. Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes issues of hallucination and error propagation prevalent in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generates human-like task-solving trajectories that are more interpretable than baselines without reasoning traces. On two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples.

ReAct Prompting

A ReAct prompt consists of few-shot task-solving trajectories, with human-written text reasoning traces and actions, as well as environment observations in response to actions (see examples in paper appendix!) 
                ReAct prompting is intuitive and flexible to design, and achieves state-of-the-art few-shot performances across a variety of tasks, from question answering to online shopping!

HotpotQA Example

The reason-only baseline (i.e. chain-of-thought) suffers from misinformation (in red) as it is not grounded to external environments to obtain and update knowledge, and has to rely on limited internal knowledge. 
                The act-only baseline suffers from the lack of reasoning, unable to synthesize the final answer despite having the same actions and observation as ReAct in this case. 
                In contrast, ReAct solves the task with a interpretable and factual trajectory.

ALFWorld Example

For decision making tasks, we design human trajectories with sparse reasoning traces, letting the LM decide when to think vs. act. 
                ReAct isn't perfect --- below is a failure example on ALFWorld. However, ReAct format allows easy human inspection and behavior correction by changing a couple of model thoughts, an exciting novel approach to human alignment!

ReAct Finetuning: Initial Results

Prompting has limited context window and learning support. 
                    Initial finetuning results on HotpotQA using ReAct prompting trajectories suggest:
                    (1) ReAct is the best fintuning format across model sizes; 
                    (2) ReAct finetuned smaller models outperform prompted larger models!

loader = UnstructuredURLLoader(urls=urls, mode="elements")

new_data = loader.load()

new_data[0].page_content

Output:

'ReAct: Synergizing Reasoning and Acting in Language Models'
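
In "elements" mode, each element also carries metadata assigned by unstructured, such as the element's category. A minimal sketch of inspecting the first few elements (the "category" metadata key follows unstructured's conventions and may vary across versions):

# Print each element's category alongside a preview of its text.
for element in new_data[:5]:
    print(element.metadata.get("category"), "->", element.page_content[:60])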