Natural Language Processing from Beginner to Application: LangChain Chains (General Functionality: LLMChain, RouterChain, and SequentialChain)



LLMChain

LLMChain is one of the most popular ways to query an LLM object. It formats the prompt template with the provided input key values (and memory key values, if available), passes the formatted string to the LLM, and returns the LLM's output. Below we demonstrate additional functionality of the LLMChain class:

from langchain import PromptTemplate, OpenAI, LLMChain

prompt_template = "What is a good name for a company that makes {product}?"

llm = OpenAI(temperature=0)
llm_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain("colorful socks")

Output:

{'product': 'colorful socks', 'text': '\n\nSocktastic!'}
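
Calling the chain as above returns a dict containing both the input and output keys. For comparison, the run method (discussed below) returns only the generated string; a minimal sketch, assuming the same llm_chain as above:

llm_chain.run("colorful socks")  # returns just the string, e.g. '\n\nSocktastic!'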
Additional Ways to Invoke an LLMChain

In addition to the __call__ and run methods shared by all Chain objects, LLMChain provides several other ways to invoke the chain logic:

  • apply: allows us to run the chain against a list of inputs:
input_list = [
    {"product": "socks"},
    {"product": "computer"},
    {"product": "shoes"}
]

llm_chain.apply(input_list)

Output:

[{'text': '\n\nSocktastic!'},
 {'text': '\n\nTechCore Solutions.'},
 {'text': '\n\nFootwear Factory.'}]
  • generate: similar to apply, but returns an LLMResult instead of a string. The LLMResult typically contains useful generation metadata, such as token usage and the finish reason (a sketch of accessing these fields follows the predict examples below).
llm_chain.generate(input_list)

Output:

LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})
  • predict: similar to the run method, except that the input keys are passed as keyword arguments instead of a Python dict.
# Single input example
llm_chain.predict(product="colorful socks")

Output:

'\n\nSocktastic!'

Input:

# Multiple inputs example

template = """Tell me a {adjective} joke about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))

llm_chain.predict(adjective="sad", subject="ducks")

Output:

'\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
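
The fields shown in the LLMResult output above can also be accessed directly; a minimal sketch, assuming the product-name llm_chain and input_list from the generate example (i.e. run before llm_chain is redefined with the joke prompt):

result = llm_chain.generate(input_list)
print(result.generations[0][0].text)             # e.g. '\n\nSocktastic!'
print(result.generations[0][0].generation_info)  # e.g. {'finish_reason': 'stop', 'logprobs': None}
print(result.llm_output["token_usage"])          # e.g. {'prompt_tokens': 36, ...}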
Parsing the Output

By default, LLMChain does not parse the output even when the underlying prompt object has an output parser. If you want to apply an output parser to the LLM output, use predict_and_parse instead of predict and apply_and_parse instead of apply (a short apply_and_parse sketch follows the examples below).

Using only the predict method:

from langchain.output_parsers import CommaSeparatedListOutputParser

output_parser = CommaSeparatedListOutputParser()
template = """List all the colors in a rainbow"""
prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)
llm_chain = LLMChain(prompt=prompt, llm=llm)

llm_chain.predict()

Output:

'\n\nRed, orange, yellow, green, blue, indigo, violet'

Using the predict_and_parse method:

llm_chain.predict_and_parse()

Output:

['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
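
apply_and_parse works the same way for a batch of inputs, parsing each completion with the prompt's output parser. A minimal sketch, assuming the output_parser and llm objects defined above; the template and input values here are only illustrative:

template = "List three colors that go well with {color}."
prompt = PromptTemplate(
    template=template,
    input_variables=["color"],
    output_parser=output_parser,
)
batch_chain = LLMChain(prompt=prompt, llm=llm)

# Each input dict is formatted, sent to the LLM, and the raw completion is
# passed through the output parser, yielding a parsed list per input.
batch_chain.apply_and_parse([{"color": "navy"}, {"color": "olive"}])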
Initializing from a String Template

We can also construct an LLMChain directly from a string template.

template = """Tell me a {adjective} joke about {subject}."""
llm_chain = LLMChain.from_string(llm=llm, template=template)
llm_chain.predict(adjective="sad", subject="ducks")

Output:

'\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'

RouterChain

This section demonstrates how to use a RouterChain to create a chain that dynamically selects the next chain to use based on a given input. A RouterChain typically consists of two components:

  • The router chain itself (responsible for selecting the next chain to call)
  • The destination chains, i.e. the chains the router chain can route to

In this section we focus on the different types of router chains. We show how they are used within a MultiPromptChain to create a question-answering chain that selects the prompt most relevant to a given question and then answers the question using that prompt.

from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate
physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.

Here is a question:
{input}"""


math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.

Here is a question:
{input}"""
prompt_infos = [
    {
        "name": "physics", 
        "description": "Good for answering questions about physics", 
        "prompt_template": physics_template
    },
    {
        "name": "math", 
        "description": "Good for answering math questions", 
        "prompt_template": math_template
    }
]
llm = OpenAI()

# Build one LLMChain per prompt; these are the destination chains the router can choose from
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain

# Fallback chain used when the router does not select any destination
default_chain = ConversationChain(llm=llm, output_key="text")
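
Each destination chain is an ordinary LLMChain, so it can be exercised on its own before any router is wired up; a quick, optional sanity check assuming the objects defined above (the question is only an example):

# Query the physics destination chain directly, bypassing any routing.
print(destination_chains["physics"].run("What is Newton's second law of motion?"))
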
LLMRouterChain

The LLMRouterChain uses an LLM to decide how to route between the destination chains.

from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
# Describe the available destinations so the routing LLM can choose between them
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
print(chain.run("What is black body radiation?"))

Log output:

> Entering new MultiPromptChain chain...
physics: {'input': 'What is black body radiation?'}
> Finished chain.

Output:

Black body radiation is the term used to describe the electromagnetic radiation emitted by a "black body"---an object that absorbs all radiation incident upon it. A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. It does not reflect, emit or transmit energy. This type of radiation is the result of the thermal motion of the body's atoms and molecules, and it is emitted at all wavelengths. The spectrum of radiation emitted is described by Planck's law and is known as the black body spectrum.

Input:

print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"))

Log output:

> Entering new MultiPromptChain chain...
math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'}
> Finished chain.

Output:

The answer is 43. One plus 43 is 44 which is divisible by 3.

Input:

print(chain.run("What is the name of the type of cloud that rins"))

Log output:

> Entering new MultiPromptChain chain...
None: {'input': 'What is the name of the type of cloud that rains?'}
> Finished chain.

Output (the None route above means neither destination matched, so the default ConversationChain answered):

The type of cloud that rains is called a cumulonimbus cloud. It is a tall and dense cloud that is often accompanied by thunder and lightning.
EmbeddingRouterChain

The EmbeddingRouterChain uses embeddings and similarity to route between the destination chains.

from langchain.chains.router.embedding_router import EmbeddingRouterChain
from langchain.embeddings import CohereEmbeddings
from langchain.vectorstores import Chroma
names_and_descriptions = [
    ("physics", ["for questions about physics"]),
    ("math", ["for questions about math"]),
]

# Embed the destination descriptions into a Chroma vector store and route by similarity
# (uses Cohere embeddings, so the cohere and chromadb packages are required)
router_chain = EmbeddingRouterChain.from_names_and_descriptions(
    names_and_descriptions, Chroma, CohereEmbeddings(), routing_keys=["input"]
)
chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
print(chain.run("What is black body radiation?"))

Log output:

> Entering new MultiPromptChain chain...
physics: {'input': 'What is black body radiation?'}
> Finished chain.

Output:

Black body radiation is the emission of energy from an idealized physical body (known as a black body) that is in thermal equilibrium with its environment. It is emitted in a characteristic pattern of frequencies known as a black-body spectrum, which depends only on the temperature of the body. The study of black body radiation is an important part of astrophysics and atmospheric physics, as the thermal radiation emitted by stars and planets can often be approximated as black body radiation.

Input:

print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"))

Log output:

> Entering new MultiPromptChain chain...
math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'}
> Finished chain.

Output:

Answer: The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43.

