🔗 LangChain for LLM Application Development - DeepLearning.AI
Learning objectives
1. LLMChain
2. Sequential Chains
3. Router Chain
LLMChain
python
import warnings
warnings.filterwarnings('ignore')
import os
import pandas as pd
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
python
df = pd.read_csv('Data.csv')
df.head()
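The Review column of Data.csv is reused later in the SequentialChain example (df.Review[5]). If you don't have the course's Data.csv handy, a tiny stand-in with the columns the notebook assumes (Product and Review) could look like the sketch below; the rows are invented purely for illustration.
python
# Hypothetical stand-in for Data.csv (invented rows, same column names).
import pandas as pd

df = pd.DataFrame({
    "Product": ["Queen Size Sheet Set"] * 5 + ["Milk Frother"],
    # Row 5 mirrors the French review that df.Review[5] refers to later on.
    "Review": ["Soft, good quality sheets."] * 5
              + ["Je trouve le goût médiocre. La mousse ne tient pas..."],
})
df.head()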
Import a few LangChain packages.
python
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
Initialize a language model.
python
llm = ChatOpenAI(api_key=os.environ.get('ZHIPUAI_API_KEY'),
                 base_url=os.environ.get('ZHIPUAI_API_URL'),
                 model="glm-4",
                 temperature=0.98)
Define a prompt template.
python
prompt = ChatPromptTemplate.from_template(
"What is the best name to describe \
a company that makes {product}?"
)
We compose these into a chain and run it (using the newer pipe-style API).
python
chain = prompt | llm
chain.invoke({"product": "Queen Size Sheet Set"})
GLM-4's output here is quite different from GPT's; it returns a long list of candidate names:
python
'''
Naming a company that specializes in manufacturing Queen Size Sheet Sets can be creative and reflective of the products' quality, comfort, or the brand's identity. Here are a few suggestions that might resonate well with customers:
1. Queen's Retreat Textiles
2. Linen Luxe Haven
3. Pillowtop Elegance Co.
4. Comfort Queendom
5. Sheet Splendor Studios
6. Royal Slumber Collections
7. Dreamy Queen Sheets
8. Luxe Linen Queens
9. Cozy Castle Sheets
10. Queen Quilted Comforts
When choosing the best name, consider factors such as the target market, brand positioning, and the image you want to convey. It's also essential to ensure that the name is unique, memorable, and not already trademarked by another company in the same industry.
'''
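Note that `prompt | llm` returns a chat message object (an `AIMessage`); the text above is its content. If you only want the plain string, you can pipe in an output parser. A minimal sketch, reusing the `prompt` and `llm` defined above:
python
from langchain_core.output_parsers import StrOutputParser

# The parser pulls the plain string content out of the AIMessage.
name_chain = prompt | llm | StrOutputParser()
print(name_chain.invoke({"product": "Queen Size Sheet Set"}))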
Sequential Chains
python
from langchain.chains import SimpleSequentialChain
llm = ChatOpenAI(api_key=os.environ.get('ZHIPUAI_API_KEY'),
                 base_url=os.environ.get('ZHIPUAI_API_URL'),
                 model="glm-4",
                 temperature=0.9)
Here we define two chains that run one after the other.
python
# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
"What is the best name to describe \
a company that makes {product}?"
)
# Chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)
# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
"Write a 20 words description for the following \
company:{company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)
We instantiate this SimpleSequentialChain and run it; the result still falls somewhat short of GPT's.
python
overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
overall_simple_chain.run("Queen Size Sheet Set")
'''
The best name for a company that specializes in making Queen Size Sheet Sets could be something that reflects the quality, comfort, and size of the product. Here are a few suggestions:
1. Queen's Linen Luxe
2. Regal Sheet Haven
3. Comfort Queen Sheets
4. Royal Slumber Company
5. Queenly Quilted Sheets
6. Opulent Bedding Emporium
7. Serene Sleep Solutions
8. Queen Comfort Collection
9. Luxe Linens for Her Majesty
10. Dreamy Queen Sheets & More
When choosing a name, consider factors such as the target market, the brand image you want to convey, and the uniqueness of the name for branding purposes. It's also important to ensure that the name isn't already trademarked or heavily associated with another brand in the bedding industry.
"Crafting regal comfort: Queen-sized sheets from Royal Slumber Company."
> Finished chain.
'''
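For comparison, the same two-step pipeline can also be written in the pipe style instead of `SimpleSequentialChain`. This is only a sketch, reusing the `first_prompt`, `second_prompt`, and `llm` defined above:
python
from langchain_core.output_parsers import StrOutputParser

# Step 1 produces a company name; the lambda repackages it as the
# {company_name} input that the second prompt expects.
name_chain = first_prompt | llm | StrOutputParser()
description_chain = second_prompt | llm | StrOutputParser()
simple_chain = (
    name_chain
    | (lambda company_name: {"company_name": company_name})
    | description_chain
)
simple_chain.invoke({"product": "Queen Size Sheet Set"})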
If a chain involves multiple inputs and outputs, we can use LangChain's SequentialChain.
The model initialization stays the same; we define the following four chains:
python
# prompt template 1: translate the review to English
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}"
)
# chain 1: input = Review, output = English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt,
                     output_key="English_Review"
                     )
# prompt template 2: summarize the review
second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}"
)
# chain 2: input = English_Review, output = summary
chain_two = LLMChain(llm=llm, prompt=second_prompt,
                     output_key="summary"
                     )
# prompt template 3: detect the language of the review
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}"
)
# chain 3: input = Review, output = language
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="language"
                       )
# prompt template 4: follow-up message in the original language
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}"
)
# chain 4: input = summary + language, output = message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="message"
                      )
We initialize the sequential chain.
python
# overall_chain: input = Review
# outputs = English_Review, summary, language, message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "language", "message"],
    verbose=True
)
Here is a sample output using GLM-4:
python
review = df.Review[5]
overall_chain(review)
'''
{'Review': "Je trouve le goût médiocre. La mousse ne tient pas, c'est bizarre. J'achète les mêmes dans le commerce et le goût est bien meilleur...\nVieux lot ou contrefaçon !?",
'English_Review': "I find the taste mediocre. The foam does not hold, it's strange. I buy the same ones in stores and the taste is much better...\nOld batch or counterfeit!?",
'summary': "The reviewer finds the product's taste mediocre, the foam disappointing, and suspects it might be an old batch or counterfeit compared to the better-tasting ones from stores. \n\n(Note: This summary is a combination of the reviewer's comments and does not reflect my own opinion.)",
'language': 'The review is in French. The translation is:\n\n"I find the taste mediocre. The foam does not hold, it\'s strange. I buy the same ones in stores and the taste is much better... Old batch or counterfeit!?"',
'message': 'Suite à votre résumé en français\xa0: "Je trouve que le goût est médiocre. La mousse ne tient pas, c\'est étrange. J\'achète les mêmes produits dans les magasins et le goût est bien meilleur... Peut-être une vieille loterie ou un produit contrefait!?"\n\nFollow-up response en français :\n\n"Cher client, nous vous remercions pour votre feedback honnête. Votre satisfaction est très importante pour nous. Nous sommes désolés d\'entendre que la qualité du produit que vous avez reçu ne corresponds pas à ce que vous attendiez. Nous prenons vos commentaires très au sérieux et allons enquêter sur les batchs en question pour déterminer si cela pourrait être dû à un lot vieux ou à un problème de contrefaçon. Votre expérience avec notre marque est une priorité et nous ferons de notre mieux pour résoudre cela. Si vous le souhaitez, nous vous proposons également de vous remplacer le produit gratuitement ou de vous offrir un remboursement. Veuillez nous contacter pour discuter de ces options."'}
'''
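As an aside, the same multi-input pipeline can also be written with `RunnablePassthrough.assign`, where each step adds one key to the dict flowing through the chain. The following is a rough sketch under the same four prompts and `llm` as above, not the course's code:
python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

to_text = StrOutputParser()

# Each assign() step adds one key to the running dict, so later prompts
# can reference earlier outputs ({English_Review}, {summary}, {language}).
lcel_review_chain = (
    RunnablePassthrough.assign(English_Review=first_prompt | llm | to_text)
    | RunnablePassthrough.assign(summary=second_prompt | llm | to_text)
    | RunnablePassthrough.assign(language=third_prompt | llm | to_text)
    | RunnablePassthrough.assign(message=fourth_prompt | llm | to_text)
)
lcel_review_chain.invoke({"Review": df.Review[5]})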
Router Chain
When the task gets more complex, we may need a router chain that routes each input to a specific destination chain.
Here we define prompt templates for physics, math, history, and computer science.
python
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.prompts import PromptTemplate
llm = ChatOpenAI(api_key=os.environ.get('ZHIPUAI_API_KEY'),
                 base_url=os.environ.get('ZHIPUAI_API_URL'),
                 model="glm-4",
                 temperature=0)
python
physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise\
and easy to understand manner. \
When you don't know the answer to a question you admit\
that you don't know.
Here is a question:
{input}"""
math_template = """You are a very good mathematician. \
You are great at answering math questions. \
You are so good because you are able to break down \
hard problems into their component parts,
answer the component parts, and then put them together\
to answer the broader question.
Here is a question:
{input}"""
history_template = """You are a very good historian. \
You have an excellent knowledge of and understanding of people,\
events and contexts from a range of historical periods. \
You have the ability to think, reflect, debate, discuss and \
evaluate the past. You have a respect for historical evidence\
and the ability to make use of it to support your explanations \
and judgements.
Here is a question:
{input}"""
computerscience_template = """ You are a successful computer scientist.\
You have a passion for creativity, collaboration,\
forward-thinking, confidence, strong problem-solving capabilities,\
understanding of theories and algorithms, and excellent communication \
skills. You are great at answering coding questions. \
You are so good because you know how to solve a problem by \
describing the solution in imperative steps \
that a machine can easily interpret and you know how to \
choose a solution that has a good balance between \
time complexity and space complexity.
Here is a question:
{input}"""
We give each template a name and a description so the router can choose between them.
python
prompt_infos = [
{
"name": "physics",
"description": "Good for answering questions about physics",
"prompt_template": physics_template
},
{
"name": "math",
"description": "Good for answering math questions",
"prompt_template": math_template
},
{
"name": "History",
"description": "Good for answering history questions",
"prompt_template": history_template
},
{
"name": "computer science",
"description": "Good for answering computer science questions",
"prompt_template": computerscience_template
}
]
Next we define the destination chains and a default chain.
python
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)
Define a router prompt template.
python
MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising\
it will ultimately lead to a better response from the language model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not\
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{{input}}
<< OUTPUT (remember to include the ```json)>>"""
Instantiate the template and wire it into an LLMRouterChain.
python
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
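To see what the router actually has to produce, here is a small, hand-written example of the markdown JSON reply the router prompt asks for, and how `RouterOutputParser` turns it into the dict the routing step consumes (the reply below is made up, not real model output):
python
# A made-up model reply in the format requested by the router prompt.
sample_reply = """```json
{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}
```"""

# RouterOutputParser wraps next_inputs as {"input": ...} and maps a
# destination of "DEFAULT" to None (meaning: fall back to default_chain).
print(RouterOutputParser().parse(sample_reply))
# expected: {'destination': 'physics', 'next_inputs': {'input': 'What is black body radiation?'}}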
Now we put everything together.
python
chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True
)
We can feed in different questions and trace which prompt gets selected.
python
chain.run("What is black body radiation?")
chain.run("what is 2 + 2")
chain.run("Why does every cell in our body contain DNA?")
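`MultiPromptChain` belongs to the legacy chains API. If you prefer the pipe style, similar routing can be built by hand; below is a rough sketch (the classifier prompt, `route` function, and chain names are my own, not from the course):
python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda

to_text = StrOutputParser()

# One destination runnable per subject, plus a default that just forwards the question.
branch_map = {
    p["name"]: ChatPromptTemplate.from_template(p["prompt_template"]) | llm | to_text
    for p in prompt_infos
}
default_branch = ChatPromptTemplate.from_template("{input}") | llm | to_text

# A simple classifier that plays the role of the router prompt above.
classifier = (
    ChatPromptTemplate.from_template(
        "Classify the question as one of: physics, math, History, computer science. "
        "Reply with the name only, or DEFAULT if none fit.\n\nQuestion: {input}"
    )
    | llm
    | to_text
)

def route(state: dict) -> str:
    # Pick the branch named by the classifier, falling back to the default chain.
    branch = branch_map.get(state["topic"].strip(), default_branch)
    return branch.invoke({"input": state["input"]})

lcel_router = {"topic": classifier, "input": lambda x: x["input"]} | RunnableLambda(route)
lcel_router.invoke({"input": "What is black body radiation?"})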