LangChain Few-Shot Prompt Templates (Part 2)

https://python.langchain.com.cn/docs/modules/model_io/prompts/prompt_templates/few_shot_examples

This demo shows how the example selector picks the most relevant example to help the LLM answer a user's question effectively.

Step 1: Understand What We're Building

We'll create a tool that:

  • Uses "semantic similarity" to find the most relevant example for a user's question (instead of using all examples).
  • Teaches the LLM to answer questions by first asking follow-ups (just like the examples).
  • Produces a clear, step-by-step answer to a real user question.

Complete Code Demo

python
# 1. Import the tools we need
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
import os

# 2. Set your OpenAI API key (get one from https://platform.openai.com/)
os.environ["OPENAI_API_KEY"] = "your-openai-api-key-here"

# 3. Define our few-shot examples (the same examples as the LangChain docs linked above)
examples = [
  {
    "question": "Who lived longer, Muhammad Ali or Alan Turing?",
    "answer": 
"""
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
"""
  },
  {
    "question": "When was the founder of craigslist born?",
    "answer": 
"""
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
"""
  },
  {
    "question": "Who was the maternal grandfather of George Washington?",
    "answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
"""
  },
  {
    "question": "Are both the directors of Jaws and Casino Royale from the same country?",
    "answer":
"""
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
"""
  }
]

# 4. Create the "example formatter" (how each example is displayed)
example_prompt = PromptTemplate(
    input_variables=["question", "answer"],
    template="Question: {question}\n{answer}"  # Format: "Question: X\nAnswer: Y"
)

# 5. Create the Example Selector (the key part from Part 2!)
# This finds the example most similar to the user's question
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,  # Our list of examples
    OpenAIEmbeddings(),  # Converts text to "meaning numbers" (to check similarity)
    Chroma,  # Stores these numbers to quickly find matches
    k=1  # Pick only the 1 most similar example
)

# 6. Create the few-shot prompt template (using the selector)
prompt = FewShotPromptTemplate(
    example_selector=example_selector,  # Use the selector instead of all examples
    example_prompt=example_prompt,  # How to format the selected example
    suffix="Question: {input}",  # The user's question at the end
    input_variables=["input"]  # The user's question is called "input"
)

# 7. Define the user's question (let's ask something similar to our examples)
user_question = "Who was the paternal grandfather of George Washington?"

# 8. Generate the full prompt (with only the most relevant example)
full_prompt = prompt.format(input=user_question)
print("=== Generated Prompt (with 1 relevant example) ===")
print(full_prompt)
print("\n=== LLM's Answer ===")

# 9. Use the LLM to generate a response
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)  # "temperature=0" = consistent answers
response = llm.predict(full_prompt)
print(response)

What You'll See When You Run It

=== Generated Prompt (with 1 relevant example) ===
Question: Who was the maternal grandfather of George Washington?

Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball


Question: Who was the paternal grandfather of George Washington?

=== LLM's Answer ===
Are follow up questions needed here: Yes.
Follow up: Who was the father of George Washington?
Intermediate answer: The father of George Washington was Augustine Washington.
Follow up: Who was the father of Augustine Washington?
Intermediate answer: The father of Augustine Washington was Lawrence Washington.
So the final answer is: Lawrence Washington

Simple Explanation

  1. Example Selector: Instead of shoving all 4 examples into the prompt, we use a "smart selector." It checks which example is most like the user's question (by comparing the "meaning numbers" the embedding model produces) and only includes that one. This keeps the prompt short and focused. (A sketch of that comparison appears after this list.)

  2. How It Works for Our Question: The user asked about George Washington's paternal grandfather. The selector noticed our 3rd example was about his maternal grandfather (similar topic!) and picked that one. (You can verify the pick yourself with the select_examples() snippet after this list.)

  3. LLM Follows the Pattern: The LLM sees the selected example and copies its style: first asking follow-ups ("Who was George Washington's father?"), then finding intermediate answers, and finally giving a clear final answer.
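
Curious what those "meaning numbers" actually do? Here is a minimal sketch (not part of the demo above, and it assumes numpy is installed) that embeds three of our questions with the same OpenAIEmbeddings model and compares them with cosine similarity; the paternal/maternal pair should score noticeably higher than the unrelated pair:

python
# Minimal sketch: compare question embeddings with cosine similarity.
import numpy as np
from langchain.embeddings import OpenAIEmbeddings

emb = OpenAIEmbeddings()
paternal = np.array(emb.embed_query("Who was the paternal grandfather of George Washington?"))
maternal = np.array(emb.embed_query("Who was the maternal grandfather of George Washington?"))
craigslist = np.array(emb.embed_query("When was the founder of craigslist born?"))

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(paternal, maternal))    # higher: almost the same question
print(cosine(paternal, craigslist))  # lower: unrelated topic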
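
You can also ask the selector directly which example it picked. SemanticSimilarityExampleSelector exposes a select_examples() method that returns the matched examples as plain dictionaries; reusing the example_selector and user_question from the demo above:

python
# Inspect the selector's choice for our question.
selected = example_selector.select_examples({"question": user_question})
for example in selected:
    print(example["question"])
# Expected: "Who was the maternal grandfather of George Washington?"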

Why This Matters

  • Faster & More Accurate: The LLM doesn't waste time reading irrelevant examples.
  • Scalable: If you have 100 examples, the selector will still pick the single best one, so the prompt never gets cluttered (see the add_example() sketch after this list).
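
As a quick illustration of that scalability, the selector also supports add_example(), which embeds a new example and stores it in the vector store at runtime; prompt.format() still pulls in only the k=1 best match. The extra example below is made up for illustration:

python
# Sketch: grow the example pool without changing the prompt template.
example_selector.add_example({
    "question": "Who lived longer, Isaac Newton or Albert Einstein?",  # hypothetical
    "answer": "Are follow up questions needed here: Yes. ...",  # abbreviated for brevity
})
# prompt.format(input=...) still includes only the single most similar example.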

Just replace "your-openai-api-key-here" with your actual API key, and you're good to go!
