Prompt Serialization in LangChain

https://python.langchain.com.cn/docs/modules/model_io/prompts/prompt_templates/prompt_serialization

Storing prompts as files, rather than hard-coding them in Python, often works better: it makes prompts easier to share, store, and version. This guide explains how to serialize (save) and deserialize (load) prompts in LangChain, covering the different prompt types and serialization options.

Core Design Principles of Serialization

LangChain's prompt serialization follows three key rules:

  1. Supports JSON and YAML: Both formats are human-readable, making them ideal for storing prompts.
  2. Flexible file storage: You can store all prompt components (template, examples, etc.) in one file, or split them into separate files (useful for long templates or reusable parts).
  3. Single loading entry point: Use the load_prompt function to load any type of prompt, with no need for different functions for different prompt types.
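The single-entry-point idea can be sketched in plain Python. This is a toy illustration of the design, not LangChain's implementation; load_prompt_sketch is a hypothetical helper that reads a JSON config and checks its _type field, the way load_prompt dispatches on prompt type.

```python
import json
import os
import tempfile

# Toy sketch of a single loading entry point (illustration only, not
# LangChain's code): read a config file and dispatch on its "_type" field.
def load_prompt_sketch(path: str) -> dict:
    with open(path) as f:
        config = json.load(f)
    if config.get("_type", "prompt") not in ("prompt", "few_shot"):
        raise ValueError(f"unsupported prompt type: {config['_type']}")
    return config

# Round-trip a simple config through a temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"_type": "prompt",
               "input_variables": ["adjective", "content"],
               "template": "Tell me a {adjective} joke about {content}."}, f)
    path = f.name

config = load_prompt_sketch(path)
text = config["template"].format(adjective="funny", content="chickens")
os.remove(path)
```

One function handling every prompt type means callers never need to know in advance whether a file holds a plain prompt or a few-shot prompt.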

1. Serialize/Deserialize PromptTemplate

PromptTemplate is the basic prompt type. Below are examples of loading it from YAML, JSON, and a separate template file.

Step 1: Import the load_prompt function

All prompts are loaded with this single function:

python
from langchain.prompts import load_prompt

Example 1: Load PromptTemplate from YAML

First, create a YAML file (simple_prompt.yaml) with the prompt details. The !cat command displays the file content:

shell
!cat simple_prompt.yaml

File content (output of !cat):

yaml
_type: prompt
input_variables:
    ["adjective", "content"]
template: 
    Tell me a {adjective} joke about {content}.

Load and use the prompt:

python
prompt = load_prompt("simple_prompt.yaml")
print(prompt.format(adjective="funny", content="chickens"))

Output:

Tell me a funny joke about chickens.

Example 2: Load PromptTemplate from JSON

Create a JSON file (simple_prompt.json):

shell
!cat simple_prompt.json

File content (output of !cat):

json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template": "Tell me a {adjective} joke about {content}."
}

Load and use the prompt:

python
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))

Output:

Tell me a funny joke about chickens.

Example 3: Load Template from a Separate File

For long templates, store the template text in a separate file (e.g., simple_template.txt) and reference it from the JSON/YAML config with template_path instead of template.

  1. First, create the template file:
shell
!cat simple_template.txt

File content (output of !cat):

Tell me a {adjective} joke about {content}.

  2. Create a JSON config file (simple_prompt_with_template_file.json) that references the template:
shell
!cat simple_prompt_with_template_file.json

File content (output of !cat):

json 复制代码
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template_path": "simple_template.txt"
}

  3. Load and use the prompt:
python
prompt = load_prompt("simple_prompt_with_template_file.json")
print(prompt.format(adjective="funny", content="chickens"))

Output:

Tell me a funny joke about chickens.
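The path resolution shown above can be sketched without LangChain. resolve_template is a hypothetical helper illustrating the mechanism: when a config carries template_path instead of template, the template text is read from that file before the prompt is built.

```python
import os
import tempfile

# Hypothetical helper illustrating the template_path mechanism: when the
# config has "template_path" instead of "template", read the template text
# from that file (resolved relative to the config's directory).
def resolve_template(config: dict, base_dir: str) -> str:
    if "template_path" in config:
        with open(os.path.join(base_dir, config["template_path"])) as f:
            return f.read().strip()
    return config["template"]

base_dir = tempfile.mkdtemp()
with open(os.path.join(base_dir, "simple_template.txt"), "w") as f:
    f.write("Tell me a {adjective} joke about {content}.\n")

config = {"_type": "prompt",
          "input_variables": ["adjective", "content"],
          "template_path": "simple_template.txt"}
text = resolve_template(config, base_dir).format(adjective="funny",
                                                 content="chickens")
```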

2. Serialize/Deserialize FewShotPromptTemplate

FewShotPromptTemplate includes examples to guide the model (e.g., for antonyms, translations). Below are examples of loading it from files, with examples stored separately or inline.
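Before turning to the files, the assembly a few-shot prompt performs can be sketched in plain Python: the prefix, each example rendered through the example template, and the formatted suffix are joined into one string. A single-newline separator is assumed here to match the outputs shown in this section; this is an illustration, not LangChain's implementation.

```python
# Sketch of few-shot prompt assembly: prefix + rendered examples + suffix.
# A plain "\n" separator is assumed to match the outputs in this section.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_template = "Input: {input}\nOutput: {output}"
prefix = "Write antonyms for the following words."
suffix = "Input: {adjective}\nOutput:"

def format_few_shot(adjective: str) -> str:
    rendered = [example_template.format(**e) for e in examples]
    return "\n".join([prefix, *rendered, suffix.format(adjective=adjective)])

result = format_few_shot("funny")
```

Serialized few-shot configs store exactly these parts (prefix, example_prompt, examples, suffix), which is why they can live in separate files and be recombined at load time.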

First: Prepare Example Files

Create the files that store the examples; the configs in this section reference them.

Example File 1: examples.json

shell
!cat examples.json

File content (output of !cat):

json
[
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"}
]

Example File 2: examples.yaml

shell
!cat examples.yaml

File content (output of !cat):

yaml
- input: happy
  output: sad
- input: tall
  output: short

Example 1: Load FewShotPromptTemplate from YAML (with JSON examples)

Create a YAML config file (few_shot_prompt.yaml) that references examples.json:

shell
!cat few_shot_prompt.yaml

File content (output of !cat):

yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix: 
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.json
suffix:
    "Input: {adjective}\nOutput:"

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt.yaml")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

Example 2: Load FewShotPromptTemplate from YAML (with YAML examples)

Create a YAML config file (few_shot_prompt_yaml_examples.yaml) that references examples.yaml:

shell
!cat few_shot_prompt_yaml_examples.yaml

File content (output of !cat):

yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix: 
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.yaml
suffix:
    "Input: {adjective}\nOutput:"

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

Example 3: Load FewShotPromptTemplate from JSON

Create a JSON config file (few_shot_prompt.json):

shell
!cat few_shot_prompt.json

File content (output of !cat):

json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt.json")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

Example 4: Embed Examples Directly in the Config

Instead of referencing an external example file, embed examples directly in the JSON config (few_shot_prompt_examples_in.json):

shell
!cat few_shot_prompt_examples_in.json

File content (output of !cat):

json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"}
    ],
    "suffix": "Input: {adjective}\nOutput:"
}

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt_examples_in.json")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

Example 5: Load example_prompt from a Separate File

To reuse an example_prompt (the template that formats each individual example), store it in a separate file and reference it with example_prompt_path instead of example_prompt.

  1. Create example_prompt.json (the reusable example template):
shell
!cat example_prompt.json

File content (output of !cat):

json
{
    "_type": "prompt",
    "input_variables": ["input", "output"],
    "template": "Input: {input}\nOutput: {output}" 
}

  2. Create the FewShotPromptTemplate config (few_shot_prompt_example_prompt.json):
shell
!cat few_shot_prompt_example_prompt.json

File content (output of !cat):

json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt_path": "example_prompt.json",
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}

  3. Load and use the prompt:
python
prompt = load_prompt("few_shot_prompt_example_prompt.json")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

3. Serialize/Deserialize PromptTemplate with OutputParser

You can include an OutputParser (to extract structured data from model outputs) in the prompt file. Below is an example with a regex-based parser.
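What the regex-based parser does can be sketched with the standard re module. parse_with_regex is a hypothetical helper written for this illustration; it applies the config's regex to the model output and maps the capture groups onto output_keys, the same idea the regex_parser config describes.

```python
import re

# Hypothetical helper sketching the regex_parser behavior: apply the
# pattern to the model output and zip the capture groups with output_keys.
def parse_with_regex(text: str, pattern: str, output_keys: list) -> dict:
    match = re.search(pattern, text)
    if match is None:
        raise ValueError("output did not match the expected pattern")
    return dict(zip(output_keys, match.groups()))

result = parse_with_regex(
    "George Washington was born in 1732 and died in 1799.\nScore: 1/2",
    r"(.*?)\nScore: (.*)",
    ["answer", "score"],
)
```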

Example: Load Prompt with OutputParser from JSON

  1. Create prompt_with_output_parser.json (includes the parser config):
shell
!cat prompt_with_output_parser.json

File content (output of !cat):

json
{
    "input_variables": [
        "question",
        "student_answer"
    ],
    "output_parser": {
        "regex": "(.*?)\\nScore: (.*)",
        "output_keys": [
            "answer",
            "score"
        ],
        "default_output_key": null,
        "_type": "regex_parser"
    },
    "partial_variables": {},
    "template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:",
    "template_format": "f-string",
    "validate_template": true,
    "_type": "prompt"
}

  2. Load the prompt and use the parser:
python
prompt = load_prompt("prompt_with_output_parser.json")

# Parse a sample model output
result = prompt.output_parser.parse(
    "George Washington was born in 1732 and died in 1799.\nScore: 1/2"
)
print(result)

Output:

{'answer': 'George Washington was born in 1732 and died in 1799.', 'score': '1/2'}