Prompt Serialization in LangChain

https://python.langchain.com.cn/docs/modules/model_io/prompts/prompt_templates/prompt_serialization

Storing prompts as files, rather than inline in Python code, is often preferable: it makes prompts easier to share, store, and version. This guide explains how to serialize (save) and deserialize (load) prompts in LangChain, covering the different prompt types and the available serialization options.

Core Design Principles of Serialization

LangChain's prompt serialization follows three key rules:

  1. Supports JSON and YAML: Both formats are human-readable, making them ideal for storing prompts.
  2. Flexible file storage: You can store all prompt components (template, examples, etc.) in one file, or split them into separate files (useful for long templates or reusable parts).
  3. Single loading entry point: Use the load_prompt function to load any type of prompt; there is no need for a different function per prompt type.

1. Serialize/Deserialize PromptTemplate

PromptTemplate is the basic prompt type. Below are examples of loading it from YAML, JSON, and a separate template file.

Step 1: Import the load_prompt function

All prompts are loaded with this single function:

python
from langchain.prompts import load_prompt

Example 1: Load PromptTemplate from YAML

First, create a YAML file (simple_prompt.yaml) with the prompt details. The !cat command below prints the file content:

shell
!cat simple_prompt.yaml

File content (output of !cat):

yaml
_type: prompt
input_variables:
    ["adjective", "content"]
template: 
    Tell me a {adjective} joke about {content}.

Load and use the prompt:

python
prompt = load_prompt("simple_prompt.yaml")
print(prompt.format(adjective="funny", content="chickens"))

Output:

Tell me a funny joke about chickens.

Example 2: Load PromptTemplate from JSON

Create a JSON file (simple_prompt.json):

shell
!cat simple_prompt.json

File content (output of !cat):

json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template": "Tell me a {adjective} joke about {content}."
}

Load and use the prompt:

python
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))

Output:

Tell me a funny joke about chickens.
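
The JSON config above maps straightforwardly onto Python's string formatting. As a rough stdlib-only illustration (not LangChain's actual implementation; the function name `load_simple_prompt` is made up for this sketch), loading and formatting such a file amounts to:

```python
import json

def load_simple_prompt(path):
    # Rough stand-in for load_prompt on a "_type": "prompt" config
    # (illustration only, not LangChain's real code).
    with open(path) as f:
        config = json.load(f)
    if config.get("_type") != "prompt":
        raise ValueError(f"unsupported _type: {config.get('_type')}")
    template = config["template"]
    # LangChain's default template_format is "f-string", which for simple
    # {placeholder} substitution behaves like str.format.
    return lambda **kwargs: template.format(**kwargs)
```

With simple_prompt.json from above, `load_simple_prompt("simple_prompt.json")(adjective="funny", content="chickens")` yields the same "Tell me a funny joke about chickens." string.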

Example 3: Load Template from a Separate File

For long templates, store the template text in a separate file (e.g., simple_template.txt), then reference it in the JSON/YAML config (use template_path instead of template).

  1. First, create the template file:
shell
!cat simple_template.txt

File content (output of !cat):

Tell me a {adjective} joke about {content}.
  2. Create a JSON config file (simple_prompt_with_template_file.json) that references the template:
shell
!cat simple_prompt_with_template_file.json

File content (output of !cat):

json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template_path": "simple_template.txt"
}
  3. Load and use the prompt:
python
prompt = load_prompt("simple_prompt_with_template_file.json")
print(prompt.format(adjective="funny", content="chickens"))

Output:

Tell me a funny joke about chickens.
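
The template_path indirection can be sketched the same way: when the config carries template_path instead of template, read the template text from the referenced file first. A minimal illustration (not LangChain's internals; the `base_dir` parameter is an assumption of this sketch for resolving relative paths):

```python
import os

def resolve_template(config, base_dir="."):
    # If the config references an external file via "template_path",
    # read the template text from that file; otherwise use the inline
    # "template" value.
    if "template_path" in config:
        path = os.path.join(base_dir, config["template_path"])
        with open(path) as f:
            return f.read().strip()
    return config["template"]
```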

2. Serialize/Deserialize FewShotPromptTemplate

FewShotPromptTemplate includes examples to guide the model (e.g., for antonyms, translations). Below are examples of loading it from files, with examples stored separately or inline.

First: Prepare Example Files

First, create files to store examples (used in later examples).

Example File 1: examples.json

shell
!cat examples.json

File content (output of !cat):

json
[
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"}
]

Example File 2: examples.yaml

shell
!cat examples.yaml

File content (output of !cat):

yaml
- input: happy
  output: sad
- input: tall
  output: short

Example 1: Load FewShotPromptTemplate from YAML (with JSON examples)

Create a YAML config file (few_shot_prompt.yaml) that references examples.json:

shell
!cat few_shot_prompt.yaml

File content (output of !cat):

yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix: 
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.json
suffix:
    "Input: {adjective}\nOutput:"

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt.yaml")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

Example 2: Load FewShotPromptTemplate from YAML (with YAML examples)

Create a YAML config file (few_shot_prompt_yaml_examples.yaml) that references examples.yaml:

shell
!cat few_shot_prompt_yaml_examples.yaml

File content (output of !cat):

yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix: 
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.yaml
suffix:
    "Input: {adjective}\nOutput:"

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

Example 3: Load FewShotPromptTemplate from JSON

Create a JSON config file (few_shot_prompt.json):

shell
!cat few_shot_prompt.json

File content (output of !cat):

json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt.json")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
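
The structure of the rendered output follows directly from the config: the prefix, then each example pushed through example_prompt, then the suffix. A minimal stdlib sketch of that assembly (the function name is hypothetical, and this is not LangChain's actual code, which also handles options such as example_separator):

```python
def format_few_shot(prefix, example_template, examples, suffix, **kwargs):
    # Render the prefix, each example through the example template, and
    # the suffix, then join with newlines. A single "\n" separator is
    # used here to mirror the output shown above; LangChain's real
    # FewShotPromptTemplate exposes a configurable example_separator.
    pieces = [prefix]
    pieces += [example_template.format(**ex) for ex in examples]
    pieces.append(suffix.format(**kwargs))
    return "\n".join(pieces)
```

Calling this with the prefix, example template, examples, and suffix from few_shot_prompt.json (and adjective="funny") reproduces the output shown above.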

Example 4: Embed Examples Directly in the Config

Instead of referencing an external example file, embed examples directly in the JSON config (few_shot_prompt_examples_in.json):

shell
!cat few_shot_prompt_examples_in.json

File content (output of !cat):

json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"}
    ],
    "suffix": "Input: {adjective}\nOutput:"
}

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt_examples_in.json")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
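
The only difference between this config and Example 3 is the type of the examples value: a list means inline examples, a string means a path to an examples file. That branch can be sketched as follows (illustration only; LangChain also accepts YAML example files, which this sketch omits):

```python
import json

def resolve_examples(examples):
    # A string value is treated as a path to a JSON examples file;
    # a list is used as-is (inline examples).
    if isinstance(examples, str):
        with open(examples) as f:
            return json.load(f)
    return list(examples)
```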

Example 5: Load example_prompt from a Separate File

When the example_prompt (the template that formats each individual example) is shared across prompts, store it in a separate file and reference it with example_prompt_path (instead of example_prompt).

  1. Create example_prompt.json (the reusable example template):
shell
!cat example_prompt.json

File content (output of !cat):

json
{
    "_type": "prompt",
    "input_variables": ["input", "output"],
    "template": "Input: {input}\nOutput: {output}" 
}
  2. Create the FewShotPromptTemplate config (few_shot_prompt_example_prompt.json):
shell
!cat few_shot_prompt_example_prompt.json

File content (output of !cat):

json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt_path": "example_prompt.json",
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}
  3. Load and use the prompt:
python
prompt = load_prompt("few_shot_prompt_example_prompt.json")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

3. Serialize/Deserialize PromptTemplate with OutputParser

You can include an OutputParser (to extract structured data from model outputs) in the prompt file. Below is an example with a regex-based parser.

Example: Load Prompt with OutputParser from JSON

  1. Create prompt_with_output_parser.json (includes the parser config):
shell
!cat prompt_with_output_parser.json

File content (output of !cat):

json
{
    "input_variables": [
        "question",
        "student_answer"
    ],
    "output_parser": {
        "regex": "(.*?)\\nScore: (.*)",
        "output_keys": [
            "answer",
            "score"
        ],
        "default_output_key": null,
        "_type": "regex_parser"
    },
    "partial_variables": {},
    "template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:",
    "template_format": "f-string",
    "validate_template": true,
    "_type": "prompt"
}
  2. Load the prompt and use the parser:
python
prompt = load_prompt("prompt_with_output_parser.json")

# Parse a sample model output
result = prompt.output_parser.parse(
    "George Washington was born in 1732 and died in 1799.\nScore: 1/2"
)
print(result)

Output:

{'answer': 'George Washington was born in 1732 and died in 1799.', 'score': '1/2'}
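
The regex_parser config maps onto a small amount of `re` code: apply the pattern and pair the capture groups with output_keys. A stdlib sketch of that behavior (a hypothetical helper, not LangChain's RegexParser itself):

```python
import re

def regex_parse(text, pattern=r"(.*?)\nScore: (.*)",
                output_keys=("answer", "score")):
    # Match the pattern and zip each capture group with its output key,
    # mirroring the regex_parser config shown above.
    match = re.search(pattern, text)
    if match is None:
        raise ValueError(f"could not parse output: {text!r}")
    return dict(zip(output_keys, match.groups()))
```

Applied to the sample model output above, this returns the same `{'answer': ..., 'score': '1/2'}` dictionary.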