Prompt Serialization in LangChain

https://python.langchain.com.cn/docs/modules/model_io/prompts/prompt_templates/prompt_serialization


Storing prompts as files, rather than writing them directly in Python code, is often preferable: it makes prompts easier to share, store, and version. This guide explains how to serialize (save) and deserialize (load) prompts in LangChain, covering the different prompt types and serialization options.

Core Design Principles of Serialization

LangChain's prompt serialization follows three key rules:

  1. Supports JSON and YAML: Both formats are human-readable, making them ideal for storing prompts.
  2. Flexible file storage: You can store all prompt components (template, examples, etc.) in one file, or split them into separate files (useful for long templates or reusable parts).
  3. Single loading entry point: Use the load_prompt function to load any type of prompt; there is no need for different functions for different prompt types.
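The single-entry-point idea can be sketched with stdlib-only code. This is a toy stand-in, not LangChain's actual implementation; the function name load_prompt_config and the file toy_prompt.json are made up for illustration, and the YAML branch is omitted to stay dependency-free:

```python
import json

def load_prompt_config(path):
    """Toy stand-in for load_prompt: read a JSON config and return
    a function that formats the template. (The real load_prompt also
    accepts YAML files and other _type values.)"""
    with open(path) as f:
        config = json.load(f)
    if config.get("_type", "prompt") != "prompt":
        raise ValueError("unsupported _type: %s" % config["_type"])
    template = config["template"]
    return lambda **kwargs: template.format(**kwargs)

# Write a config file, then load and format it.
with open("toy_prompt.json", "w") as f:
    json.dump({"_type": "prompt",
               "input_variables": ["adjective", "content"],
               "template": "Tell me a {adjective} joke about {content}."}, f)

fmt = load_prompt_config("toy_prompt.json")
print(fmt(adjective="funny", content="chickens"))
# -> Tell me a funny joke about chickens.
```

Dispatching on the `_type` field is what lets one loader cover every prompt type.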

1. Serialize/Deserialize PromptTemplate

PromptTemplate is the basic prompt type. Below are examples of loading it from YAML, JSON, and a separate template file.

Step 1: Import the load_prompt function

All prompts are loaded with this single function:

python
from langchain.prompts import load_prompt

Example 1: Load PromptTemplate from YAML

First, create a YAML file (simple_prompt.yaml) with the prompt details. The !cat command below shows the file content:

shell
!cat simple_prompt.yaml

File content (output of !cat):

yaml
_type: prompt
input_variables:
    ["adjective", "content"]
template: 
    Tell me a {adjective} joke about {content}.

Load and use the prompt:

python
prompt = load_prompt("simple_prompt.yaml")
print(prompt.format(adjective="funny", content="chickens"))

Output:

Tell me a funny joke about chickens.

Example 2: Load PromptTemplate from JSON

Create a JSON file (simple_prompt.json):

shell
!cat simple_prompt.json

File content (output of !cat):

json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template": "Tell me a {adjective} joke about {content}."
}

Load and use the prompt:

python
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))

Output:

Tell me a funny joke about chickens.

Example 3: Load Template from a Separate File

For long templates, store the template text in a separate file (e.g., simple_template.txt), then reference it in the JSON/YAML config (use template_path instead of template).

  1. First, create the template file:
shell
!cat simple_template.txt

File content (output of !cat):

Tell me a {adjective} joke about {content}.
  2. Create a JSON config file (simple_prompt_with_template_file.json) that references the template:
shell
!cat simple_prompt_with_template_file.json

File content (output of !cat):

json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template_path": "simple_template.txt"
}
  3. Load and use the prompt:
python
prompt = load_prompt("simple_prompt_with_template_file.json")
print(prompt.format(adjective="funny", content="chickens"))

Output:

Tell me a funny joke about chickens.
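The template_path indirection above can be sketched in stdlib-only Python. This is a toy illustration, not LangChain's loader; the function name load_template and the toy_* file names are made up, and paths are assumed to resolve against the current working directory:

```python
import json
from pathlib import Path

def load_template(config_path):
    """Toy sketch of the template_path indirection: when the config
    carries 'template_path' instead of 'template', read the template
    text from the referenced file."""
    config = json.loads(Path(config_path).read_text())
    if "template_path" in config:
        return Path(config["template_path"]).read_text().rstrip("\n")
    return config["template"]

# Recreate the two-file layout from the example.
Path("toy_template.txt").write_text("Tell me a {adjective} joke about {content}.\n")
Path("toy_prompt_with_template_file.json").write_text(json.dumps({
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template_path": "toy_template.txt",
}))

template = load_template("toy_prompt_with_template_file.json")
print(template.format(adjective="funny", content="chickens"))
# -> Tell me a funny joke about chickens.
```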

2. Serialize/Deserialize FewShotPromptTemplate

FewShotPromptTemplate includes examples to guide the model (e.g., for antonyms, translations). Below are examples of loading it from files, with examples stored separately or inline.
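Conceptually, a few-shot prompt is assembled by formatting each example with the example prompt, then joining prefix, formatted examples, and suffix. A stdlib-only sketch of that assembly (the function name assemble_few_shot is made up; this is not LangChain's implementation, which also supports a configurable example separator):

```python
def assemble_few_shot(prefix, example_template, examples, suffix, **inputs):
    """Toy sketch: format each example dict with the example template,
    then join prefix + examples + formatted suffix with newlines."""
    parts = [prefix]
    parts += [example_template.format(**ex) for ex in examples]
    parts.append(suffix.format(**inputs))
    return "\n".join(parts)

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
prompt_text = assemble_few_shot(
    "Write antonyms for the following words.",
    "Input: {input}\nOutput: {output}",
    examples,
    "Input: {adjective}\nOutput:",
    adjective="funny",
)
print(prompt_text)
```

This prints the same text as the LangChain outputs shown in the examples below.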

First: Prepare Example Files

First, create files to store examples (used in later examples).

Example File 1: examples.json

shell
!cat examples.json

File content (output of !cat):

json
[
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"}
]

Example File 2: examples.yaml

shell
!cat examples.yaml

File content (output of !cat):

yaml
- input: happy
  output: sad
- input: tall
  output: short

Example 1: Load FewShotPromptTemplate from YAML (with JSON examples)

Create a YAML config file (few_shot_prompt.yaml) that references examples.json:

shell
!cat few_shot_prompt.yaml

File content (output of !cat):

yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix: 
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.json
suffix:
    "Input: {adjective}\nOutput:"

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt.yaml")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

Example 2: Load FewShotPromptTemplate from YAML (with YAML examples)

Create a YAML config file (few_shot_prompt_yaml_examples.yaml) that references examples.yaml:

shell
!cat few_shot_prompt_yaml_examples.yaml

File content (output of !cat):

yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix: 
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.yaml
suffix:
    "Input: {adjective}\nOutput:"

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

Example 3: Load FewShotPromptTemplate from JSON

Create a JSON config file (few_shot_prompt.json):

shell
!cat few_shot_prompt.json

File content (output of !cat):

json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt.json")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

Example 4: Embed Examples Directly in the Config

Instead of referencing an external example file, embed examples directly in the JSON config (few_shot_prompt_examples_in.json):

shell
!cat few_shot_prompt_examples_in.json

File content (output of !cat):

json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"}
    ],
    "suffix": "Input: {adjective}\nOutput:"
}

Load and use the prompt:

python
prompt = load_prompt("few_shot_prompt_examples_in.json")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
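The pattern behind Examples 1-4 is that the examples field accepts either an inline list of dicts or a path to a JSON file holding the same list. A stdlib-only sketch of that dual behavior (the function name resolve_examples is made up for illustration; this is not LangChain's loader):

```python
import json

def resolve_examples(examples):
    """Toy sketch: the 'examples' field may be an inline list of
    dicts or a path to a JSON file holding the same list."""
    if isinstance(examples, str):   # treat a string as a file path
        with open(examples) as f:
            return json.load(f)
    return examples                 # already an inline list

# Write the external file, then show both forms resolve identically.
inline = [{"input": "happy", "output": "sad"},
          {"input": "tall", "output": "short"}]
with open("examples.json", "w") as f:
    json.dump(inline, f)

print(resolve_examples("examples.json") == resolve_examples(inline))
# -> True
```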

Example 5: Load example_prompt from a Separate File

For a reusable example_prompt (the template that formats each individual example), store it in a separate file and reference it with example_prompt_path (instead of example_prompt).

  1. Create example_prompt.json (the reusable example template):
shell
!cat example_prompt.json

File content (output of !cat):

json
{
    "_type": "prompt",
    "input_variables": ["input", "output"],
    "template": "Input: {input}\nOutput: {output}" 
}
  2. Create the FewShotPromptTemplate config (few_shot_prompt_example_prompt.json):
shell
!cat few_shot_prompt_example_prompt.json

File content (output of !cat):

json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt_path": "example_prompt.json",
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}
  3. Load and use the prompt:
python
prompt = load_prompt("few_shot_prompt_example_prompt.json")
print(prompt.format(adjective="funny"))

Output:

Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:

3. Serialize/Deserialize PromptTemplate with OutputParser

You can include an OutputParser (to extract structured data from model outputs) in the prompt file. Below is an example with a regex-based parser.
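The regex parser idea is simple: match a pattern against the model output and map the capture groups onto named output keys. A stdlib sketch of the same idea (the function name regex_parse is made up; this is not LangChain's RegexParser class itself):

```python
import re

def regex_parse(pattern, output_keys, text):
    """Toy sketch of a regex output parser: match the pattern and
    zip the capture groups with the output keys."""
    match = re.search(pattern, text)
    if match is None:
        raise ValueError("no match for pattern %r" % pattern)
    return dict(zip(output_keys, match.groups()))

result = regex_parse(
    r"(.*?)\nScore: (.*)",
    ["answer", "score"],
    "George Washington was born in 1732 and died in 1799.\nScore: 1/2",
)
print(result)
# -> {'answer': 'George Washington was born in 1732 and died in 1799.', 'score': '1/2'}
```

Note the lazy `(.*?)` group: it stops at the first `\nScore:` rather than consuming the whole text.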

Example: Load Prompt with OutputParser from JSON

  1. Create prompt_with_output_parser.json (includes the parser config):
shell
!cat prompt_with_output_parser.json

File content (output of !cat):

json
{
    "input_variables": [
        "question",
        "student_answer"
    ],
    "output_parser": {
        "regex": "(.*?)\\nScore: (.*)",
        "output_keys": [
            "answer",
            "score"
        ],
        "default_output_key": null,
        "_type": "regex_parser"
    },
    "partial_variables": {},
    "template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:",
    "template_format": "f-string",
    "validate_template": true,
    "_type": "prompt"
}
  2. Load the prompt and use the parser:
python
prompt = load_prompt("prompt_with_output_parser.json")

# Parse a sample model output
result = prompt.output_parser.parse(
    "George Washington was born in 1732 and died in 1799.\nScore: 1/2"
)
print(result)

Output:

{'answer': 'George Washington was born in 1732 and died in 1799.', 'score': '1/2'}