Prompt Serialization in LangChain
Storing prompts as files, rather than writing them inline in Python code, is often better: files are easier to share, store, and version. This guide explains how to serialize (save) and deserialize (load) prompts in LangChain, covering the different prompt types and the available serialization options.
Core Design Principles of Serialization
LangChain's prompt serialization follows three key rules:
- Supports JSON and YAML: Both formats are human-readable, making them ideal for storing prompts.
- Flexible file storage: You can store all prompt components (template, examples, etc.) in one file, or split them into separate files (useful for long templates or reusable parts).
- Single loading entry point: the load_prompt function loads any type of prompt, so you never need a different loader for a different prompt type.
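Serialization works in the other direction too: prompt objects expose a save method that infers the output format from the file extension. A minimal sketch:
python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}.",
)
# The extension picks the format: .json writes JSON, .yaml writes YAML.
prompt.save("simple_prompt.json")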
1. Serialize/Deserialize PromptTemplate
PromptTemplate is the basic prompt type. Below are examples of loading it from YAML, JSON, and a separate template file.
Step 1: Import the load_prompt function
All prompts are loaded with this single function:
python
from langchain.prompts import load_prompt
Example 1: Load PromptTemplate from YAML
First, create a YAML file (simple_prompt.yaml) with the prompt details. The !cat shell command (run in a notebook) displays the file's content:
shell
!cat simple_prompt.yaml
File content (output of !cat):
yaml
_type: prompt
input_variables:
    ["adjective", "content"]
template:
    Tell me a {adjective} joke about {content}.
Load and use the prompt:
python
prompt = load_prompt("simple_prompt.yaml")
print(prompt.format(adjective="funny", content="chickens"))
Output:
Tell me a funny joke about chickens.
Example 2: Load PromptTemplate from JSON
Create a JSON file (simple_prompt.json):
shell
!cat simple_prompt.json
File content (output of !cat):
json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template": "Tell me a {adjective} joke about {content}."
}
Load and use the prompt:
python
prompt = load_prompt("simple_prompt.json")
print(prompt.format(adjective="funny", content="chickens"))
Output:
Tell me a funny joke about chickens.
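Since the YAML and JSON files describe the same prompt, the two loaded objects should compare equal (assuming, as with pydantic models, equality is field-by-field):
python
prompt_yaml = load_prompt("simple_prompt.yaml")
prompt_json = load_prompt("simple_prompt.json")
assert prompt_yaml == prompt_json  # same template and input variables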
Example 3: Load Template from a Separate File
For long templates, store the template text in a separate file (e.g., simple_template.txt), then reference it in the JSON/YAML config (use template_path instead of template).
- First, create the template file:
shell
!cat simple_template.txt
File content (output of !cat):
Tell me a {adjective} joke about {content}.
- Create a JSON config file (simple_prompt_with_template_file.json) that references the template:
shell
!cat simple_prompt_with_template_file.json
File content (output of !cat):
json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template_path": "simple_template.txt"
}
- Load and use the prompt:
python
prompt = load_prompt("simple_prompt_with_template_file.json")
print(prompt.format(adjective="funny", content="chickens"))
Output:
Tell me a funny joke about chickens.
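If you would rather generate these files from code than write them by hand, a standard-library sketch (reusing the file names above) looks like this:
python
import json
from pathlib import Path

# Write the raw template text.
Path("simple_template.txt").write_text("Tell me a {adjective} joke about {content}.")

# Write the config that points at it via template_path.
config = {
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template_path": "simple_template.txt",
}
Path("simple_prompt_with_template_file.json").write_text(json.dumps(config, indent=4))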
2. Serialize/Deserialize FewShotPromptTemplate
FewShotPromptTemplate includes few-shot examples that guide the model (e.g., antonym pairs or translations). The configs below load it from files, with the examples stored either in external files or inline.
First: Prepare Example Files
Create the files that store the examples; the configs in the rest of this section reference them.
Example File 1: examples.json
shell
!cat examples.json
File content (output of !cat):
json
[
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"}
]
Example File 2: examples.yaml
shell
!cat examples.yaml
File content (output of !cat):
yaml
- input: happy
output: sad
- input: tall
output: short
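The example files themselves can also be written programmatically; a minimal sketch using the standard library plus PyYAML (which LangChain already depends on):
python
import json
import yaml  # PyYAML

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
with open("examples.json", "w") as f:
    json.dump(examples, f, indent=4)
with open("examples.yaml", "w") as f:
    yaml.safe_dump(examples, f, sort_keys=False)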
Example 1: Load FewShotPromptTemplate from YAML (with JSON examples)
Create a YAML config file (few_shot_prompt.yaml) that references examples.json:
shell
!cat few_shot_prompt.yaml
File content (output of !cat):
yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix:
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.json
suffix:
    "Input: {adjective}\nOutput:"
Load and use the prompt:
python
prompt = load_prompt("few_shot_prompt.yaml")
print(prompt.format(adjective="funny"))
Output:
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Example 2: Load FewShotPromptTemplate from YAML (with YAML examples)
Create a YAML config file (few_shot_prompt_yaml_examples.yaml) that references examples.yaml:
shell
!cat few_shot_prompt_yaml_examples.yaml
File content (output of !cat):
yaml
_type: few_shot
input_variables:
    ["adjective"]
prefix:
    Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables:
        ["input", "output"]
    template:
        "Input: {input}\nOutput: {output}"
examples:
    examples.yaml
suffix:
    "Input: {adjective}\nOutput:"
Load and use the prompt:
python
prompt = load_prompt("few_shot_prompt_yaml_examples.yaml")
print(prompt.format(adjective="funny"))
Output:
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Example 3: Load FewShotPromptTemplate from JSON
Create a JSON config file (few_shot_prompt.json):
shell
!cat few_shot_prompt.json
File content (output of !cat):
json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}
Load and use the prompt:
python
prompt = load_prompt("few_shot_prompt.json")
print(prompt.format(adjective="funny"))
Output:
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
Example 4: Embed Examples Directly in the Config
Instead of referencing an external example file, embed examples directly in the JSON config (few_shot_prompt_examples_in.json):
shell
!cat few_shot_prompt_examples_in.json
File content (output of !cat):
json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "Input: {input}\nOutput: {output}"
    },
    "examples": [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"}
    ],
    "suffix": "Input: {adjective}\nOutput:"
}
Load and use the prompt:
python
prompt = load_prompt("few_shot_prompt_examples_in.json")
print(prompt.format(adjective="funny"))
Output:
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
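For comparison, here is the same prompt built directly in Python; calling save on it should emit a config with inline examples much like the file above (the exact field layout may differ slightly). The output file name here is arbitrary:
python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
prompt = FewShotPromptTemplate(
    examples=[
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
    ],
    example_prompt=example_prompt,
    prefix="Write antonyms for the following words.",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
prompt.save("few_shot_prompt_roundtrip.json")  # arbitrary file name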
Example 5: Load example_prompt from a Separate File
To reuse an example_prompt (the template that formats each individual example) across prompts, store it in its own file and reference it with example_prompt_path instead of example_prompt.
- Create example_prompt.json (the reusable example template):
shell
!cat example_prompt.json
File content (output of !cat):
json
{
    "_type": "prompt",
    "input_variables": ["input", "output"],
    "template": "Input: {input}\nOutput: {output}"
}
- Create the FewShotPromptTemplate config (few_shot_prompt_example_prompt.json):
shell
!cat few_shot_prompt_example_prompt.json
File content (output of !cat):
json
{
    "_type": "few_shot",
    "input_variables": ["adjective"],
    "prefix": "Write antonyms for the following words.",
    "example_prompt_path": "example_prompt.json",
    "examples": "examples.json",
    "suffix": "Input: {adjective}\nOutput:"
}
- Load and use the prompt:
python
prompt = load_prompt("few_shot_prompt_example_prompt.json")
print(prompt.format(adjective="funny"))
Output:
Write antonyms for the following words.
Input: happy
Output: sad
Input: tall
Output: short
Input: funny
Output:
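Because example_prompt.json is itself a complete prompt config (_type: prompt), it can also be loaded on its own with load_prompt and reused directly in code:
python
example_prompt = load_prompt("example_prompt.json")
print(example_prompt.format(input="hot", output="cold"))
Output:
Input: hot
Output: cold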
3. Serialize/Deserialize PromptTemplate with OutputParser
You can include an OutputParser (to extract structured data from model outputs) in the prompt file. Below is an example with a regex-based parser.
Example: Load Prompt with OutputParser from JSON
- Create prompt_with_output_parser.json (which includes the parser config):
shell
!cat prompt_with_output_parser.json
File content (output of !cat):
json
{
    "input_variables": [
        "question",
        "student_answer"
    ],
    "output_parser": {
        "regex": "(.*?)\\nScore: (.*)",
        "output_keys": [
            "answer",
            "score"
        ],
        "default_output_key": null,
        "_type": "regex_parser"
    },
    "partial_variables": {},
    "template": "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:",
    "template_format": "f-string",
    "validate_template": true,
    "_type": "prompt"
}
- Load the prompt and use the parser:
python
prompt = load_prompt("prompt_with_output_parser.json")
# Parse a sample model output
result = prompt.output_parser.parse(
"George Washington was born in 1732 and died in 1799.\nScore: 1/2"
)
print(result)
Output:
{'answer': 'George Washington was born in 1732 and died in 1799.', 'score': '1/2'}
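A config like this can also be produced from Python by attaching a RegexParser to the prompt before saving; a minimal sketch (import path as in the classic langchain package):
python
from langchain.output_parsers.regex import RegexParser
from langchain.prompts import PromptTemplate

parser = RegexParser(
    regex=r"(.*?)\nScore: (.*)",
    output_keys=["answer", "score"],
)
prompt = PromptTemplate(
    input_variables=["question", "student_answer"],
    template=(
        "Given the following question and student answer, "
        "provide a correct answer and score the student answer.\n"
        "Question: {question}\nStudent Answer: {student_answer}\nCorrect Answer:"
    ),
    output_parser=parser,
)
prompt.save("prompt_with_output_parser.json")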