Machine Learning - Saving and Loading a Trained Model

Once you have trained a model, you can save it and load it back later.

For saving and loading models in PyTorch, there are three main methods you should be aware of.

torch.save: Saves a serialized object to disk using Python's pickle utility. Models, tensors and various other Python objects like dictionaries can be saved using torch.save.

torch.load: Uses pickle's unpickling features to deserialize and load pickled Python object files (like models, tensors or dictionaries) into memory. You can also set which device to load the object to (CPU, GPU, etc.).

torch.nn.Module.load_state_dict: Loads a model's parameter dictionary (model.state_dict()) using a saved state_dict() object.
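
As a quick, hedged illustration of the first two methods, independent of any particular model: torch.save and torch.load can round-trip ordinary Python objects such as a dictionary holding tensors. The file name demo_checkpoint.pt below is just a placeholder.

python
import torch

# Save an arbitrary Python object (here, a dictionary holding a tensor) to disk
checkpoint = {"weights": torch.randn(3, 3), "epoch": 10}
torch.save(checkpoint, "demo_checkpoint.pt")  # hypothetical file name

# Load it back into memory; map_location controls which device the tensors land on
restored = torch.load("demo_checkpoint.pt", map_location="cpu")
print(restored["epoch"])           # 10
print(restored["weights"].shape)   # torch.Size([3, 3])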

pickle, which PyTorch relies on here, is a standard-library module for serializing and deserializing Python objects. It converts a Python object into a byte stream (serialization) and converts a byte stream back into a Python object (deserialization). The pickle module is useful in many situations, especially for saving and loading models and for saving intermediate training state.
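
For reference, this is what pickle's serialize/deserialize cycle looks like on its own, outside of PyTorch (a minimal sketch; the example dictionary is arbitrary).

python
import pickle

# An arbitrary Python object to round-trip
data = {"lr": 0.01, "epochs": 100}

# Serialize: Python object -> byte stream
byte_stream = pickle.dumps(data)

# Deserialize: byte stream -> Python object
restored = pickle.loads(byte_stream)
print(restored == data)  # True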

In deep learning you often need to save a trained model, or intermediate results produced during training, for later use or analysis. PyTorch provides convenient APIs for saving and loading models, and these use the pickle module under the hood to serialize and deserialize objects.


Save the PyTorch model

python
import torch
from pathlib import Path 

# 1. Create models directory
MODEL_PATH = Path("models")
MODEL_PATH.mkdir(parents=True, exist_ok=True)

# 2. Create model save path
MODEL_NAME = "trained_model.pth"
MODEL_SAVE_PATH = MODEL_PATH / MODEL_NAME

# 3. Save the model state dict 
print(f"Saving model to: {MODEL_SAVE_PATH}")
torch.save(obj=model_0.state_dict(),
           f=MODEL_SAVE_PATH)

You should now see the trained_model.pth file saved in the models folder.
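
As the table above notes, torch.save can also serialize the whole model object rather than only its state_dict(). A hedged sketch of that alternative follows; the file name trained_model_full.pth is hypothetical, and loading a full pickled model requires the original class definition to still be importable.

python
# Alternative sketch: serialize the entire model object, not just its state_dict
FULL_MODEL_PATH = MODEL_PATH / "trained_model_full.pth"  # hypothetical file name
torch.save(model_0, FULL_MODEL_PATH)

# Loading it back returns a ready-to-use model instance
# (newer PyTorch versions may require weights_only=False for full-model pickles)
loaded_full_model = torch.load(FULL_MODEL_PATH)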


Load the saved PyTorch model

You can load it in using torch.nn.Module.load_state_dict(torch.load(f)) where f is the filepath of the saved model state_dict().

Why call torch.load() inside torch.nn.Module.load_state_dict()?

Because you only saved the model's state_dict(), which is a dictionary of learned parameters and not the entire model, you first have to load the state_dict() with torch.load() and then pass that state_dict() to a new instance of the model (which is a subclass of nn.Module).
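
Note that the code below assumes a LinearRegressionModel class is already defined elsewhere; this section does not show it. As a minimal sketch (the exact architecture is an assumption), such a subclass of nn.Module might look like this; its parameter names must match the keys in the saved state_dict, otherwise load_state_dict() will complain about missing or unexpected keys.

python
import torch
from torch import nn

class LinearRegressionModel(nn.Module):
    """Hypothetical sketch; the real class must match the saved model's architecture."""
    def __init__(self):
        super().__init__()
        # Parameter names must match the keys in the saved state_dict
        self.weights = nn.Parameter(torch.randn(1, requires_grad=True))
        self.bias = nn.Parameter(torch.randn(1, requires_grad=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = weights * x + bias
        return self.weights * x + self.bias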

python
# Instantiate a new instance of the model 
loaded_model_0 = LinearRegressionModel()

# Load the state_dict of the saved model
loaded_model_0.load_state_dict(torch.load(f=MODEL_SAVE_PATH))

# Output:
<All keys matched successfully>
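
If the state_dict was saved on one device and is being loaded on another (for example, GPU to CPU), torch.load accepts a map_location argument, as mentioned in the table above. A minimal sketch:

python
# Sketch: load a state_dict that was saved on a GPU onto a CPU-only machine
loaded_model_0.load_state_dict(
    torch.load(f=MODEL_SAVE_PATH, map_location=torch.device("cpu"))
)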

Test the loaded model.

python
# Put the loaded model into evaluation mode
loaded_model_0.eval() 

# Use the inference mode context manager to make predictions
with torch.inference_mode():
  loaded_model_preds = loaded_model_0(X_test)

# Compare previous model predictions with loaded model predictions
print(y_preds == loaded_model_preds) 

# Output:
tensor([[True],
        [True],
        [True],
        [True],
        [True],
        [True],
        [True],
        [True],
        [True],
        [True]])
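
The element-wise == comparison works here because the loaded parameters are identical to the originals. If you prefer a single pass/fail check, torch.equal or torch.allclose can be used instead (a small sketch):

python
# Single boolean check instead of an element-wise tensor of True values
print(torch.equal(y_preds, loaded_model_preds))     # exact equality
print(torch.allclose(y_preds, loaded_model_preds))  # equality within floating-point tolerance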

If you've read this far, please give this post a like~
