[Introduction to Deep Learning with PyTorch] 7. Save and Load the Model

Save and Load the Model

Table of Contents

  • Save and Load the Model
  • Saving and Loading Model Weights
  • Saving and Loading Models with Shapes
  • References
  • Github

In this section, we will look at how to persist model state by saving, loading, and running model predictions.

```python
import torch
import torchvision.models as models
```

Saving and Loading Model Weights

PyTorch models store the learned parameters in an internal state dictionary called state_dict. These can be persisted via the torch.save method:

```python
model = models.vgg16(weights='IMAGENET1K_V1')
torch.save(model.state_dict(), 'model_weights.pth')
```

Out:

```
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /var/lib/jenkins/.cache/torch/hub/checkpoints/vgg16-397923af.pth

  0%|          | 0.00/528M [00:00<?, ?B/s]
  3%|2         | 13.6M/528M [00:00<00:03, 143MB/s]
  5%|5         | 28.0M/528M [00:00<00:03, 147MB/s]
  8%|8         | 42.5M/528M [00:00<00:03, 149MB/s]
 11%|#         | 56.9M/528M [00:00<00:03, 150MB/s]
 13%|#3        | 71.2M/528M [00:00<00:03, 150MB/s]
 16%|#6        | 85.7M/528M [00:00<00:03, 151MB/s]
 19%|#8        | 100M/528M [00:00<00:02, 151MB/s]
 22%|##1       | 115M/528M [00:00<00:02, 151MB/s]
 24%|##4       | 129M/528M [00:00<00:02, 151MB/s]
 27%|##7       | 143M/528M [00:01<00:02, 151MB/s]
 30%|##9       | 158M/528M [00:01<00:02, 152MB/s]
 33%|###2      | 172M/528M [00:01<00:02, 151MB/s]
 35%|###5      | 187M/528M [00:01<00:02, 151MB/s]
 38%|###8      | 201M/528M [00:01<00:02, 151MB/s]
 41%|####      | 216M/528M [00:01<00:02, 151MB/s]
 44%|####3     | 230M/528M [00:01<00:02, 152MB/s]
 46%|####6     | 245M/528M [00:01<00:01, 152MB/s]
 49%|####9     | 259M/528M [00:01<00:01, 152MB/s]
 52%|#####1    | 274M/528M [00:01<00:01, 152MB/s]
 55%|#####4    | 288M/528M [00:02<00:01, 152MB/s]
 57%|#####7    | 303M/528M [00:02<00:01, 152MB/s]
 60%|######    | 317M/528M [00:02<00:01, 152MB/s]
 63%|######2   | 332M/528M [00:02<00:01, 152MB/s]
 66%|######5   | 346M/528M [00:02<00:01, 151MB/s]
 68%|######8   | 361M/528M [00:02<00:01, 151MB/s]
 71%|#######1  | 375M/528M [00:02<00:01, 151MB/s]
 74%|#######3  | 390M/528M [00:02<00:00, 151MB/s]
 77%|#######6  | 404M/528M [00:02<00:00, 151MB/s]
 79%|#######9  | 418M/528M [00:02<00:00, 151MB/s]
 82%|########2 | 433M/528M [00:03<00:00, 151MB/s]
 85%|########4 | 447M/528M [00:03<00:00, 151MB/s]
 88%|########7 | 462M/528M [00:03<00:00, 151MB/s]
 90%|######### | 476M/528M [00:03<00:00, 151MB/s]
 93%|#########2| 491M/528M [00:03<00:00, 151MB/s]
 96%|#########5| 505M/528M [00:03<00:00, 151MB/s]
 98%|#########8| 520M/528M [00:03<00:00, 151MB/s]
100%|##########| 528M/528M [00:03<00:00, 151MB/s]
```
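
Before loading these weights back, it can help to confirm what torch.save actually wrote. The following is a minimal sketch (assuming the model_weights.pth file created above is in the working directory) showing that a state_dict is simply a mapping from parameter names to tensors:

```python
import torch

# Loading a file that contains a plain state_dict returns an ordered dict
# mapping parameter names to tensors.
state_dict = torch.load('model_weights.pth')

# Print the first few entries: parameter name and tensor shape.
for name, tensor in list(state_dict.items())[:3]:
    print(name, tuple(tensor.shape))
# The output begins with entries such as:
# features.0.weight (64, 3, 3, 3)
# features.0.bias (64,)
```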

To load model weights, you need to create an instance of the same model first, and then load the parameters using the load_state_dict() method.

```python
model = models.vgg16() # we do not specify ``weights``, i.e. create untrained model
model.load_state_dict(torch.load('model_weights.pth'))
model.eval()
```

Out:

```
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
```

Note

Be sure to call the model.eval() method before inferencing, to set the dropout and batch normalization layers to evaluation mode. Failing to do this will yield inconsistent inference results.
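
As a small illustration of why this matters, here is a standalone sketch (not part of the tutorial code above) showing a dropout layer behaving differently in training and evaluation mode:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 4)

drop.train()    # training mode: elements are randomly zeroed, the rest rescaled by 1/(1-p)
print(drop(x))  # varies between calls; each element is either 0. or 2.

drop.eval()     # evaluation mode: dropout is a no-op
print(drop(x))  # always tensor([[1., 1., 1., 1.]])
```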

Saving and Loading Models with Shapes

When loading model weights, we needed to instantiate the model class first, because the class defines the structure of the network. We might want to save the structure of this class together with the model, in which case we can pass model (and not model.state_dict()) to the saving function:

```python
torch.save(model, 'model.pth')
```

We can then load the model like this:

```python
model = torch.load('model.pth')
```

Note

This approach uses the Python pickle module when serializing the model, and therefore relies on the actual class definition being available when loading the model.
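
For example (using a hypothetical TinyNet class purely for illustration, not part of the tutorial above): if you save an instance of your own nn.Module subclass this way, the code that later calls torch.load must be able to import that same class definition, otherwise unpickling fails:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):          # hypothetical model class for illustration
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
torch.save(model, 'tiny_net.pth')  # pickles the whole object; the file records the
                                   # class by its module path and name

# Any later script that loads this file must have the same TinyNet class importable
# under the same module path, or unpickling raises an error.
# (On PyTorch 2.6+, pass weights_only=False, since full-model pickles are no longer
# loaded by default.)
model = torch.load('tiny_net.pth', weights_only=False)
```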

References

Save and Load the Model --- PyTorch Tutorials 2.2.0+cu121 documentation


Github

storm-ice/Get_started_with_PyTorch

