Deep Learning Fundamentals -- An Explanation of ResNet, a PyTorch Reimplementation of ResNet-50, and Bird Image Classification with the Reimplemented ResNet-50

Preface

  • If any network counts as a classic, ResNet certainly does. This article is my study notes on ResNet; I reimplement ResNet-50 in PyTorch and then use it for a bird image classification demo;
  • Feel free to bookmark and follow -- I will keep updating.

An Explanation of ResNet

What is ResNet?

ResNet is a classic CNN architecture proposed by Kaiming He et al. It was designed mainly to address the "degradation" problem that appears as networks grow deeper, and it is used primarily for image classification.

In today's CV field, most architectures borrow from ResNet's ideas in one way or another -- in image classification, object detection, and image recognition; even Transformer models incorporate residual connections.

Highlights of ResNet

  • Network depth beyond 1,000 layers:
    • ❔ ❔ Isn't going past 1,000 layers easy? While studying deep learning I once hit exactly this problem: sometimes deepening a network actually makes accuracy and loss worse. This phenomenon is called model "degradation". ResNet's residual connections guarantee that the next layer's output is no worse than its input, which makes very deep networks feasible.
  • The residual module (residual block): this is the core of ResNet;
  • Extensive use of normalization between convolution layers and activation functions.

Why Residual Connections?

Model degradation, vanishing gradients, and exploding gradients

  • 👉 Model degradation: as the number of layers grows, performance trends downward, ending up worse than a shallower network. As illustrated in the paper, a 56-layer network performs worse than a 20-layer one;
  • 👉 Vanishing gradients: as the number of layers grows, the gradients reaching the first few layers during backpropagation can become tiny, close to 0, so their weights update extremely slowly and training becomes inefficient;
  • 👉 Exploding gradients: as the number of layers grows, gradients during backpropagation can become very large, so weight updates swing wildly; this can make the network unstable or even prevent convergence.

Solutions

  • Vanishing/exploding gradients: insert BN (Batch Normalization) layers between data preprocessing and the network layers to normalize the activations;
  • Model degradation: use residual connections. As the figure in the paper shows, the loss now keeps decreasing as depth increases.

The Residual Network

Before going further, let us first introduce the concept of an identity mapping:

  • The core of an identity mapping is copying: the added layers simply replicate their input and do nothing else.

➿ Think of it this way: suppose we append a few layers to a network A to form a new network B. If A's output passes through the new layers and arrives at B's output unchanged, then A and B have identical error rates, which guarantees that the deepened network is no worse than the original.


ResNet demonstrated that deeper network structures can achieve better results, and the key that makes this work is the residual connection. The structure is shown in the figure below:

The figure above is the residual structure proposed by Kaiming He. It realizes an identity mapping; the block's output is built from two parts:

  • First: the ordinary convolution layers;
  • Second: a shortcut branch wired across them, whose output is simply the input value;

The final result is: convolution output + shortcut output. As a formula:

H(x) = F(x) + x

where F(x) is the output of the convolution layers and x is the input carried by the shortcut branch.

Extreme case: if every parameter inside F(x) is zero, then H(x) is exactly the identity mapping. This guarantees that the final error rate cannot grow just because layers were added.
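To make this concrete, here is a minimal sketch of H(x) = F(x) + x (a toy F(x) of our own, not the full ResNet block):

python
import torch
import torch.nn as nn

# A toy residual block: H(x) = F(x) + x
class TinyResidual(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.f(x) + x

# If every parameter of F is zero, the block reduces to the identity mapping:
blk = TinyResidual(8)
nn.init.zeros_(blk.f.weight)
nn.init.zeros_(blk.f.bias)
x = torch.randn(1, 8, 16, 16)
assert torch.equal(blk(x), x)   # H(x) == x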


ResNet uses two different residual modules, as shown in the figure:

Left:

  • A two-layer residual unit in which both convolutions use 3×3 kernels;
  • Use case: shallower ResNet variants.

Right:

  • A three-layer residual unit known as the bottleneck module. It first applies a 1×1 convolution to reduce dimensionality, then a 3×3 convolution to extract features, and finally a 1×1 convolution to restore the original dimensionality. This sharply reduces the parameter count (a worked comparison follows the formula below) and is used in deeper networks.

The figure below is adapted from a CSDN blogger's notes.

CNN parameter-count formula: kernel size × kernel depth × number of kernels == kernel size × input feature-map depth × output feature-map depth (biases ignored).
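As a quick check of the formula, and of the bottleneck savings promised above (a sketch; the 64/256 channel counts simply mirror the stage-1 sizes used later):

python
import torch.nn as nn

# 3x3 kernel, 64 input channels, 256 output channels, bias ignored:
conv = nn.Conv2d(64, 256, kernel_size=3, bias=False)
assert sum(p.numel() for p in conv.parameters()) == 3 * 3 * 64 * 256   # 147,456

# Bottleneck vs. two plain 3x3 convolutions at 256 channels:
plain      = 2 * (3 * 3 * 256 * 256)                # 1,179,648 parameters
bottleneck = 1*1*256*64 + 3*3*64*64 + 1*1*64*256    # 69,632 parameters
print(plain / bottleneck)                           # roughly 17x fewer parameters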

Classic ResNet variants include ResNet-50, ResNet-101, and more. This article reimplements ResNet-50 in PyTorch and uses it for a simple experiment: bird image classification.

The ResNet-50 architecture is shown below:

Reimplementing ResNet-50

1. Loading the Data

1. Import libraries

python
import torch
import torch.nn as nn
import torchvision
import numpy as np
import os, PIL, pathlib

# Select the device
device = "cuda" if torch.cuda.is_available() else "cpu"

device
Output:
'cuda'

2. Inspect and load the data

The data directory contains two items: a data folder and a weights file.

python
data_dir = "./data/bird_photos"

data_dir = pathlib.Path(data_dir)

# Class names (one sub-folder per class)
classnames = [str(path).split('/')[0] for path in os.listdir(data_dir)]

classnames
Output:
['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']

3. Visualize the data

python
import matplotlib.pyplot as plt
from PIL import Image

# Collect the image file names
data_path_name = "./data/bird_photos/Bananaquit/"
data_path_list = [f for f in os.listdir(data_path_name) if f.endswith(('jpg', 'png'))]

# Create the figure
fig, axes = plt.subplots(2, 8, figsize=(16, 6))

for ax, img_file in zip(axes.flat, data_path_list):
    path_name = os.path.join(data_path_name, img_file)
    img = Image.open(path_name)  # open the image
    # display it
    ax.imshow(img)
    ax.axis('off')

plt.show()


4. Load the dataset

python
from torchvision import transforms, datasets

# Unify the image size
img_height = 224
img_width = 224

data_transforms = transforms.Compose([
    transforms.Resize([img_height, img_width]),
    transforms.ToTensor(),
    transforms.Normalize(   # normalize with ImageNet statistics
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
])

# Load the full dataset
total_data = datasets.ImageFolder(root="./data/bird_photos", transform=data_transforms)
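One caveat worth knowing: ImageFolder assigns label indices by sorted folder name, which may not match the order os.listdir returned above. A quick sanity check (the output shown is illustrative):

python
# Mapping from class name to label index, as ImageFolder sees it:
print(total_data.class_to_idx)
# e.g. {'Bananaquit': 0, 'Black Skimmer': 1, 'Black Throated Bushtiti': 2, 'Cockatoo': 3}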

5. Split the data

python
# 80/20 train/test split
train_size = int(len(total_data) * 0.8)
test_size = len(total_data) - train_size 

train_data, test_data = torch.utils.data.random_split(total_data, [train_size, test_size])
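random_split shuffles randomly, so each run yields a different split. If reproducibility matters, one optional tweak is to pass a seeded generator (the seed 42 here is an arbitrary choice of ours):

python
# Optional: fix the split with a seeded generator for reproducible experiments
train_data, test_data = torch.utils.data.random_split(
    total_data, [train_size, test_size],
    generator=torch.Generator().manual_seed(42)
)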

6. Batch the data with DataLoader

python
batch_size = 32 

train_dl = torch.utils.data.DataLoader(
    train_data,
    batch_size=batch_size,
    shuffle=True
)

test_dl = torch.utils.data.DataLoader(
    test_data,
    batch_size=batch_size,
    shuffle=False
)
python
# Inspect the dimensions of one batch
for data, labels in train_dl:
    print("data shape[N, C, H, W]: ", data.shape)
    print("labels: ", labels)
    break
Output:
data shape[N, C, H, W]:  torch.Size([32, 3, 224, 224])
labels:  tensor([0, 1, 0, 1, 2, 1, 1, 0, 2, 2, 1, 2, 1, 3, 1, 2, 2, 2, 2, 1, 2, 1, 2, 2,
        0, 3, 3, 3, 3, 2, 3, 3])

2. Building the ResNet-50 Network

python
import torch.nn.functional as F

# Residual module one: handles the case where the input and output dimensions match
'''
Kernel sizes: 1       3       1
Key properties:
    Size preserved: the input and output spatial sizes are identical.
    No downsampling: no convolution uses a stride greater than 1, so the feature-map size never changes.
'''
class Identity_block(nn.Module):
    def __init__(self, in_channels, kernel_size, filters):
        super(Identity_block, self).__init__()
        
        # Output channels of the three convolutions
        filter1, filter2, filter3 = filters
        
        # Convolution layer 1
        self.conv1 = nn.Conv2d(in_channels, filter1, kernel_size=1, stride=1)
        self.bn1 = nn.BatchNorm2d(filter1)
        
        # Convolution layer 2
        self.conv2 = nn.Conv2d(filter1, filter2, kernel_size=kernel_size, padding=1)   # output size = (input + 2*padding - kernel)/stride + 1, so padding=1 keeps a 3x3 conv size-preserving
        self.bn2 = nn.BatchNorm2d(filter2)
        
        # Convolution layer 3
        self.conv3 = nn.Conv2d(filter2, filter3, kernel_size=1, stride=1)
        self.bn3 = nn.BatchNorm2d(filter3)
        
    def forward(self, x):
        # Keep the original input for the shortcut
        xx = x
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = self.bn3(self.conv3(x))
        # Residual connection; input and output dimensions are unchanged
        x += xx
        x = F.relu(x)
        
        return x 
    
# Residual module two: handles the case where the input and output dimensions differ
'''
* Kernel sizes are again: 1 3 1
* stride=2 by default
* The shortcut branch here is a Conv2d plus a BN layer, used to bring the input
  up to the main path's output dimensions (see the shape check after the model definition)

Key property:
    The size changes: stride=2 halves the spatial dimensions
'''
class ConvBlock(nn.Module):
    def __init__(self, in_channels, kernel_size, filters, stride=2):
        super(ConvBlock, self).__init__()
        
        filter1, filter2, filter3= filters
        
        # Convolution layer 1
        self.conv1 = nn.Conv2d(in_channels, filter1, kernel_size=1, stride=stride)
        self.bn1 = nn.BatchNorm2d(filter1)
        
        # Convolution layer 2
        self.conv2 = nn.Conv2d(filter1, filter2, kernel_size=kernel_size, padding=1) # padding=1 keeps the spatial size unchanged
        self.bn2 = nn.BatchNorm2d(filter2)
        
        # Convolution layer 3
        self.conv3 = nn.Conv2d(filter2, filter3, kernel_size=1, stride=1)  # stride=1 and a 1x1 kernel: spatial size unchanged
        self.bn3 = nn.BatchNorm2d(filter3)
        
        # Shortcut convolution that matches the main path's output dimensions (the counterpart of the plain x branch in Identity_block)
        self.shortcut = nn.Conv2d(in_channels, filter3, kernel_size=1, stride=stride)
        self.shortcut_bn = nn.BatchNorm2d(filter3)
        
    def forward(self, x):
        xx = x
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = self.bn3(self.conv3(x))
        
        temp = self.shortcut_bn(self.shortcut(xx))
        
        x += temp
        
        x = F.relu(x)
        
        return x 
        
# Define ResNet-50
class ResNet50(nn.Module):
    def __init__(self, classes):   # number of classes
        super().__init__()
        
        # Stem
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
        self.bn1 = nn.BatchNorm2d(64)
        self.max_pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        
        # Stage 1 (no further downsampling here, hence stride=1)
        self.part1_1 = ConvBlock(64, 3, [64, 64, 256], stride=1)
        self.part1_2 = Identity_block(256, 3, [64, 64, 256])
        self.part1_3 = Identity_block(256, 3, [64, 64, 256])
        
        # Stage 2
        self.part2_1 = ConvBlock(256, 3, [128, 128, 512])
        self.part2_2 = Identity_block(512, 3, [128, 128, 512])
        self.part2_3 = Identity_block(512, 3, [128, 128, 512])
        self.part2_4 = Identity_block(512, 3, [128, 128, 512])
        
        # Stage 3
        self.part3_1 = ConvBlock(512, 3, [256, 256, 1024])
        self.part3_2 = Identity_block(1024, 3, [256, 256, 1024])
        self.part3_3 = Identity_block(1024, 3, [256, 256, 1024])
        self.part3_4 = Identity_block(1024, 3, [256, 256, 1024])
        self.part3_5 = Identity_block(1024, 3, [256, 256, 1024])
        self.part3_6 = Identity_block(1024, 3, [256, 256, 1024])
        
        # Stage 4
        self.part4_1 = ConvBlock(1024, 3, [512, 512, 2048])
        self.part4_2 = Identity_block(2048, 3, [512, 512, 2048])
        self.part4_3 = Identity_block(2048, 3, [512, 512, 2048])
        
        # Average pooling: collapses the 7x7 feature map (from a 224x224 input) to 1x1
        self.avg_pool = nn.AvgPool2d(kernel_size=7)
        
        # Fully connected classifier
        self.fn1 = nn.Linear(2048, classes)
        
    def forward(self, x):
        # Stem
        x = F.relu(self.bn1(self.conv1(x)))
        x = self.max_pool(x)
        
        x = self.part1_1(x)
        x = self.part1_2(x)
        x = self.part1_3(x)
        
        x = self.part2_1(x)
        x = self.part2_2(x)
        x = self.part2_3(x)
        x = self.part2_4(x)
        
        x = self.part3_1(x)
        x = self.part3_2(x)
        x = self.part3_3(x)
        x = self.part3_4(x)
        x = self.part3_5(x)
        x = self.part3_6(x)
        
        x = self.part4_1(x)
        x = self.part4_2(x)
        x = self.part4_3(x)
        
        x = self.avg_pool(x)
        
        x = x.view(x.size(0), -1)  # flatten
        x = self.fn1(x)
        
        return x 
        
model = ResNet50(classes=len(classnames)).to(device)

model
Output:
ResNet50(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (max_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (part1_1): ConvBlock(
    (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (shortcut): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
    (shortcut_bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part1_2): Identity_block(
    (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part1_3): Identity_block(
    (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part2_1): ConvBlock(
    (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(2, 2))
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (shortcut): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2))
    (shortcut_bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part2_2): Identity_block(
    (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part2_3): Identity_block(
    (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part2_4): Identity_block(
    (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part3_1): ConvBlock(
    (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(2, 2))
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (shortcut): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2))
    (shortcut_bn): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part3_2): Identity_block(
    (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part3_3): Identity_block(
    (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part3_4): Identity_block(
    (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part3_5): Identity_block(
    (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part3_6): Identity_block(
    (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part4_1): ConvBlock(
    (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(2, 2))
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (shortcut): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2))
    (shortcut_bn): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part4_2): Identity_block(
    (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (part4_3): Identity_block(
    (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1))
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
    (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (avg_pool): AvgPool2d(kernel_size=7, stride=7, padding=0)
  (fn1): Linear(in_features=2048, out_features=4, bias=True)
)
python
model(torch.randn(32, 3, 224, 224).to(device)).shape
Output:
torch.Size([32, 4])
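Before moving on to training, a quick sanity check on the two block types (a sketch of ours; 56×56 is simply the map size stage 1 sees after the stem):

python
# Identity_block keeps channels and spatial size unchanged:
blk = Identity_block(256, 3, [64, 64, 256]).to(device)
print(blk(torch.randn(1, 256, 56, 56).to(device)).shape)   # torch.Size([1, 256, 56, 56])

# ConvBlock changes the channel count and, with stride=2, halves the spatial size:
blk = ConvBlock(256, 3, [128, 128, 512]).to(device)
print(blk(torch.randn(1, 256, 56, 56).to(device)).shape)   # torch.Size([1, 512, 28, 28])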

3. Model Training

1. Training function

python
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    
    train_acc, train_loss = 0, 0
    
    for X, y in dataloader:
        X, y = X.to(device), y.to(device)
        
        # Forward pass
        pred = model(X)
        loss = loss_fn(pred, y)
        
        # Backpropagation and parameter update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
        # Accumulate metrics
        train_loss += loss.item()
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        
    train_acc /= size
    train_loss /= num_batches
    
    return train_acc, train_loss

2. Evaluation function

python
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    
    test_acc, test_loss = 0, 0
    
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            
            pred = model(X)
            loss = loss_fn(pred, y)
            
            test_loss += loss.item()
            test_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        
    test_acc /= size
    test_loss /= num_batches
    
    return test_acc, test_loss

3. Set hyperparameters

python
loss_fn = nn.CrossEntropyLoss()  # loss function
learn_lr = 1e-4                  # learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=learn_lr)   # optimizer

4. Train the model

python
train_acc = []
train_loss = []
test_acc = []
test_loss = []

epochs = 80

for i in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    
    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)
    
    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)
    
    # Print progress
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}')
    print(template.format(i + 1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))
    
print("Done")

5. Visualize the results

python
import matplotlib.pyplot as plt
# Suppress warnings
import warnings
warnings.filterwarnings("ignore")

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Test Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Test Loss')
plt.show()


References

【深度学习】ResNet网络讲解, CSDN blog

K同学啊, training-camp course notes
