1. Introduction
In my previous post, I walked through a simple implementation of AlexNet, the network that set off the deep learning boom:
https://blog.csdn.net/Wu_Deng_Sheng/article/details/157254935
Then in 2014, VGGNet was proposed by the VGG research group at the University of Oxford. Its key innovation: replacing large convolution kernels with stacks of small ones. The goal is to keep the receptive field unchanged while reducing the number of parameters, and to insert more activation layers, strengthening the network's non-linear expressive power.
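As a quick back-of-the-envelope check of that claim, consider a layer with C input and C output channels (a sketch of my own; bias terms ignored):

C = 512
two_3x3 = 2 * (3 * 3) * C * C  # two stacked 3×3 convs: 18·C² weights
one_5x5 = (5 * 5) * C * C      # one 5×5 conv with the same receptive field: 25·C²
print(two_3x3, one_5x5)        # 4718592 6553600, i.e. roughly 28% fewer weights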
This post reconstructs the VGG13 model from scratch to help understand and learn the VGG architecture.
2. Implementation Steps
2.1 Environment Setup
import torchvision.models as models
import torch.nn as nn
import torch
2.2 Comparing Against the Official VGG13
vgg = models.vgg13()
# Print the official VGG13 structure for side-by-side comparison
print(vgg)
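For reference, the printed torchvision model should show VGG13 ("configuration B" in the original paper): five blocks of two 3×3 convolutions each (64, 64 / 128, 128 / 256, 256 / 512, 512 / 512, 512 channels), every block ending in a 2×2 max pool, followed by adaptive average pooling to 7×7 and three fully connected layers (25088→4096, 4096→4096, 4096→1000). This is exactly the structure we rebuild below.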
2.3 A Custom VGG Building Block
class vggLayer(nn.Module):
    def __init__(self, in_cha, mid_cha, out_cha):
        super(vggLayer, self).__init__()
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2)
        # Conv2d args: ① in channels ② out channels ③ 3×3 kernel ④ stride 1 ⑤ padding 1
        # Output size: (H + 2×1 - 3)/1 + 1 = H, so the spatial size is unchanged and only the channel count changes
        self.conv1 = nn.Conv2d(in_cha, mid_cha, 3, 1, 1)
        # Two stacked 3×3 convolutions have the same receptive field as one 5×5 convolution,
        # but fewer weights (2×(3×3) = 18 < 5×5 = 25) and one extra ReLU, so the non-linearity is stronger
        self.conv2 = nn.Conv2d(mid_cha, out_cha, 3, 1, 1)

    # Forward pass
    def forward(self, x):
        x = self.conv1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.relu(x)
        x = self.pool(x)
        return x
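A quick sanity check: pushing a dummy 224×224 input through a single vggLayer should halve the spatial size and only change the channel count:

block = vggLayer(3, 64, 64)
y = block(torch.zeros(1, 3, 224, 224))
# The two 3×3 convs keep H and W, the 2×2 max pool halves them
print(y.size())  # torch.Size([1, 64, 112, 112])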
2.4 Building the VGG13 Network
class MyVgg(nn.Module):
    def __init__(self):
        super(MyVgg, self).__init__()
        # (1,3,224,224) → (1,64,112,112) (spatial size halved, 64 channels)
        self.layer1 = vggLayer(3, 64, 64)
        # (1,64,112,112) → (1,128,56,56) (halved again, 128 channels)
        self.layer2 = vggLayer(64, 128, 128)
        # (1,128,56,56) → (1,256,28,28)
        self.layer3 = vggLayer(128, 256, 256)
        # (1,256,28,28) → (1,512,14,14)
        self.layer4 = vggLayer(256, 512, 512)
        # (1,512,14,14) → (1,512,7,7)
        self.layer5 = vggLayer(512, 512, 512)
        # Adaptive average pooling: forces a 7×7 feature map regardless of input size
        self.adapool = nn.AdaptiveAvgPool2d(7)
        # Dropout layer (drop probability 0.5)
        self.drop = nn.Dropout(0.5)
        self.relu = nn.ReLU()
        # 25088 = 512 (channels) × 7 × 7
        self.fc1 = nn.Linear(25088, 4096)
        self.fc2 = nn.Linear(4096, 4096)
        self.fc3 = nn.Linear(4096, 1000)

    # Forward pass
    def forward(self, x):
        # Five conv blocks, each ending in a pool
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        # Adaptive average pooling
        x = self.adapool(x)
        # Flatten
        x = x.view(x.size(0), -1)
        # Three fully connected layers
        x = self.fc1(x)
        x = self.relu(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.relu(x)
        x = self.drop(x)
        # No activation after the final layer: return raw logits, as in the official VGG13
        x = self.fc3(x)
        return x
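A side effect of the adaptive average pooling is that the network is not tied to 224×224 inputs: any input that survives the five pooling stages is forced to 7×7 before the classifier. For example (an illustration of my own, with a 320×320 input):

out320 = MyVgg()(torch.zeros(1, 3, 320, 320))
# 320 → 160 → 80 → 40 → 20 → 10 through the five blocks, then adapool gives 7×7
print(out320.size())  # torch.Size([1, 1000])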
2.5 Feeding an Input to Verify the Model
# Instantiate VGG13
myVgg = MyVgg()
# Batch size 1, 3-channel RGB, 224×224
img = torch.zeros(1, 3, 224, 224)
# Forward-pass test
out = myVgg(img)
# Print the output shape; it should be (1, 1000)
print("Output shape:", out.size())

# Count total and trainable parameters
def get_parameter_number(model):
    total_num = sum(p.numel() for p in model.parameters())
    trainable_num = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return {'Total': total_num, 'Trainable': trainable_num}

# Check the parameter counts at different levels of the model
print("Full VGG13:", get_parameter_number(myVgg))
print("layer1:", get_parameter_number(myVgg.layer1))
print("layer1.conv1:", get_parameter_number(myVgg.layer1.conv1))
3. Summary
VGG13's core design is to replace one 5×5 convolution with two stacked 3×3 convolutions, cutting parameters while strengthening non-linearity.
Every convolutional stage follows the same recipe, namely two 3×3 convolutions plus one 2×2 max pool, which is why it is factored out into a reusable module (vggLayer).