Modern Convolutional Networks in Practice, Part 3: Building AlexNet from Scratch in PyTorch and Training on MNIST

1. AlexNet

AlexNet introduced the following five improvements:

  1. Dropout, to prevent overfitting
  2. ReLU as the activation function, which greatly improved feature extraction
  3. Max pooling to downsample and reduce the dimensionality of feature maps
  4. One of the first large networks trained on GPUs
  5. LRN, local response normalization: it creates a competition mechanism among the activities of local neurons, making the relatively large responses even larger while suppressing neurons with smaller responses, which improves the model's generalization
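Each of these components has a direct counterpart in PyTorch's `torch.nn` API. A minimal sketch of my own (the dummy feature-map shape is an assumption, not from this post) wiring them together:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 32, 13, 13)        # dummy feature map: batch 1, 32 channels, 13x13

relu = nn.ReLU()                      # 2. ReLU activation
pool = nn.MaxPool2d(kernel_size=2)    # 3. max pooling halves the spatial size
drop = nn.Dropout(p=0.5)              # 1. dropout against overfitting (active in train mode)
lrn = nn.LocalResponseNorm(size=5)    # 5. LRN across 5 neighboring channels

y = lrn(pool(relu(x)))
print(y.shape)                        # torch.Size([1, 32, 6, 6])
```

Note that LRN fell out of favor after AlexNet; batch normalization is the usual choice in later architectures.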

2. AlexNet Network Structure

AlexNet(
  (feature): Sequential(
    (0): Conv2d(1, 32, kernel_size=(5, 5), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(96, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (10): ReLU(inplace=True)
    (11): MaxPool2d(kernel_size=2, stride=1, padding=0, dilation=1, ceil_mode=False)
  )
  (classifier): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=4608, out_features=2048, bias=True)
    (2): ReLU(inplace=True)
    (3): Dropout(p=0.5, inplace=False)
    (4): Linear(in_features=2048, out_features=1024, bias=True)
    (5): ReLU(inplace=True)
    (6): Linear(in_features=1024, out_features=10, bias=True)
  )
)
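The `in_features=4608` of the first Linear layer follows from tracing the spatial size of a 28x28 MNIST image through the feature extractor. A quick sanity check (plain Python, my own arithmetic) using the standard conv/pool output-size formula:

```python
# out = (in + 2*padding - kernel) // stride + 1
def out_size(n, k, s=1, p=0):
    return (n + 2 * p - k) // s + 1

n = 28                        # MNIST input is 28x28
n = out_size(n, k=5, p=1)     # Conv2d(1, 32, 5, padding=1)   -> 26
n = out_size(n, k=3, p=1)     # Conv2d(32, 64, 3, padding=1)  -> 26
n = out_size(n, k=2, s=2)     # MaxPool2d(2, stride=2)        -> 13
n = out_size(n, k=3, p=1)     # three 3x3 convs, padding=1    -> 13 (size preserved)
n = out_size(n, k=2, s=1)     # MaxPool2d(2, stride=1)        -> 12

print(32 * n * n)             # 32 channels * 12 * 12 = 4608
```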

3. Building AlexNet in PyTorch

import torch
import torch.nn as nn

class AlexNet(nn.Module):
    def __init__(self, num=10):
        super(AlexNet, self).__init__()
        # Convolutional feature extractor: 1x28x28 input -> 32x12x12 output
        self.feature = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=1),  # 28x28 -> 26x26
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),           # 26x26 -> 26x26
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),                 # 26x26 -> 13x13
            nn.Conv2d(64, 96, kernel_size=3, padding=1),           # 13x13 -> 13x13
            nn.ReLU(inplace=True),
            nn.Conv2d(96, 64, kernel_size=3, padding=1),           # 13x13 -> 13x13
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),           # 13x13 -> 13x13
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=1),                 # 13x13 -> 12x12
        )
        # Fully connected classifier with dropout between the linear layers
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(32 * 12 * 12, 2048),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(2048, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num),
        )

    def forward(self, x):
        x = self.feature(x)
        x = x.view(-1, 32 * 12 * 12)  # flatten to (batch, 4608)
        x = self.classifier(x)
        return x

Printout of the 10-epoch training run:

D:\conda\envs\pytorch\python.exe A:\0_MNIST\train.py
Reading data...
train_data: (60000, 28, 28) train_label (60000,)
test_data: (10000, 28, 28) test_label (10000,)
Initialize neural network
test loss: 2302.56
test accuracy: 10.1 %
epoch step: 1
training loss: 167.49
test loss: 46.66
test accuracy: 98.73 %
epoch step: 2
training loss: 59.43
test loss: 36.14
test accuracy: 98.95 %
epoch step: 3
training loss: 49.94
test loss: 24.93
test accuracy: 99.22 %
epoch step: 4
training loss: 38.7
test loss: 20.42
test accuracy: 99.45 %
epoch step: 5
training loss: 35.07
test loss: 26.18
test accuracy: 99.17 %
epoch step: 6
training loss: 30.65
test loss: 22.65
test accuracy: 99.34 %
epoch step: 7
training loss: 26.34
test loss: 20.5
test accuracy: 99.31 %
epoch step: 8
training loss: 26.24
test loss: 27.69
test accuracy: 99.11 %
epoch step: 9
training loss: 23.14
test loss: 22.55
test accuracy: 99.39 %
epoch step: 10
training loss: 20.22
test loss: 28.51
test accuracy: 99.24 %
Training finished
Process finished with exit code 0

The results are already very good: test accuracy exceeds 99% within a few epochs.
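The per-epoch "test accuracy" figures in the log come from an evaluation pass over the test set. The post does not show that code; a sketch under my own assumptions (`model` is the trained AlexNet, `test_loader` yields `(images, labels)` batches) looks like:

```python
import torch

@torch.no_grad()  # no gradients needed for evaluation
def evaluate(model, test_loader):
    model.eval()  # disable dropout for inference
    correct = total = 0
    for images, labels in test_loader:
        preds = model(images).argmax(dim=1)       # predicted class per sample
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return 100.0 * correct / total                # accuracy in percent
```

Calling `model.eval()` matters here: with dropout left in training mode, the reported accuracy would be noisy and slightly pessimistic.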
