Getting Started with PyTorch for Computer Vision - Compilation

First edited: 2024/2/14; last edited: 2024/3/9

Reference (Microsoft tutorial): https://learn.microsoft.com/en-us/training/modules/intro-computer-vision-pytorch

For more content, see the author's other columns:

PyTorch basics: https://blog.csdn.net/qq_33345365/category_12591348.html

PyTorch NLP basics: https://blog.csdn.net/qq_33345365/category_12597850.html


Table of contents:

  1. Introduction
  2. Introduction to processing image data
  3. Training a simple dense neural network
  4. Using convolutional neural networks
  5. Training a multi-layer convolutional neural network
  6. Transfer learning with pre-trained networks
  7. Solving vision-related problems with MobileNet
  8. Summary

1. Introduction

Computer vision is a successful branch of artificial intelligence that enables computers to gain insights from digital images and/or video. Neural networks can be used effectively for computer vision tasks.

Suppose you are developing a system to recognize printed text. You have used some algorithmic approach to align the page and cut out the individual characters in the text, and now you need to recognize the individual letters. This problem is called "image classification", because we need to classify an input image into different classes. Other examples of this kind of problem include automatically sorting postcards based on the image, or determining the product type in a delivery system from a photo.

In this module, we will learn how to train image classification neural network models using PyTorch, one of the most popular Python libraries for building neural networks. We start with the simplest model, a fully-connected dense neural network, and the simple MNIST dataset of handwritten digits. We then get acquainted with convolutional neural networks, which are designed to capture 2D image patterns, and switch to the more complex CIFAR-10 dataset. Finally, we use pre-trained networks and transfer learning, which allows us to train models on relatively small datasets.

After completing this module, you will be able to train image classification models on real photographs (such as the cats-and-dogs dataset) and develop image classifiers for your own scenarios.

1.1 Learning objectives

After studying this module, you will be able to:

  • Understand the computer vision tasks that are most often solved with neural networks
  • Understand how convolutional neural networks (CNNs) work
  • Train a neural network to recognize handwritten digits, and to classify cats and dogs
  • Learn how to use transfer learning to solve real-world classification problems with PyTorch

1.2 Prerequisites

  • Basic knowledge of Python
  • Familiarity with the PyTorch framework, including the basics of tensors, backpropagation, and building models
  • Understanding of machine learning concepts such as classification, training/test datasets, accuracy, etc.

2 Introduction to processing image data

2.1 The dataset

We use the MNIST dataset provided by torchvision, a dataset of images of handwritten digits. Each sample consists of two elements:

  1. The actual image of a digit, represented by a tensor of shape 1x28x28.
  2. A label indicating which digit the tensor represents.

Required libraries:

python
import torchvision
import matplotlib.pyplot as plt
from torchvision.transforms import ToTensor

Load the dataset; it contains 60,000 training images and 10,000 test images:

python
data_train = torchvision.datasets.MNIST("./data", download=True, train=True, transform=ToTensor())
data_test = torchvision.datasets.MNIST("./data", download=True, train=False, transform=ToTensor())
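
Each sample can be unpacked directly; a tiny sketch (an addition to the tutorial) showing the structure of one element:

python
img, label = data_train[0]
print(img.shape)  # torch.Size([1, 28, 28])
print(label)      # 5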

Let's visualize the first seven images.

We display the images with matplotlib.pyplot.

Note: use an IDE that can display plots, such as PyCharm.

python
fig, ax = plt.subplots(1, 7)
for i in range(7):
    ax[i].imshow(data_train[i][0].view(28, 28))
    ax[i].set_title(data_train[i][1])
    ax[i].axis("off")

plt.show()

Let's display some characteristics of the dataset:

  1. the number of samples
  2. tensor size and labels
  3. pixel intensities
python
print('Training samples:', len(data_train))  # 60000
print('Test samples:', len(data_test))  # 10000

print('Tensor size:', data_train[0][0].size())  # torch.Size([1, 28, 28])
print('First 10 digits are:', [data_train[i][1] for i in range(10)])  # [5, 0, 4, 1, 9, 2, 1, 3, 1, 4]

# All pixel intensities of the images are represented by floating-point values in between 0 and 1:
print('Min intensity value: ', data_train[0][0].min().item())  # 0.0
print('Max intensity value: ', data_train[0][0].max().item())  # 1.0

The complete code:

python
import torchvision
import matplotlib.pyplot as plt
from torchvision.transforms import ToTensor

data_train = torchvision.datasets.MNIST("./data", download=True, train=True, transform=ToTensor())
data_test = torchvision.datasets.MNIST("./data", download=True, train=False, transform=ToTensor())

fig, ax = plt.subplots(1, 7)
for i in range(7):
    ax[i].imshow(data_train[i][0].view(28, 28))
    ax[i].set_title(data_train[i][1])
    ax[i].axis("off")

plt.show()

print('Training samples:', len(data_train))  # 60000
print('Test samples:', len(data_test))  # 10000

print('Tensor size:', data_train[0][0].size())  # torch.Size([1, 28, 28])
print('First 10 digits are:', [data_train[i][1] for i in range(10)])  # [5, 0, 4, 1, 9, 2, 1, 3, 1, 4]

print('Min intensity value: ', data_train[0][0].min().item())  # 0.0
print('Max intensity value: ', data_train[0][0].max().item())  # 1.0

3 Training a simple dense neural network


This chapter covers the simplest approach to image classification: a fully-connected neural network, also called a perceptron.

Create a new file, dense.py.

Loading the helper file

First, we load a helper Python file, pytorchcv.py, and put it in the same directory as dense.py. It hides implementation details for material that has already been covered.

It can be obtained from:

wget https://raw.githubusercontent.com/MicrosoftDocs/pytorchfundamentals/main/computer-vision-pytorch/pytorchcv.py

Its contents are as follows:

python
# Script file to hide implementation details for PyTorch computer vision module

import builtins
import torch
import torch.nn as nn
from torch.utils import data
import torchvision
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import glob
import os
import zipfile

default_device = 'cuda' if torch.cuda.is_available() else 'cpu'


def load_mnist(batch_size=64):
    builtins.data_train = torchvision.datasets.MNIST('./data',
                                                     download=True, train=True, transform=ToTensor())
    builtins.data_test = torchvision.datasets.MNIST('./data',
                                                    download=True, train=False, transform=ToTensor())
    builtins.train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size)
    builtins.test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)


def train_epoch(net, dataloader, lr=0.01, optimizer=None, loss_fn=nn.NLLLoss()):
    optimizer = optimizer or torch.optim.Adam(net.parameters(), lr=lr)
    net.train()
    total_loss, acc, count = 0, 0, 0
    for features, labels in dataloader:
        optimizer.zero_grad()
        lbls = labels.to(default_device)
        out = net(features.to(default_device))
        loss = loss_fn(out, lbls)  # cross_entropy(out,labels)
        loss.backward()
        optimizer.step()
        total_loss += loss
        _, predicted = torch.max(out, 1)
        acc += (predicted == lbls).sum()
        count += len(labels)
    return total_loss.item() / count, acc.item() / count


def validate(net, dataloader, loss_fn=nn.NLLLoss()):
    net.eval()
    count, acc, loss = 0, 0, 0
    with torch.no_grad():
        for features, labels in dataloader:
            lbls = labels.to(default_device)
            out = net(features.to(default_device))
            loss += loss_fn(out, lbls)
            pred = torch.max(out, 1)[1]
            acc += (pred == lbls).sum()
            count += len(labels)
    return loss.item() / count, acc.item() / count


def train(net, train_loader, test_loader, optimizer=None, lr=0.01, epochs=10, loss_fn=nn.NLLLoss()):
    optimizer = optimizer or torch.optim.Adam(net.parameters(), lr=lr)
    res = {'train_loss': [], 'train_acc': [], 'val_loss': [], 'val_acc': []}
    for ep in range(epochs):
        tl, ta = train_epoch(net, train_loader, optimizer=optimizer, lr=lr, loss_fn=loss_fn)
        vl, va = validate(net, test_loader, loss_fn=loss_fn)
        print(f"Epoch {ep:2}, Train acc={ta:.3f}, Val acc={va:.3f}, Train loss={tl:.3f}, Val loss={vl:.3f}")
        res['train_loss'].append(tl)
        res['train_acc'].append(ta)
        res['val_loss'].append(vl)
        res['val_acc'].append(va)
    return res


def train_long(net, train_loader, test_loader, epochs=5, lr=0.01, optimizer=None, loss_fn=nn.NLLLoss(), print_freq=10):
    optimizer = optimizer or torch.optim.Adam(net.parameters(), lr=lr)
    for epoch in range(epochs):
        net.train()
        total_loss, acc, count = 0, 0, 0
        for i, (features, labels) in enumerate(train_loader):
            lbls = labels.to(default_device)
            optimizer.zero_grad()
            out = net(features.to(default_device))
            loss = loss_fn(out, lbls)
            loss.backward()
            optimizer.step()
            total_loss += loss
            _, predicted = torch.max(out, 1)
            acc += (predicted == lbls).sum()
            count += len(labels)
            if i % print_freq == 0:
                print("Epoch {}, minibatch {}: train acc = {}, train loss = {}".format(epoch, i, acc.item() / count,
                                                                                       total_loss.item() / count))
        vl, va = validate(net, test_loader, loss_fn)
        print("Epoch {} done, validation acc = {}, validation loss = {}".format(epoch, va, vl))


def plot_results(hist):
    plt.figure(figsize=(15, 5))
    plt.subplot(121)
    plt.plot(hist['train_acc'], label='Training acc')
    plt.plot(hist['val_acc'], label='Validation acc')
    plt.legend()
    plt.subplot(122)
    plt.plot(hist['train_loss'], label='Training loss')
    plt.plot(hist['val_loss'], label='Validation loss')
    plt.legend()


def plot_convolution(t, title=''):
    with torch.no_grad():
        c = nn.Conv2d(kernel_size=(3, 3), out_channels=1, in_channels=1)
        c.weight.copy_(t)
        fig, ax = plt.subplots(2, 6, figsize=(8, 3))
        fig.suptitle(title, fontsize=16)
        for i in range(5):
            im = data_train[i][0]
            ax[0][i].imshow(im[0])
            ax[1][i].imshow(c(im.unsqueeze(0))[0][0])
            ax[0][i].axis('off')
            ax[1][i].axis('off')
        ax[0, 5].imshow(t)
        ax[0, 5].axis('off')
        ax[1, 5].axis('off')
        # plt.tight_layout()
        plt.show()


def display_dataset(dataset, n=10, classes=None):
    fig, ax = plt.subplots(1, n, figsize=(15, 3))
    mn = min([dataset[i][0].min() for i in range(n)])
    mx = max([dataset[i][0].max() for i in range(n)])
    for i in range(n):
        ax[i].imshow(np.transpose((dataset[i][0] - mn) / (mx - mn), (1, 2, 0)))
        ax[i].axis('off')
        if classes:
            ax[i].set_title(classes[dataset[i][1]])


def check_image(fn):
    try:
        im = Image.open(fn)
        im.verify()
        return True
    except:
        return False


def check_image_dir(path):
    for fn in glob.glob(path):
        if not check_image(fn):
            print("Corrupt image: {}".format(fn))
            os.remove(fn)


def common_transform():
    std_normalize = torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                     std=[0.229, 0.224, 0.225])
    trans = torchvision.transforms.Compose([
        torchvision.transforms.Resize(256),
        torchvision.transforms.CenterCrop(224),
        torchvision.transforms.ToTensor(),
        std_normalize])
    return trans


def load_cats_dogs_dataset():
    if not os.path.exists('data/PetImages'):
        with zipfile.ZipFile('data/kagglecatsanddogs_5340.zip', 'r') as zip_ref:
            zip_ref.extractall('data')

    check_image_dir('data/PetImages/Cat/*.jpg')
    check_image_dir('data/PetImages/Dog/*.jpg')

    dataset = torchvision.datasets.ImageFolder('data/PetImages', transform=common_transform())
    trainset, testset = torch.utils.data.random_split(dataset, [20000, len(dataset) - 20000])
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=32)
    testloader = torch.utils.data.DataLoader(testset, batch_size=32)  # note: testset, not trainset
    return dataset, trainloader, testloader

Loading the dataset and libraries:

We create mini-batches to speed up training.

python
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
from torchvision.transforms import ToTensor
from pytorchcv import load_mnist, plot_results
from torch.utils import data
from torch.nn.functional import relu, log_softmax

# load_mnist()  # the four lines below do the same as this call, but modifying builtins directly is not recommended
data_train = torchvision.datasets.MNIST("./data", download=True, train=True, transform=ToTensor())
data_test = torchvision.datasets.MNIST("./data", download=True, train=False, transform=ToTensor())
train_loader = torch.utils.data.DataLoader(data_train, batch_size=64)
test_loader = torch.utils.data.DataLoader(data_test, batch_size=64)  # a larger batch size can be used here

3.1 The simplest model

  • The simplest model is a single fully-connected layer, also called a linear layer. For MNIST, it has 784 (28 × 28) inputs and 10 outputs (the digits 0-9). It is called linear because it performs a linear transformation of its input, which can be defined as y = Wx + b, where W is a weight matrix of size 784×10 and b is a bias.

  • A fully-connected layer expects a one-dimensional vector as input, so the 1×28×28 image needs to be turned into a vector of 784 values using the Flatten operation.

  • Because the output of a fully-connected layer is not normalized to between 0 and 1, it cannot be thought of as probabilities. Moreover, if we want the outputs to be probabilities of the different digits, they all need to add up to 1. To turn the output vector into a probability vector, Softmax is often used as the last activation function in a classification neural network; for example, softmax([−1, 1, 2]) ≈ [0.035, 0.259, 0.705]. In PyTorch, we usually use the LogSoftmax function, which computes the logarithms of the output probabilities; to turn the output vector back into actual probabilities, we take torch.exp of the output (a quick numerical check follows this list).
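
As a quick sanity check of the softmax numbers above, a minimal sketch (an addition to the tutorial) that also shows how torch.exp recovers probabilities from log_softmax output:

python
import torch

x = torch.tensor([-1., 1., 2.])
print(torch.softmax(x, dim=0))  # tensor([0.0351, 0.2595, 0.7054])

# log_softmax followed by exp yields the same probability vector
print(torch.exp(torch.log_softmax(x, dim=0)))  # tensor([0.0351, 0.2595, 0.7054])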

This gives us the following model:

python
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 10),
    nn.LogSoftmax(dim=1))

3.2 Training the model

This section introduces a function that trains the network for one epoch.

An epoch is one full pass over the training dataset.

Training proceeds roughly as follows:

  1. Take a minibatch (64 samples) from the input dataset, and compute the predictions for this batch.
  2. The difference between the predictions and the expected results is computed with a loss function. The loss function shows how different the network's output is from the expected output; the goal of training is to minimize the loss.
  3. We compute the gradients of this loss function with respect to the model weights (parameters), and use them to adjust the weights to optimize the network's performance. The amount of adjustment is controlled by a parameter called the learning rate, and the details of the optimization algorithm are defined in the optimizer object.
  4. Repeat these steps until the whole dataset has been processed. Below is a function that performs one epoch of training:
python
def train_epoch(net, dataloader, lr=0.01, optimizer=None, loss_fn=nn.NLLLoss()):
    optimizer = optimizer or torch.optim.Adam(net.parameters(), lr=lr)  # Adam optimizer by default
    net.train()  # switch the network to training mode
    total_loss, acc, count = 0, 0, 0
    for features, labels in dataloader:  # iterate over all batches in the dataset
        optimizer.zero_grad()
        out = net(features)  # compute the predictions the network makes for this batch (out)
        loss = loss_fn(out, labels)  # compute the loss, i.e. the difference between predicted and expected values
        loss.backward()  # compute gradients of the loss w.r.t. the model weights
        optimizer.step()  # adjust the weights to minimize the loss
        total_loss += loss
        _, predicted = torch.max(out, 1)
        acc += (predicted == labels).sum()  # count correctly predicted cases (for accuracy)
        count += len(labels)
    return total_loss.item() / count, acc.item() / count  # average loss, accuracy

a = train_epoch(net, train_loader)
print(a)
#(0.005937494913736979, 0.8927666666666667)

The parameters:

  • the neural network
  • a DataLoader, which defines the data to train on
  • learning rate (lr): defines how fast the network learns. During learning, we show the same data multiple times and adjust the weights each time. If the learning rate is too high, new values will overwrite the old knowledge and the network will perform badly; if it is too small, the learning process will be very slow.
  • loss function (loss_fn): a function that measures the difference between the expected result and the one produced by the network. NLLLoss is used in most classification tasks, so it is the default.
  • optimizer: defines an optimization algorithm. The most traditional algorithm is stochastic gradient descent, but we use a more advanced version called Adam as the default.
  • return value: the average loss and the accuracy on the training dataset.

The function computes and returns the average loss per data item, as well as the training accuracy (the percentage of cases guessed correctly). By watching this loss during training, we can see whether the network is improving and learning from the data provided.

3.3 Validating accuracy

We validate accuracy on the test dataset:

python
def validate(net, dataloader, loss_fn=nn.NLLLoss()):
    net.eval()
    count, acc, loss = 0, 0, 0
    with torch.no_grad():  # no backward pass needed, since we are not training
        for features, labels in dataloader:
            out = net(features)
            loss += loss_fn(out, labels)  # just accumulate the loss
            pred = torch.max(out, 1)[1]  # index of the largest value in out, i.e. the most probable class
            acc += (pred == labels).sum()  # add 1 for each correct prediction
            count += len(labels)
    return loss.item() / count, acc.item() / count


b = validate(net, test_loader)
print(b)
#(0.005853596115112305, 0.8942)

As with the training function, we return the average loss and accuracy over the test dataset.

3.4 Overfitting

Usually, when training a neural network, training and validation accuracy both improve at first. At some point, however, it can happen that training accuracy keeps improving while validation accuracy starts to decrease. That is a sign of overfitting: the model performs well on the training dataset, but poorly on new data.
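
As an addition, a small sketch of how the history returned by the train function defined below could be used to flag the first epoch at which training accuracy still rises while validation accuracy drops:

python
def first_overfit_epoch(hist):
    # hist is the dict of lists returned by the train() function below
    for ep in range(1, len(hist['val_acc'])):
        if hist['train_acc'][ep] > hist['train_acc'][ep - 1] and hist['val_acc'][ep] < hist['val_acc'][ep - 1]:
            return ep
    return None  # no sign of overfitting in the recorded epochs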

Below is a training function that performs both training and validation. It prints the training and validation accuracy for each epoch, and returns a history that can be used to plot the loss and accuracy on a graph.

python
def train(net,train_loader,test_loader,optimizer=None,lr=0.01,epochs=10,loss_fn=nn.NLLLoss()):
    optimizer = optimizer or torch.optim.Adam(net.parameters(),lr=lr)
    res = { 'train_loss' : [], 'train_acc': [], 'val_loss': [], 'val_acc': []}
    for ep in range(epochs):
        tl,ta = train_epoch(net,train_loader,optimizer=optimizer,lr=lr,loss_fn=loss_fn)
        vl,va = validate(net,test_loader,loss_fn=loss_fn)
        print(f"Epoch {ep:2}, Train acc={ta:.3f}, Val acc={va:.3f}, Train loss={tl:.3f}, Val loss={vl:.3f}")
        res['train_loss'].append(tl)
        res['train_acc'].append(ta)
        res['val_loss'].append(vl)
        res['val_acc'].append(va)
    return res
  
# Epoch  0, Train acc=0.893, Val acc=0.894, Train loss=0.006, Val loss=0.006
# Epoch  1, Train acc=0.910, Val acc=0.899, Train loss=0.005, Val loss=0.006
# Epoch  2, Train acc=0.913, Val acc=0.899, Train loss=0.005, Val loss=0.006
# Epoch  3, Train acc=0.915, Val acc=0.897, Train loss=0.005, Val loss=0.006
# Epoch  4, Train acc=0.916, Val acc=0.897, Train loss=0.005, Val loss=0.006


# re-initialize the network to start from scratch
net = nn.Sequential(
        nn.Flatten(), 
        nn.Linear(784,10), # 784 inputs, 10 outputs
        nn.LogSoftmax(dim=1))

hist = train(net,train_loader,test_loader,epochs=5)

plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(hist['train_acc'], label='Training acc')
plt.plot(hist['val_acc'], label='Validation acc')
plt.legend()
plt.subplot(122)
plt.plot(hist['train_loss'], label='Training loss')
plt.plot(hist['val_loss'], label='Validation loss')
plt.legend()
plt.show()

The result is a figure with training/validation accuracy (left) and loss (right); the plot itself is not reproduced here.

3.5 Multi-layer perceptron

To further improve accuracy, we may want to include one or more hidden layers. It can be shown mathematically that a network consisting only of a sequence of linear layers is essentially equivalent to a single linear layer: W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2). It is therefore important to insert non-linear functions between the layers!

ReLU is one of the simplest activation functions. Other activation functions used in deep learning include sigmoid and tanh, but ReLU is used most often in computer vision because it is fast to compute, and using the other functions does not bring any significant benefit. ReLU is defined as:

$\mathrm{ReLU}(x) = \begin{cases} 0, & x < 0 \\ x, & x \ge 0 \end{cases}$

python
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 100),  # 784 inputs, 100 outputs
    nn.ReLU(),  # activation function
    nn.Linear(100, 10),  # 100 inputs, 10 outputs
    nn.LogSoftmax(dim=1))  # dim=1: normalize over the class dimension (dim=0 would normalize over the batch)

summary(net, input_size=(1, 28, 28))  # total parameters: 784*100+100 = 78,500 plus 100*10+10 = 1,010, i.e. 79,510

Output:

bash
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
Sequential                               [1, 10]                   --
├─Flatten: 1-1                           [1, 784]                  --
├─Linear: 1-2                            [1, 100]                  78,500
├─ReLU: 1-3                              [1, 100]                  --
├─Linear: 1-4                            [1, 10]                   1,010
├─LogSoftmax: 1-5                        [1, 10]                   --
==========================================================================================
Total params: 79,510
Trainable params: 79,510
Non-trainable params: 0
Total mult-adds (M): 0.08
==========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.32
Estimated Total Size (MB): 0.32
==========================================================================================

3.6 An alternative way to define the model

python
class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.flatten = nn.Flatten()
        self.hidden = nn.Linear(784, 100)
        self.out = nn.Linear(100, 10)

    def forward(self, x):
        x = self.flatten(x)
        x = self.hidden(x)
        x = relu(x)  # functional activation, not a module
        x = self.out(x)
        x = log_softmax(x, dim=1)  # functional log-softmax over the class dimension
        return x


net = MyNet()

summary(net, input_size=(1, 28, 28), device='cpu')

Output. Note that the functional activations (relu and log_softmax) are not registered as submodules, so they do not appear as layers in the summary:

bash
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
MyNet                                    [1, 10]                   --
├─Flatten: 1-1                           [1, 784]                  --
├─Linear: 1-2                            [1, 100]                  78,500
├─Linear: 1-3                            [1, 10]                   1,010
==========================================================================================
Total params: 79,510
Trainable params: 79,510
Non-trainable params: 0
Total mult-adds (M): 0.08
==========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.32
Estimated Total Size (MB): 0.32
==========================================================================================

3.7 Accuracy change after adding the hidden layer

python
net = MyNet()

summary(net, input_size=(1, 28, 28), device='cpu')

hist = train(net, train_loader, test_loader, epochs=5)
plot_results(hist)
plt.show()

3.8 Complete code

python
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
from torchvision.transforms import ToTensor
from pytorchcv import load_mnist, plot_results
from torch.utils import data
from torch.nn.functional import relu, log_softmax

# load_mnist()  # the four lines below do the same as this call, but modifying builtins directly is not recommended

data_train = torchvision.datasets.MNIST("./data", download=True, train=True, transform=ToTensor())
data_test = torchvision.datasets.MNIST("./data", download=True, train=False, transform=ToTensor())
train_loader = torch.utils.data.DataLoader(data_train, batch_size=64)
test_loader = torch.utils.data.DataLoader(data_test, batch_size=64)  # we can use larger batch size for testing

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 10),  # 784 inputs, 10 outputs
    nn.LogSoftmax(dim=1))


# print('Digit to be predicted: ', data_train[0][1])
# print(torch.exp(net(data_train[0][0])))


def train_epoch(net, dataloader, lr=0.01, optimizer=None, loss_fn=nn.NLLLoss()):
    optimizer = optimizer or torch.optim.Adam(net.parameters(), lr=lr)
    net.train()  # switch the network to training mode
    total_loss, acc, count = 0, 0, 0
    for features, labels in dataloader:  # iterate over all batches in the dataset
        optimizer.zero_grad()
        out = net(features)  # compute the predictions for this batch (out)
        loss = loss_fn(out, labels)  # compute the loss, i.e. the difference between predicted and expected values
        loss.backward()  # compute gradients of the loss w.r.t. the model weights
        optimizer.step()  # adjust the weights to minimize the loss
        total_loss += loss
        _, predicted = torch.max(out, 1)
        acc += (predicted == labels).sum()  # count correctly predicted cases (accuracy)
        count += len(labels)
    return total_loss.item() / count, acc.item() / count  # average loss, accuracy


a = train_epoch(net, train_loader)
print(a)


def validate(net, dataloader, loss_fn=nn.NLLLoss()):
    net.eval()
    count, acc, loss = 0, 0, 0
    with torch.no_grad():
        for features, labels in dataloader:
            out = net(features)
            loss += loss_fn(out, labels)
            pred = torch.max(out, 1)[1]
            acc += (pred == labels).sum()
            count += len(labels)
    return loss.item() / count, acc.item() / count


b = validate(net, test_loader)
print(b)


def train(net, train_loader, test_loader, optimizer=None, lr=0.01, epochs=10, loss_fn=nn.NLLLoss()):
    optimizer = optimizer or torch.optim.Adam(net.parameters(), lr=lr)
    res = {'train_loss': [], 'train_acc': [], 'val_loss': [], 'val_acc': []}
    for ep in range(epochs):
        tl, ta = train_epoch(net, train_loader, optimizer=optimizer, lr=lr, loss_fn=loss_fn)
        vl, va = validate(net, test_loader, loss_fn=loss_fn)
        print(f"Epoch {ep:2}, Train acc={ta:.3f}, Val acc={va:.3f}, Train loss={tl:.3f}, Val loss={vl:.3f}")
        res['train_loss'].append(tl)
        res['train_acc'].append(ta)
        res['val_loss'].append(vl)
        res['val_acc'].append(va)
    return res


# re-initialize the network to start from scratch
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 10),  # 784 inputs, 10 outputs
    nn.LogSoftmax(dim=1))

hist = train(net, train_loader, test_loader, epochs=5)

plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.plot(hist['train_acc'], label='Training acc')
plt.plot(hist['val_acc'], label='Validation acc')
plt.legend()
plt.subplot(122)
plt.plot(hist['train_loss'], label='Training loss')
plt.plot(hist['val_loss'], label='Validation loss')
plt.legend()
plt.show()

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 100),  # 784 inputs, 100 outputs
    nn.ReLU(),  # Activation Function
    nn.Linear(100, 10),  # 100 inputs, 10 outputs
    nn.LogSoftmax(dim=1))  # normalize over the class dimension

summary(net, input_size=(1, 28, 28))  # total parameters: 784*100+100 = 78,500 plus 100*10+10 = 1,010, i.e. 79,510


# an alternative syntax for defining the model

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.flatten = nn.Flatten()
        self.hidden = nn.Linear(784, 100)
        self.out = nn.Linear(100, 10)

    def forward(self, x):
        x = self.flatten(x)
        x = self.hidden(x)
        x = relu(x)  # functional activation, not a module
        x = self.out(x)
        x = log_softmax(x, dim=1)  # functional log-softmax over the class dimension
        return x


net = MyNet()

summary(net, input_size=(1, 28, 28), device='cpu')

hist = train(net, train_loader, test_loader, epochs=5)
plot_results(hist)
plt.show()

4 Using convolutional neural networks

4.1 CNNs


The previous part described how to define multi-layer neural networks using class definitions, but those networks were generic and not specialized for computer vision tasks. This part describes convolutional neural networks (CNNs), which are designed specifically for computer vision.

Computer vision is different from generic classification, because when we are trying to find a certain object in a picture, we scan the image looking for specific patterns and their combinations. For example, when looking for a cat, we may first look for horizontal lines that can form whiskers, and then certain combinations of whiskers can tell us that it is actually a picture of a cat. What matters is the relative position and presence of certain patterns, not their exact position in the image.

To extract patterns, we will use the notion of convolutional filters.

4.2 Preparation and loading the libraries

Create an empty Python file and make sure the helper file pytorchcv.py used in the previous part is in the same directory.

Load the libraries; here we do use the load_mnist helper, which loads the dataset by setting global variables through builtins:

python
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np

from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset
load_mnist(batch_size=128)

4.3 Convolutional filters

A convolutional filter is like a small window that runs over each pixel of the image and computes a weighted average of the neighbouring pixels. It is defined by a matrix of weight coefficients. Let's look at two different convolutional filters applied to the MNIST handwritten digits:

python
plot_convolution(torch.tensor([[-1.,0.,1.],[-1.,0.,1.],[-1.,0.,1.]]),'Vertical edge filter')
plot_convolution(torch.tensor([[-1.,-1.,-1.],[0.,0.,0.],[1.,1.,1.]]),'Horizontal edge filter')


The plot_convolution function is defined in pytorchcv.py; what it does is described later. The first filter is called a vertical edge filter, and it is defined by the following matrix:

$A = \begin{pmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{pmatrix}$

When this filter passes over a relatively uniform field of pixels, all values add up to 0. But when it encounters a vertical edge in the image, it produces a high spike value. That is why in the images above vertical edges are represented by high and low values, while horizontal edges are averaged out. The opposite happens when the horizontal edge filter is applied: horizontal lines are amplified and vertical ones averaged out.

If we apply a 3×3 filter to an image of size 28×28, the image shrinks to 26×26, because the filter never goes over the image boundaries. In some cases, however, we may want to keep the image the same size, in which case each side of the image is padded with zeros. Both effects are illustrated in the sketch below.
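
A minimal sketch (an addition to the tutorial) illustrating both points with torch.nn.functional.conv2d: the vertical edge filter responds with a spike at a vertical edge, and padding=1 preserves the spatial size:

python
import torch
import torch.nn.functional as F

# a synthetic 1x1x28x28 image: left half 0, right half 1 (a single vertical edge)
img = torch.zeros(1, 1, 28, 28)
img[..., 14:] = 1.0

# the vertical edge filter from above, shaped (out_channels, in_channels, H, W)
kernel = torch.tensor([[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]]).view(1, 1, 3, 3)

out = F.conv2d(img, kernel)                 # no padding: 28x28 -> 26x26
out_pad = F.conv2d(img, kernel, padding=1)  # zero padding keeps 28x28

print(out.shape)              # torch.Size([1, 1, 26, 26])
print(out_pad.shape)          # torch.Size([1, 1, 28, 28])
print(out[0, 0, 0].unique())  # tensor([0., 3.]) - zero on flat regions, a spike at the edge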

In classical computer vision, multiple filters were applied to the image to generate features, which were then used by a machine-learning algorithm to build a classifier. In deep learning, however, we build networks that learn the best convolutional filters to solve the classification problem.

To do this, we introduce convolutional layers.

4.4 Convolutional layers

A convolutional layer is defined with the nn.Conv2d construction. We need to specify the following:

  • in_channels: the number of input channels. Here we deal with grayscale images, so the number of input channels is 1; colour images have 3 channels (RGB).
  • out_channels: the number of filters to use. Here we use 9 different filters, which gives the network plenty of opportunity to explore which filters work best.
  • kernel_size: the size of the sliding window, usually 3x3 or 5x5. The filter size is normally chosen experimentally, by trying different sizes and comparing the resulting accuracy.

The simplest CNN contains one convolutional layer. Given an input size of 28x28, after applying nine 5x5 filters we end up with a 9x24x24 tensor. Here, the result of each filter is represented by a different channel in the output (so the first dimension, 9, corresponds to the number of filters).

After the convolution, the tensor is flattened into a vector of size 5184 (9x24x24), and a linear layer with 10 outputs is added, corresponding to the 10 digits. We also use the relu activation function between the layers.

python
class OneConv(nn.Module):
    def __init__(self):
        super(OneConv, self).__init__()
        self.conv = nn.Conv2d(in_channels=1,out_channels=9,kernel_size=(5,5))
        self.flatten = nn.Flatten()
        self.fc = nn.Linear(5184,10)

    def forward(self, x):
        x = nn.functional.relu(self.conv(x))
        x = self.flatten(x)
        x = nn.functional.log_softmax(self.fc(x),dim=1)
        return x

net = OneConv()

summary(net,input_size=(1,1,28,28))

Output:

bash
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
OneConv                                  [1, 10]                   --
├─Conv2d: 1-1                            [1, 9, 24, 24]            234
├─Flatten: 1-2                           [1, 5184]                 --
├─Linear: 1-3                            [1, 10]                   51,850
==========================================================================================
Total params: 52,084
Trainable params: 52,084
Non-trainable params: 0
Total mult-adds (M): 0.19
==========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.04
Params size (MB): 0.21
Estimated Total Size (MB): 0.25
==========================================================================================

This network contains about 50k trainable parameters, compared to about 80k in the fully-connected multi-layer network. This lets us achieve good results even on smaller datasets, because convolutional networks generalize much better.

Note that the number of parameters in the convolutional layer is quite small, and it does not depend on the resolution of the image. The convolutional layer has 9 filters of size 5x5, so the number of weights is 9 × 5 × 5 = 225; adding 9 biases (one per filter) gives 225 + 9 = 234 parameters in total. Intuitively, a bias acts like a tuning knob that controls the overall offset of a filter's output; it helps the model learn patterns in the data even when those patterns deviate slightly from what the filter weights represent. These counts can be verified directly, as shown below.
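
A small sketch (an addition) that verifies these parameter counts directly on the OneConv model net defined above:

python
conv_params = sum(p.numel() for p in net.conv.parameters())
fc_params = sum(p.numel() for p in net.fc.parameters())
print(conv_params)  # 234    = 9*5*5 weights + 9 biases
print(fc_params)    # 51850  = 5184*10 weights + 10 biases
print(sum(p.numel() for p in net.parameters()))  # 52084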

4.5 Training

Compared with the fully-connected network of the previous unit, we achieve higher accuracy, and much faster:

python
hist = train(net,train_loader,test_loader,epochs=5)
plot_results(hist)
# Epoch  0, Train acc=0.943, Val acc=0.967, Train loss=0.001, Val loss=0.001
# Epoch  1, Train acc=0.976, Val acc=0.977, Train loss=0.001, Val loss=0.001
# Epoch  2, Train acc=0.983, Val acc=0.976, Train loss=0.000, Val loss=0.001
# Epoch  3, Train acc=0.988, Val acc=0.974, Train loss=0.000, Val loss=0.001
# Epoch  4, Train acc=0.990, Val acc=0.973, Train loss=0.000, Val loss=0.001

Convolutional layers let us extract certain image patterns from the image, so that the final classifier is built on top of those features. However, we can apply the same idea of extracting patterns inside the feature space, by stacking another convolutional layer on top of the first one. We will learn about multi-layer convolutional networks in the next chapter.

4.6 Complete code

python
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np

from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset

load_mnist(batch_size=128)

plot_convolution(torch.tensor([[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]]), 'Vertical edge filter')
plot_convolution(torch.tensor([[-1., -1., -1.], [0., 0., 0.], [1., 1., 1.]]), 'Horizontal edge filter')


class OneConv(nn.Module):
    def __init__(self):
        super(OneConv, self).__init__()
        self.conv = nn.Conv2d(in_channels=1, out_channels=9, kernel_size=(5, 5))
        self.flatten = nn.Flatten()
        self.fc = nn.Linear(5184, 10)

    def forward(self, x):
        x = nn.functional.relu(self.conv(x))
        x = self.flatten(x)
        x = nn.functional.log_softmax(self.fc(x), dim=1)
        return x


net = OneConv()

summary(net, input_size=(1, 1, 28, 28))

hist = train(net, train_loader, test_loader, epochs=5)
plot_results(hist)

plt.show()

5 Training a multi-layer convolutional neural network

5.1 Multi-layer CNNs


In the previous unit we learned about convolutional filters, which can extract patterns from images. For our MNIST classifier we used nine 5x5 filters, resulting in a 9x24x24 tensor.

We can use the same convolutional idea to extract higher-level patterns in the image. For example, the rounded edges of digits such as 8 and 9 can be composed of a number of smaller strokes. To recognize these patterns, we can build another layer of convolutional filters on top of the result of the first layer.

Preparation

Create a new file multilayer_CNN.py and load the libraries:

python
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np

from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset
load_mnist(batch_size=128)

5.2 Pooling layers

This gives a hierarchy of extracted patterns: the first convolutional layer looks for primitive patterns, such as horizontal or vertical lines; the next layer of convolutions on top of them looks for higher-level patterns, such as primitive shapes; and further convolutional layers can combine those shapes into parts of the image, up to the final object we are trying to classify.

When doing so, we also apply one trick: reducing the spatial size of the image. Once we have detected that a horizontal stroke is present within a sliding window, it is not so important at exactly which pixel it occurred. So we can "scale down" the size of the image, using one of the following pooling layers (a short demo follows the list):

  • Average Pooling takes a sliding window (for example, 2x2 pixels) and computes the average of the values within the window;
  • Max Pooling replaces the window with its maximum value. The idea behind max pooling is to detect the presence of a certain pattern within the sliding window.
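
A minimal sketch (an addition) showing both pooling operations on a small 4x4 input:

python
import torch
import torch.nn as nn

x = torch.tensor([[[[1., 2., 3., 0.],
                    [4., 5., 6., 1.],
                    [0., 1., 2., 3.],
                    [1., 0., 1., 2.]]]])  # shape (1, 1, 4, 4)

print(nn.MaxPool2d(2)(x))  # tensor([[[[5., 6.], [1., 3.]]]])
print(nn.AvgPool2d(2)(x))  # tensor([[[[3.0000, 2.5000], [0.5000, 2.0000]]]])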

So a typical CNN consists of several convolutional layers, with pooling layers in between to decrease the dimensions of the image. We also increase the number of filters, because as the patterns become more high-level, there are more possible interesting combinations to look for.

Because of the decreasing spatial dimensions and increasing feature/filter dimensions, this architecture is also called a pyramid architecture.

An example of a two-layer CNN:

python
class MultiLayerCNN(nn.Module):
    def __init__(self):
        super(MultiLayerCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(10, 20, 5)
        self.fc = nn.Linear(320, 10)

    def forward(self, x):
        x = self.pool(nn.functional.relu(self.conv1(x)))
        x = self.pool(nn.functional.relu(self.conv2(x)))
        x = x.view(-1, 320)
        x = nn.functional.log_softmax(self.fc(x), dim=1)
        return x


net = MultiLayerCNN()
summary(net, input_size=(1, 1, 28, 28))

Output:

bash
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
MultiLayerCNN                            [1, 10]                   --
├─Conv2d: 1-1                            [1, 10, 24, 24]           260
├─MaxPool2d: 1-2                         [1, 10, 12, 12]           --
├─Conv2d: 1-3                            [1, 20, 8, 8]             5,020
├─MaxPool2d: 1-4                         [1, 20, 4, 4]             --
├─Linear: 1-5                            [1, 10]                   3,210
==========================================================================================
Total params: 8,490
Trainable params: 8,490
Non-trainable params: 0
Total mult-adds (M): 0.47
==========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.06
Params size (MB): 0.03
Estimated Total Size (MB): 0.09
==========================================================================================

A few things to note about this definition:

  • Instead of a Flatten layer, the code flattens the tensor inside the forward function using the view function, which is similar to numpy's reshape. Because a flattening layer has no trainable weights, we do not need to create a separate layer instance in our class; we can just use a function from the torch.nn.functional namespace (a short equivalence check follows this list).
  • The model uses only one instance of the pooling layer, again because it contains no trainable parameters, so a single instance can safely be reused.
  • The number of trainable parameters (~8.5k) is significantly smaller than in the previous cases (80k for the perceptron, 50k for the one-layer CNN). This is because convolutional layers in general have few parameters, independently of the input image size, and because pooling significantly reduces the dimensionality of the image before the final linear layer is applied. A small number of parameters has a positive impact on our model, because it helps prevent overfitting even on smaller dataset sizes.
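
A tiny sketch (an addition) confirming that view and nn.Flatten agree here:

python
import torch
import torch.nn as nn

x = torch.randn(4, 20, 4, 4)  # a batch of 4 samples with 20 channels of 4x4 features
print(x.view(-1, 320).shape)  # torch.Size([4, 320])
print(torch.equal(x.view(-1, 320), nn.Flatten()(x)))  # True

Training then proceeds as before: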
python
hist = train(net,train_loader,test_loader,epochs=5)
# Epoch  0, Train acc=0.949, Val acc=0.980, Train loss=0.001, Val loss=0.000

A multi-layer CNN achieves higher accuracy, and faster: after just 1 or 2 epochs. This means that the more complex network architecture needs much less data to figure out what is going on and to extract generic patterns from the images.

The full code of multilayer_CNN.py:

python
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np

from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset

load_mnist(batch_size=128)


class MultiLayerCNN(nn.Module):
    def __init__(self):
        super(MultiLayerCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(10, 20, 5)
        self.fc = nn.Linear(320, 10)

    def forward(self, x):
        x = self.pool(nn.functional.relu(self.conv1(x)))
        x = self.pool(nn.functional.relu(self.conv2(x)))
        x = x.view(-1, 320)
        x = nn.functional.log_softmax(self.fc(x), dim=1)
        return x


net = MultiLayerCNN()
summary(net, input_size=(1, 1, 28, 28))

hist = train(net, train_loader, test_loader, epochs=5)

5.3 Playing with real images from the CIFAR-10 dataset

Let's explore a more advanced image dataset, called CIFAR-10. It contains 60k 32x32 colour images, divided into 10 classes.

First, download the archive from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz and unpack it into the data folder next to the Python file.

Preparing and displaying the dataset

Create a new file cifar.py and add the following code to display the dataset:

python
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np

from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset

transform = torchvision.transforms.Compose(
    [torchvision.transforms.ToTensor(),
     torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=14, shuffle=True)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=14, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

display_dataset(trainset, classes=classes)

plt.show()

A well-known architecture for CIFAR-10 is called LeNet. It follows the same principles outlined above. Since all images are colour, the input tensor size is 3×32×32, and the 5×5 convolutional filters are also applied across the colour dimension, which means the convolution kernel matrix has size 3×5×5.

We also make one simplification to this model: log_softmax is not used as the output activation function, and the model just returns the output of the last fully-connected layer. In this case the CrossEntropyLoss loss function is used to optimize the model; it combines LogSoftmax and NLLLoss in a single step, as the sketch below illustrates.
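
A minimal sketch (an addition) showing that CrossEntropyLoss applied to raw logits matches NLLLoss applied to their log_softmax:

python
import torch
import torch.nn as nn

logits = torch.randn(4, 10)          # raw network outputs for a batch of 4
target = torch.tensor([3, 1, 0, 7])  # class labels

loss_ce = nn.CrossEntropyLoss()(logits, target)
loss_nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), target)
print(torch.allclose(loss_ce, loss_nll))  # True

The LeNet model is defined as follows: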

python
class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.conv3 = nn.Conv2d(16,120,5)
        self.flat = nn.Flatten()
        self.fc1 = nn.Linear(120,64)
        self.fc2 = nn.Linear(64,10)

    def forward(self, x):
        x = self.pool(nn.functional.relu(self.conv1(x)))
        x = self.pool(nn.functional.relu(self.conv2(x)))
        x = nn.functional.relu(self.conv3(x))
        x = self.flat(x)
        x = nn.functional.relu(self.fc1(x))
        x = self.fc2(x)
        return x

net = LeNet()

summary(net,input_size=(1,3,32,32))

Output:

python
==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
LeNet                                    [1, 10]                   --
├─Conv2d: 1-1                            [1, 6, 28, 28]            456
├─MaxPool2d: 1-2                         [1, 6, 14, 14]            --
├─Conv2d: 1-3                            [1, 16, 10, 10]           2,416
├─MaxPool2d: 1-4                         [1, 16, 5, 5]             --
├─Conv2d: 1-5                            [1, 120, 1, 1]            48,120
├─Flatten: 1-6                           [1, 120]                  --
├─Linear: 1-7                            [1, 64]                   7,744
├─Linear: 1-8                            [1, 10]                   650
==========================================================================================
Total params: 59,386
Trainable params: 59,386
Non-trainable params: 0
Total mult-adds (M): 0.66
==========================================================================================
Input size (MB): 0.01
Forward/backward pass size (MB): 0.05
Params size (MB): 0.24
Estimated Total Size (MB): 0.30
==========================================================================================

5.4 Training

Training this network properly will take a significant amount of time, and is best done on a GPU-enabled machine.

python
opt = torch.optim.SGD(net.parameters(),lr=0.001,momentum=0.9)
hist = train(net, trainloader, testloader, epochs=3, optimizer=opt, loss_fn=nn.CrossEntropyLoss())

# Epoch  0, Train acc=0.261, Val acc=0.388, Train loss=0.143, Val loss=0.121
# Epoch  1, Train acc=0.437, Val acc=0.491, Train loss=0.110, Val loss=0.101
# Epoch  2, Train acc=0.508, Val acc=0.522, Train loss=0.097, Val loss=0.094

The accuracy we achieve after 3 epochs of training does not seem high. However, remember that blind guessing would only give us 10% accuracy, and that our problem is actually significantly more difficult than MNIST digit classification. Getting above 50% accuracy in such a short training time seems like a good achievement.

5.5 Complete code

The code of cifar.py:

python
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np

from pytorchcv import load_mnist, train, plot_results, plot_convolution, display_dataset

transform = torchvision.transforms.Compose(
    [torchvision.transforms.ToTensor(),
     torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=14, shuffle=True)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=14, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

display_dataset(trainset, classes=classes)

# plt.show()  # display the dataset


class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.conv3 = nn.Conv2d(16, 120, 5)
        self.flat = nn.Flatten()
        self.fc1 = nn.Linear(120, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        x = self.pool(nn.functional.relu(self.conv1(x)))
        x = self.pool(nn.functional.relu(self.conv2(x)))
        x = nn.functional.relu(self.conv3(x))
        x = self.flat(x)
        x = nn.functional.relu(self.fc1(x))
        x = self.fc2(x)
        return x


net = LeNet()

summary(net, input_size=(1, 3, 32, 32))

opt = torch.optim.SGD(net.parameters(),lr=0.001,momentum=0.9)
hist = train(net, trainloader, testloader, epochs=3, optimizer=opt, loss_fn=nn.CrossEntropyLoss())

6 Transfer learning with pre-trained networks


6.1 Pre-trained models and transfer learning

Training CNNs can take a lot of time, and a lot of data is required for the task. Much of that time, however, is spent learning the best low-level filters that the network uses to extract patterns from images. A natural question arises: can we use a neural network trained on one dataset and adapt it to classify different images, without a full training process?

This approach is called transfer learning, because some knowledge is transferred from one neural network model to another. In transfer learning, we typically start with a pre-trained model that has been trained on some large image dataset, such as ImageNet. Those models are already good at extracting different features from generic images, and in many cases just building a classifier on top of those extracted features can give very good results.

This section uses the same helper file pytorchcv.py that was listed in full in section 3; it already contains the functions needed here (train_long, check_image_dir, common_transform, and load_cats_dogs_dataset).


Version requirements:

torchvision==0.13.0
torch==1.12.0

Create a new file and load the libraries:

python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np
import os

from pytorchcv import train, plot_results, display_dataset, train_long, check_image_dir

6.2 The Cats vs. Dogs dataset

In this unit we solve a real-life problem of classifying images of cats and dogs, using the Kaggle Cats vs. Dogs Dataset. Download the archive and put it into the data directory.

Dataset URL: https://www.microsoft.com/en-us/download/details.aspx?id=54765

Add the following code to unpack it:

python
import zipfile
if not os.path.exists('data/PetImages'):
    with zipfile.ZipFile('data/kagglecatsanddogs_5340.zip', 'r') as zip_ref:
        zip_ref.extractall('data')

Unfortunately, some image files in the dataset are corrupt, and we need a quick cleanup to remove them. So as not to clutter this tutorial, the code that validates the dataset lives in the helper module and is called from here: check_image_dir goes through the whole dataset image by image, tries to load each image to check whether it can be loaded correctly, and deletes all corrupt images.

python
check_image_dir('data/PetImages/Cat/*.jpg')
check_image_dir('data/PetImages/Dog/*.jpg')

The output is:

Corrupt image: data/PetImages/Cat\666.jpg
Corrupt image: data/PetImages/Dog\11702.jpg

Next, we load the images into a PyTorch dataset, converting them to tensors and doing some normalization. We define the image transformation pipeline by composing several basic transformations with Compose:

  • Resize resizes the image to 256x256.
  • CenterCrop takes the central 224x224 part of the image. The pre-trained VGG network was trained on 224x224 images, so we need to bring our dataset to that size.
  • ToTensor normalizes pixel intensities into the 0 to 1 range and converts the image to a PyTorch tensor.
  • The std_normalize transform is an additional normalization step specific to the VGG network. When the VGG network was trained, the original ImageNet images had the mean colour intensity subtracted and were divided by the colour standard deviation (both per colour channel), so we apply the same transformation to our dataset to have all images processed correctly.

There are a few reasons we resize images to size 256 and then crop to 224 pixels:

  • We wanted to demonstrate more of the possible transformations.
  • Pets are usually in the central part of the image, so focusing a bit more on the central part improves classification.
  • Since some images are not square, we end up with padded areas that do not contain any useful image data, and cropping the image slightly reduces that padding.
python
std_normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                          std=[0.229, 0.224, 0.225])
trans = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(), 
        std_normalize])
dataset = torchvision.datasets.ImageFolder('data/PetImages',transform=trans)
trainset, testset = torch.utils.data.random_split(dataset,[20000,len(dataset)-20000])

display_dataset(dataset)

6.3 Pre-trained models

Many different pre-trained models are available inside the torchvision module, and even more can be found on the internet. Let's see how the simplest VGG-16 model can be loaded and used. First, we download the VGG-16 model weights and store them locally:

  1. Create the folder: mkdir -p models
  2. Download the model: wget -P models https://github.com/MicrosoftDocs/pytorchfundamentals/raw/main/computer-vision-pytorch/vgg16-397923af.pth

Next, we load the weights into the pre-trained VGG-16 model with the load_state_dict method, and set the model to inference mode with eval:

python
file_path = 'models/vgg16-397923af.pth'

vgg = torchvision.models.vgg16()
vgg.load_state_dict(torch.load(file_path))
vgg.eval()

sample_image = dataset[0][0].unsqueeze(0)
res = vgg(sample_image)
print(res[0].argmax())

Output:

tensor(282)

The result we received is the number of an ImageNet class, which can be looked up online. We can use the following code to automatically load this class table and return the result:

python
import json, requests
class_map = json.loads(requests.get("https://raw.githubusercontent.com/MicrosoftDocs/pytorchfundamentals/main/computer-vision-pytorch/imagenet_class_index.json").text)
class_map = { int(k) : v for k,v in class_map.items() }

class_map[res[0].argmax().item()]

The output is:

['n02123045', 'tabby']

Printing out the architecture of the VGG-16 network:

==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
VGG                                      --                        --
├─Sequential: 1-1                        [1, 512, 7, 7]            --
│    └─Conv2d: 2-1                       [1, 64, 224, 224]         1,792
│    └─ReLU: 2-2                         [1, 64, 224, 224]         --
│    └─Conv2d: 2-3                       [1, 64, 224, 224]         36,928
│    └─ReLU: 2-4                         [1, 64, 224, 224]         --
│    └─MaxPool2d: 2-5                    [1, 64, 112, 112]         --
│    └─Conv2d: 2-6                       [1, 128, 112, 112]        73,856
│    └─ReLU: 2-7                         [1, 128, 112, 112]        --
│    └─Conv2d: 2-8                       [1, 128, 112, 112]        147,584
│    └─ReLU: 2-9                         [1, 128, 112, 112]        --
│    └─MaxPool2d: 2-10                   [1, 128, 56, 56]          --
│    └─Conv2d: 2-11                      [1, 256, 56, 56]          295,168
│    └─ReLU: 2-12                        [1, 256, 56, 56]          --
│    └─Conv2d: 2-13                      [1, 256, 56, 56]          590,080
│    └─ReLU: 2-14                        [1, 256, 56, 56]          --
│    └─Conv2d: 2-15                      [1, 256, 56, 56]          590,080
│    └─ReLU: 2-16                        [1, 256, 56, 56]          --
│    └─MaxPool2d: 2-17                   [1, 256, 28, 28]          --
│    └─Conv2d: 2-18                      [1, 512, 28, 28]          1,180,160
│    └─ReLU: 2-19                        [1, 512, 28, 28]          --
│    └─Conv2d: 2-20                      [1, 512, 28, 28]          2,359,808
│    └─ReLU: 2-21                        [1, 512, 28, 28]          --
│    └─Conv2d: 2-22                      [1, 512, 28, 28]          2,359,808
│    └─ReLU: 2-23                        [1, 512, 28, 28]          --
│    └─MaxPool2d: 2-24                   [1, 512, 14, 14]          --
│    └─Conv2d: 2-25                      [1, 512, 14, 14]          2,359,808
│    └─ReLU: 2-26                        [1, 512, 14, 14]          --
│    └─Conv2d: 2-27                      [1, 512, 14, 14]          2,359,808
│    └─ReLU: 2-28                        [1, 512, 14, 14]          --
│    └─Conv2d: 2-29                      [1, 512, 14, 14]          2,359,808
│    └─ReLU: 2-30                        [1, 512, 14, 14]          --
│    └─MaxPool2d: 2-31                   [1, 512, 7, 7]            --
├─AdaptiveAvgPool2d: 1-2                 [1, 512, 7, 7]            --
├─Sequential: 1-3                        [1, 1000]                 --
│    └─Linear: 2-32                      [1, 4096]                 102,764,544
│    └─ReLU: 2-33                        [1, 4096]                 --
│    └─Dropout: 2-34                     [1, 4096]                 --
│    └─Linear: 2-35                      [1, 4096]                 16,781,312
│    └─ReLU: 2-36                        [1, 4096]                 --
│    └─Dropout: 2-37                     [1, 4096]                 --
│    └─Linear: 2-38                      [1, 1000]                 4,097,000
==========================================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
Total mult-adds (G): 15.48
==========================================================================================
Input size (MB): 0.60
Forward/backward pass size (MB): 108.45
Params size (MB): 553.43
Estimated Total Size (MB): 662.49
==========================================================================================

Besides the layers we have already seen, there is another layer type called Dropout. These layers act as a regularization technique. Regularization makes slight modifications to the learning algorithm so the model generalizes better. During training, a dropout layer discards a certain proportion of the neurons in the previous layer (p=0.5 in VGG-16's classifier, as shown above) and trains without them. This helps get the optimization process out of local minima, and distributes decision power between different neural paths, improving the overall stability of the network. A short demo of dropout's train/eval behaviour follows.
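
A minimal sketch (an addition) showing that Dropout is active in training mode and disabled in eval mode:

python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(10)

drop.train()
print(drop(x))  # roughly half the entries zeroed, the rest scaled by 1/(1-p) = 2

drop.eval()
print(drop(x))  # identity: tensor([1., 1., ..., 1.])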

6.4 GPU computations

Deep neural networks, such as VGG-16 and other more modern architectures, require significant computational power to run. It makes sense to use GPU acceleration if it is available. In order to do so, we need to explicitly move all tensors involved in the computation to the GPU. The usual practice is to check the availability of a GPU in the code, and to define a device variable that points to the computational device, either GPU or CPU:

python
device = 'cuda' if torch.cuda.is_available() else 'cpu'

print('Doing computations on device = {}'.format(device))

vgg.to(device)
sample_image = sample_image.to(device)

vgg(sample_image).argmax()

6.5 Extracting VGG features

If we want to use VGG-16 to extract features from our images, we need the model without the final classification layers. We can use the vgg.features submodule to obtain this "feature extractor":

python
res = vgg.features(sample_image).cpu()
plt.figure(figsize=(15,3))
plt.imshow(res.detach().view(-1,512))
print(res.size())

The output is:

torch.Size([1, 512, 7, 7])

The dimension of the feature tensor is 512x7x7, but in order to visualize it we have to reshape it to 2D form.

Now let's try to see whether these features can be used to classify images. Let's manually take some portion of the images (800 in our case) and pre-compute their feature vectors. We store the result in one big tensor called feature_tensor, and the labels in label_tensor:

python
bs = 8
dl = torch.utils.data.DataLoader(dataset,batch_size=bs,shuffle=True)
num = bs*100
feature_tensor = torch.zeros(num,512*7*7).to(device)
label_tensor = torch.zeros(num).to(device)
i = 0
for x,l in dl:
    with torch.no_grad():
        f = vgg.features(x.to(device))
        feature_tensor[i:i+bs] = f.view(bs,-1)
        label_tensor[i:i+bs] = l
        i+=bs
        print('.',end='')
        if i>=num:
            break

Now we can define vgg_dataset, which takes its data from this tensor, split it into training and test sets with the random_split function, and train a small one-layer dense classifier network on top of the extracted features:

python
vgg_dataset = torch.utils.data.TensorDataset(feature_tensor,label_tensor.to(torch.long))
train_ds, test_ds = torch.utils.data.random_split(vgg_dataset,[700,100])

train_loader = torch.utils.data.DataLoader(train_ds,batch_size=32)
test_loader = torch.utils.data.DataLoader(test_ds,batch_size=32)

net = torch.nn.Sequential(torch.nn.Linear(512*7*7,2),torch.nn.LogSoftmax(dim=1)).to(device)

history = train(net,train_loader,test_loader)

Output:

Epoch  0, Train acc=0.879, Val acc=0.990, Train loss=0.110, Val loss=0.007
Epoch  1, Train acc=0.981, Val acc=0.980, Train loss=0.015, Val loss=0.021
Epoch  2, Train acc=0.999, Val acc=0.990, Train loss=0.001, Val loss=0.002
Epoch  3, Train acc=1.000, Val acc=0.980, Train loss=0.000, Val loss=0.002
Epoch  4, Train acc=1.000, Val acc=0.980, Train loss=0.000, Val loss=0.002
Epoch  5, Train acc=1.000, Val acc=0.980, Train loss=0.000, Val loss=0.002
Epoch  6, Train acc=1.000, Val acc=0.980, Train loss=0.000, Val loss=0.002
Epoch  7, Train acc=1.000, Val acc=0.980, Train loss=0.000, Val loss=0.002
Epoch  8, Train acc=1.000, Val acc=0.980, Train loss=0.000, Val loss=0.002
Epoch  9, Train acc=1.000, Val acc=0.980, Train loss=0.000, Val loss=0.002

The result is great: we can distinguish between a cat and a dog with almost 98% accuracy! However, we have only tested this approach on a small subset of all images, because manual feature extraction seems to take a lot of time.

6.6 Transfer learning using one VGG network

We can also avoid manually pre-computing the features by using the original VGG-16 network as a whole during training. Let's look at the VGG-16 object structure:

python
print(vgg)

The output is:

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

The network contains the following components:

  • the feature extractor (features), composed of a number of convolutional and pooling layers
  • the average pooling layer (avgpool)
  • the final classifier, consisting of several dense layers, which turns 25088 input features into 1000 classes (the number of classes in ImageNet)

To train an end-to-end model that will classify our dataset, we need to:

  • **Replace the final classifier** with one that produces the required number of classes. In our case, we can use a Linear layer with 25088 inputs and 2 output neurons.
  • **Freeze the weights of the convolutional feature extractor** so that they are not trained. It is recommended to do this freezing first, because otherwise the untrained classifier layers could destroy the original pretrained weights of the convolutional extractor. The weights can be frozen by setting the requires_grad property of all parameters to False, as shown in the code below.
python 复制代码
vgg.classifier = torch.nn.Linear(25088,2).to(device)

for x in vgg.features.parameters():
    x.requires_grad = False

summary(vgg,(1, 3,244,244))

The output is:

==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
VGG                                      --                        --
├─Sequential: 1-1                        [1, 512, 7, 7]            --
│    └─Conv2d: 2-1                       [1, 64, 244, 244]         (1,792)
│    └─ReLU: 2-2                         [1, 64, 244, 244]         --
│    └─Conv2d: 2-3                       [1, 64, 244, 244]         (36,928)
│    └─ReLU: 2-4                         [1, 64, 244, 244]         --
│    └─MaxPool2d: 2-5                    [1, 64, 122, 122]         --
│    └─Conv2d: 2-6                       [1, 128, 122, 122]        (73,856)
│    └─ReLU: 2-7                         [1, 128, 122, 122]        --
│    └─Conv2d: 2-8                       [1, 128, 122, 122]        (147,584)
│    └─ReLU: 2-9                         [1, 128, 122, 122]        --
│    └─MaxPool2d: 2-10                   [1, 128, 61, 61]          --
│    └─Conv2d: 2-11                      [1, 256, 61, 61]          (295,168)
│    └─ReLU: 2-12                        [1, 256, 61, 61]          --
│    └─Conv2d: 2-13                      [1, 256, 61, 61]          (590,080)
│    └─ReLU: 2-14                        [1, 256, 61, 61]          --
│    └─Conv2d: 2-15                      [1, 256, 61, 61]          (590,080)
│    └─ReLU: 2-16                        [1, 256, 61, 61]          --
│    └─MaxPool2d: 2-17                   [1, 256, 30, 30]          --
│    └─Conv2d: 2-18                      [1, 512, 30, 30]          (1,180,160)
│    └─ReLU: 2-19                        [1, 512, 30, 30]          --
│    └─Conv2d: 2-20                      [1, 512, 30, 30]          (2,359,808)
│    └─ReLU: 2-21                        [1, 512, 30, 30]          --
│    └─Conv2d: 2-22                      [1, 512, 30, 30]          (2,359,808)
│    └─ReLU: 2-23                        [1, 512, 30, 30]          --
│    └─MaxPool2d: 2-24                   [1, 512, 15, 15]          --
│    └─Conv2d: 2-25                      [1, 512, 15, 15]          (2,359,808)
│    └─ReLU: 2-26                        [1, 512, 15, 15]          --
│    └─Conv2d: 2-27                      [1, 512, 15, 15]          (2,359,808)
│    └─ReLU: 2-28                        [1, 512, 15, 15]          --
│    └─Conv2d: 2-29                      [1, 512, 15, 15]          (2,359,808)
│    └─ReLU: 2-30                        [1, 512, 15, 15]          --
│    └─MaxPool2d: 2-31                   [1, 512, 7, 7]            --
├─AdaptiveAvgPool2d: 1-2                 [1, 512, 7, 7]            --
├─Linear: 1-3                            [1, 2]                    50,178
==========================================================================================
Total params: 14,764,866
Trainable params: 50,178
Non-trainable params: 14,714,688
Total mult-adds (G): 17.99
==========================================================================================
Input size (MB): 0.71
Forward/backward pass size (MB): 128.13
Params size (MB): 59.06
Estimated Total Size (MB): 187.91
==========================================================================================

As you can see from the summary, this model contains about 15 million parameters in total, but only about 50 thousand of them (the weights of the classification layer) are trainable. That is good, because fine-tuning fewer parameters requires fewer training examples.
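
We can verify those numbers directly by counting parameters; a quick check, assuming the vgg object from above:

python 复制代码
total = sum(p.numel() for p in vgg.parameters())
trainable = sum(p.numel() for p in vgg.parameters() if p.requires_grad)
print(f'Total: {total:,}, trainable: {trainable:,}')  # expected: ~14.76M total, ~50K trainable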

Now let's train the model using our original dataset. This process takes a long time, so we will use the train_long function, which prints intermediate results without waiting for the end of the epoch. It is highly recommended to run this training on a GPU-enabled machine!

python 复制代码
trainset, testset = torch.utils.data.random_split(dataset,[20000,len(dataset)-20000])
train_loader = torch.utils.data.DataLoader(trainset,batch_size=16)
test_loader = torch.utils.data.DataLoader(testset,batch_size=16)

train_long(vgg,train_loader,test_loader,loss_fn=torch.nn.CrossEntropyLoss(),epochs=1,print_freq=90)

Output:

Epoch 0, minibatch 180: train acc = 0.9582182320441989, train loss = 0.12481052967724879
Epoch 0, minibatch 270: train acc = 0.9587177121771218, train loss = 0.14185787918822793
Epoch 0, minibatch 360: train acc = 0.9634695290858726, train loss = 0.14566257719848294
Epoch 0, minibatch 450: train acc = 0.966879157427938, train loss = 0.13402751914149114
Epoch 0, minibatch 540: train acc = 0.9686922365988909, train loss = 0.13931148902766144
Epoch 0, minibatch 630: train acc = 0.9694928684627575, train loss = 0.1386710044510202
Epoch 0, minibatch 720: train acc = 0.970613730929265, train loss = 0.13363790313678375
Epoch 0, minibatch 810: train acc = 0.9709463625154131, train loss = 0.1342217084364885
Epoch 0, minibatch 900: train acc = 0.9721143174250833, train loss = 0.13233261023721474
Epoch 0, minibatch 990: train acc = 0.9726286579212916, train loss = 0.1334670727957871
Epoch 0, minibatch 1080: train acc = 0.9733464384828863, train loss = 0.13777193110039893
Epoch 0, minibatch 1170: train acc = 0.9734735269000854, train loss = 0.14239378162778207
Epoch 0 done, validation acc = 0.9671868747499, validation loss = 0.25287964306816474

It looks like we have obtained a reasonably accurate cats vs. dogs classifier! Let's save it for future use!

python 复制代码
torch.save(vgg,'data/cats_dogs.pth')

We can then load the model from the file at any time. You may find this useful if the next experiment destroys the model - you won't have to restart from scratch.

python 复制代码
vgg = torch.load('data/cats_dogs.pth')
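
Note that torch.save(vgg,...) pickles the entire model object. A commonly preferred alternative (an optional sketch, not part of the original tutorial) is to save only the weights via the state dict, which is more robust to code changes:

python 复制代码
torch.save(vgg.state_dict(), 'data/cats_dogs_weights.pth')  # weights only

# later: rebuild the same architecture first, then load the weights into it
vgg.load_state_dict(torch.load('data/cats_dogs_weights.pth'))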

6.7 Fine-Tuning Transfer Learning

In the previous section, we trained the final classifier layer to classify images from our own dataset. However, we did not retrain the feature extractor, so the model relied on the features it had learned on the ImageNet data. If your objects look visually different from ordinary ImageNet images, this combination of features might not work well. Thus it makes sense to start training the convolutional layers as well.

To do that, we can unfreeze the convolutional filter parameters that we froze earlier.

**Note:** It is important that you first freeze the parameters and train for a few epochs to stabilize the weights in the classification layer. If you immediately start end-to-end training with unfrozen parameters, large errors are likely to destroy the pretrained weights in the convolutional layers.

python 复制代码
for x in vgg.features.parameters():
    x.requires_grad = True

After unfreezing, we can train for a few more epochs. You may also choose a lower learning rate to minimize the impact on the pretrained weights. However, even with a lower learning rate, you can expect the accuracy to drop at the beginning of training, until it eventually reaches a level slightly above that of the fixed-weights scenario.

**Note:** This training is much slower, because we need to propagate gradients back through many layers of the network! You may want to watch the trend over the first few minibatches and then stop the computation.

python 复制代码
train_long(vgg,train_loader,test_loader,loss_fn=torch.nn.CrossEntropyLoss(),epochs=1,print_freq=90,lr=0.0001)

Output:

Epoch 0, minibatch 0: train acc = 1.0, train loss = 0.0
Epoch 0, minibatch 90: train acc = 0.8990384615384616, train loss = 0.2978392171335744
Epoch 0, minibatch 180: train acc = 0.9060773480662984, train loss = 0.1658294214069514
Epoch 0, minibatch 270: train acc = 0.9102859778597786, train loss = 0.11819224340009514
Epoch 0, minibatch 360: train acc = 0.9191481994459834, train loss = 0.09244130522920814
Epoch 0, minibatch 450: train acc = 0.9261363636363636, train loss = 0.07583886292451236
Epoch 0, minibatch 540: train acc = 0.928373382624769, train loss = 0.06537413817456822
Epoch 0, minibatch 630: train acc = 0.9318541996830428, train loss = 0.057419379426257924
Epoch 0, minibatch 720: train acc = 0.9361130374479889, train loss = 0.05114534460059813
Epoch 0, minibatch 810: train acc = 0.938347718865598, train loss = 0.04657612246737968
Epoch 0, minibatch 900: train acc = 0.9407602663706992, train loss = 0.04258851655712403
Epoch 0, minibatch 990: train acc = 0.9431130171543896, train loss = 0.03927870595491257
Epoch 0, minibatch 1080: train acc = 0.945536540240518, train loss = 0.03652716609309053
Epoch 0, minibatch 1170: train acc = 0.9463065755764304, train loss = 0.03445258006186286
Epoch 0 done, validation acc = 0.974389755902361, validation loss = 0.005457923144233279
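
A common refinement at this stage (an optional sketch, not part of the original tutorial) is to give the two parts of the network different learning rates, so that the pretrained filters move more slowly than the freshly trained classifier:

python 复制代码
optimizer = torch.optim.Adam([
    {'params': vgg.features.parameters(), 'lr': 1e-5},    # pretrained filters: tiny steps
    {'params': vgg.classifier.parameters(), 'lr': 1e-4},  # new classifier: larger steps
])
train_long(vgg, train_loader, test_loader, optimizer=optimizer,
           loss_fn=torch.nn.CrossEntropyLoss(), epochs=1, print_freq=90)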

6.8 Other CV Models

VGG-16 is one of the simplest computer vision architectures. The torchvision package provides many more pretrained networks. The most frequently used among them are the ResNet architectures, developed by Microsoft, and Inception, developed by Google. For example, let's explore the architecture of the simplest ResNet-18 model (ResNet is a family of models with different depths; you can try ResNet-152 to see what a really deep model looks like):

python 复制代码
resnet = torchvision.models.resnet18()
print(resnet)

The output is:

ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=1000, bias=True)
)

As you can see, the model contains the same building blocks: a feature extractor and a final classifier (fc). This allows us to use it for transfer learning in exactly the same way we have been using VGG-16. You can try experimenting with the code above, using different ResNet models as the base model, and see how the accuracy changes.
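
A minimal sketch of that experiment, assuming the train_loader, test_loader and device objects from above (the 512 is ResNet-18's fc in_features; deeper ResNets use 2048, and the weights argument requires torchvision >= 0.13):

python 复制代码
resnet = torchvision.models.resnet18(weights='DEFAULT')  # pretrained ImageNet weights
for p in resnet.parameters():
    p.requires_grad = False           # freeze the feature extractor
resnet.fc = torch.nn.Linear(512, 2)   # new trainable head for cats vs. dogs
resnet = resnet.to(device)
train_long(resnet, train_loader, test_loader,
           loss_fn=torch.nn.CrossEntropyLoss(), epochs=1, print_freq=90)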

6.9 Batch Normalization

This network also contains another type of layer: batch normalization. The idea of batch normalization is to bring the values flowing through the neural network into the right interval. Neural networks usually work best when all values are within the range [-1,1] or [0,1], which is why we scale/normalize the input data accordingly. However, while training a deep network, values can drift far outside this range, which makes training problematic. A batch normalization layer computes the mean and standard deviation of all values in the current minibatch (per channel, in the 2D case) and uses them to normalize the signal before passing it to the next network layer. This significantly improves the stability of deep networks.
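
A small sketch of this behavior (the synthetic tensor and its shift/scale are arbitrary, chosen only to make the normalization visible):

python 复制代码
import torch
import torch.nn as nn

x = torch.randn(8, 3, 4, 4) * 10 + 5    # a minibatch whose values are far outside [-1, 1]
bn = nn.BatchNorm2d(3)                   # one mean/std pair per channel
bn.train()                               # use minibatch statistics, not running averages
y = bn(x)
print(x.mean().item(), x.std().item())   # roughly 5 and 10
print(y.mean().item(), y.std().item())   # roughly 0 and 1 after normalization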

6.10 Summary

Using transfer learning, we were able to quickly put together a classifier for our custom object-classification task and achieve high accuracy. However, this example was not entirely fair, because the original VGG-16 network was pretrained to recognize cats and dogs, and so we were mostly reusing patterns already present in the network. You can expect lower accuracy on more exotic domain-specific objects, such as parts on a production line or different tree leaves.

You can see that the more complex tasks we are now solving require more computational power and cannot easily be solved on a CPU. In the next unit, we will try a more lightweight implementation that trains the same model with lower compute resources, at the cost of only slightly lower accuracy.

7 Solving Vision Problems with MobileNet

7.1 Lightweight Models and MobileNet

We have seen that complex networks require significant computational resources, such as a GPU, for training and also for fast inference. However, it turns out that in most cases a model with a significantly smaller number of parameters can still be trained to perform reasonably well. In other words, an increase in model complexity typically yields only a small (non-proportional) increase in model performance.

We observed this at the beginning of the module when training MNIST digit classification: the accuracy of a simple fully-connected model was not dramatically worse than that of a powerful CNN. Increasing the number of CNN layers and/or the number of neurons in the classifier bought us at most a few percentage points of accuracy.

This leads to the idea that we can experiment with lightweight network architectures in order to train faster models. This is especially important if we want our model to run on mobile devices.

This module relies on the Cats and Dogs dataset we downloaded in the previous unit. First, we will make sure the dataset is available.

The following Python helper script (pytorchcv.py, the same one used earlier) hides the implementation details used in this unit:

python 复制代码
# Script file to hide implementation details for PyTorch computer vision module

import builtins
import torch
import torch.nn as nn
from torch.utils import data
import torchvision
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import glob
import os
import zipfile

default_device = 'cuda' if torch.cuda.is_available() else 'cpu'


def load_mnist(batch_size=64):
    builtins.data_train = torchvision.datasets.MNIST('./data',
                                                     download=True, train=True, transform=ToTensor())
    builtins.data_test = torchvision.datasets.MNIST('./data',
                                                    download=True, train=False, transform=ToTensor())
    builtins.train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size)
    builtins.test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)


def train_epoch(net, dataloader, lr=0.01, optimizer=None, loss_fn=nn.NLLLoss()):
    optimizer = optimizer or torch.optim.Adam(net.parameters(), lr=lr)
    net.train()
    total_loss, acc, count = 0, 0, 0
    for features, labels in dataloader:
        optimizer.zero_grad()
        lbls = labels.to(default_device)
        out = net(features.to(default_device))
        loss = loss_fn(out, lbls)  # cross_entropy(out,labels)
        loss.backward()
        optimizer.step()
        total_loss += loss
        _, predicted = torch.max(out, 1)
        acc += (predicted == lbls).sum()
        count += len(labels)
    return total_loss.item() / count, acc.item() / count


def validate(net, dataloader, loss_fn=nn.NLLLoss()):
    net.eval()
    count, acc, loss = 0, 0, 0
    with torch.no_grad():
        for features, labels in dataloader:
            lbls = labels.to(default_device)
            out = net(features.to(default_device))
            loss += loss_fn(out, lbls)
            pred = torch.max(out, 1)[1]
            acc += (pred == lbls).sum()
            count += len(labels)
    return loss.item() / count, acc.item() / count


def train(net, train_loader, test_loader, optimizer=None, lr=0.01, epochs=10, loss_fn=nn.NLLLoss()):
    optimizer = optimizer or torch.optim.Adam(net.parameters(), lr=lr)
    res = {'train_loss': [], 'train_acc': [], 'val_loss': [], 'val_acc': []}
    for ep in range(epochs):
        tl, ta = train_epoch(net, train_loader, optimizer=optimizer, lr=lr, loss_fn=loss_fn)
        vl, va = validate(net, test_loader, loss_fn=loss_fn)
        print(f"Epoch {ep:2}, Train acc={ta:.3f}, Val acc={va:.3f}, Train loss={tl:.3f}, Val loss={vl:.3f}")
        res['train_loss'].append(tl)
        res['train_acc'].append(ta)
        res['val_loss'].append(vl)
        res['val_acc'].append(va)
    return res


def train_long(net, train_loader, test_loader, epochs=5, lr=0.01, optimizer=None, loss_fn=nn.NLLLoss(), print_freq=10):
    optimizer = optimizer or torch.optim.Adam(net.parameters(), lr=lr)
    for epoch in range(epochs):
        net.train()
        total_loss, acc, count = 0, 0, 0
        for i, (features, labels) in enumerate(train_loader):
            lbls = labels.to(default_device)
            optimizer.zero_grad()
            out = net(features.to(default_device))
            loss = loss_fn(out, lbls)
            loss.backward()
            optimizer.step()
            total_loss += loss
            _, predicted = torch.max(out, 1)
            acc += (predicted == lbls).sum()
            count += len(labels)
            if i % print_freq == 0:
                print("Epoch {}, minibatch {}: train acc = {}, train loss = {}".format(epoch, i, acc.item() / count,
                                                                                       total_loss.item() / count))
        vl, va = validate(net, test_loader, loss_fn)
        print("Epoch {} done, validation acc = {}, validation loss = {}".format(epoch, va, vl))


def plot_results(hist):
    plt.figure(figsize=(15, 5))
    plt.subplot(121)
    plt.plot(hist['train_acc'], label='Training acc')
    plt.plot(hist['val_acc'], label='Validation acc')
    plt.legend()
    plt.subplot(122)
    plt.plot(hist['train_loss'], label='Training loss')
    plt.plot(hist['val_loss'], label='Validation loss')
    plt.legend()


def plot_convolution(t, title=''):
    with torch.no_grad():
        c = nn.Conv2d(kernel_size=(3, 3), out_channels=1, in_channels=1)
        c.weight.copy_(t)
        fig, ax = plt.subplots(2, 6, figsize=(8, 3))
        fig.suptitle(title, fontsize=16)
        for i in range(5):
            im = data_train[i][0]
            ax[0][i].imshow(im[0])
            ax[1][i].imshow(c(im.unsqueeze(0))[0][0])
            ax[0][i].axis('off')
            ax[1][i].axis('off')
        ax[0, 5].imshow(t)
        ax[0, 5].axis('off')
        ax[1, 5].axis('off')
        # plt.tight_layout()
        plt.show()


def display_dataset(dataset, n=10, classes=None):
    fig, ax = plt.subplots(1, n, figsize=(15, 3))
    mn = min([dataset[i][0].min() for i in range(n)])
    mx = max([dataset[i][0].max() for i in range(n)])
    for i in range(n):
        ax[i].imshow(np.transpose((dataset[i][0] - mn) / (mx - mn), (1, 2, 0)))
        ax[i].axis('off')
        if classes:
            ax[i].set_title(classes[dataset[i][1]])


def check_image(fn):
    try:
        im = Image.open(fn)
        im.verify()
        return True
    except Exception:
        return False


def check_image_dir(path):
    for fn in glob.glob(path):
        if not check_image(fn):
            print("Corrupt image: {}".format(fn))
            os.remove(fn)


def common_transform():
    std_normalize = torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                     std=[0.229, 0.224, 0.225])
    trans = torchvision.transforms.Compose([
        torchvision.transforms.Resize(256),
        torchvision.transforms.CenterCrop(224),
        torchvision.transforms.ToTensor(),
        std_normalize])
    return trans


def load_cats_dogs_dataset():
    if not os.path.exists('data/PetImages'):
        with zipfile.ZipFile('data/kagglecatsanddogs_5340.zip', 'r') as zip_ref:
            zip_ref.extractall('data')

    check_image_dir('data/PetImages/Cat/*.jpg')
    check_image_dir('data/PetImages/Dog/*.jpg')

    dataset = torchvision.datasets.ImageFolder('data/PetImages', transform=common_transform())
    trainset, testset = torch.utils.data.random_split(dataset, [20000, len(dataset) - 20000])
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=32)
    testloader = torch.utils.data.DataLoader(testset, batch_size=32)
    return dataset, trainloader, testloader

Version requirements:

torchvision==0.13.0
torch==1.12.0

Create a new file and load the required modules:

python 复制代码
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from torchinfo import summary
import numpy as np
import os

from pytorchcv import train, plot_results, display_dataset, train_long, check_image_dir, load_cats_dogs_dataset
python 复制代码
if not os.path.exists('data/kagglecatsanddogs_5340.zip'):
    # download the dataset (the original notebook used a shell `!wget` command here)
    os.makedirs('data', exist_ok=True)
    torch.hub.download_url_to_file(
        'https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_5340.zip',
        'data/kagglecatsanddogs_5340.zip')

dataset, train_loader, test_loader = load_cats_dogs_dataset()

7.2 MobileNet

The ResNet architecture for image classification was introduced in the previous unit. A more lightweight counterpart of ResNet is MobileNet, which uses so-called Inverted Residual Blocks. Let's load the pretrained MobileNet and see how it works:

python 复制代码
model = torch.hub.load('pytorch/vision:v0.13.0', 'mobilenet_v2', weights='MobileNet_V2_Weights.DEFAULT')
model.eval()
print(model)

The output is:

Using cache found in /home/vmuser/.cache/torch/hub/pytorch_vision_v0.6.0
MobileNetV2(
  (features): Sequential(
    (0): ConvBNReLU(
      (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU6(inplace=True)
    )
    (1): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (2): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
          (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (3): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=144, bias=False)
          (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (4): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(144, 144, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=144, bias=False)
          (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (5): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (6): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (7): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=192, bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (8): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (9): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (10): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (11): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (12): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (13): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (14): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=576, bias=False)
          (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (15): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (16): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (17): InvertedResidual(
      (conv): Sequential(
        (0): ConvBNReLU(
          (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (1): ConvBNReLU(
          (0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
          (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace=True)
        )
        (2): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (18): ConvBNReLU(
      (0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU6(inplace=True)
    )
  )
  (classifier): Sequential(
    (0): Dropout(p=0.2, inplace=False)
    (1): Linear(in_features=1280, out_features=1000, bias=True)
  )
)
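
Each InvertedResidual block in this printout follows the same pattern: a 1x1 convolution expands the channel count, a 3x3 depthwise convolution (groups equal to the channel count) filters each channel independently, and a final 1x1 convolution projects back down with no nonlinearity. A simplified sketch of that structure (illustrative only; the real block also adds a skip connection when stride is 1 and the input and output channel counts match):

python 复制代码
def inverted_residual(cin, cout, stride=1, expand=6):
    hidden = cin * expand
    return nn.Sequential(
        nn.Conv2d(cin, hidden, 1, bias=False),                               # 1x1 expansion
        nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
        nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),  # depthwise 3x3
        nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
        nn.Conv2d(hidden, cout, 1, bias=False),                              # linear 1x1 projection
        nn.BatchNorm2d(cout),
    )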

Let's apply the model to our dataset and make sure that it works.

python 复制代码
sample_image = dataset[0][0].unsqueeze(0)
res = model(sample_image)
print(res[0].argmax())

The output is:

tensor(281)

The result (281) is an ImageNet class number, which we discussed in a previous unit.
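
If you want a human-readable label instead of the raw index, the weights metadata in torchvision >= 0.13 carries the ImageNet category names; a small sketch:

python 复制代码
from torchvision.models import MobileNet_V2_Weights

categories = MobileNet_V2_Weights.DEFAULT.meta['categories']
print(categories[res[0].argmax().item()])   # the predicted ImageNet class name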

Note that there is a significant difference in the number of parameters between MobileNet and the full-scale ResNet models. In some ways, MobileNet is even more compact than the VGG model family, though somewhat less accurate: a reduction in the number of parameters naturally comes with some drop in model accuracy.
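
To see the difference concretely, we can count the parameters of each architecture; a quick sketch (the counts in the comments are approximate):

python 复制代码
def count_params(m):
    return sum(p.numel() for p in m.parameters())

print(count_params(model))                          # MobileNetV2: ~3.5M
print(count_params(torchvision.models.resnet18()))  # ResNet-18:  ~11.7M
print(count_params(torchvision.models.vgg16()))     # VGG-16:     ~138M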

7.3 Transfer Learning with MobileNet

Now let's perform the same transfer-learning process as in the previous unit, but using MobileNet. First, let's freeze all parameters of the model:

python 复制代码
for x in model.parameters():
    x.requires_grad = False

Then, replace the final classifier. We also transfer the model to our default training device (GPU or CPU):

python 复制代码
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.classifier = nn.Linear(1280,2)
model = model.to(device)
summary(model,input_size=(1,3,244,244))

The output is:

==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
├─Sequential: 1-1                        [1, 1280, 8, 8]           --
|    └─ConvBNReLU: 2-1                   [1, 32, 122, 122]         --
|    |    └─Conv2d: 3-1                  [1, 32, 122, 122]         (864)
|    |    └─BatchNorm2d: 3-2             [1, 32, 122, 122]         (64)
|    |    └─ReLU6: 3-3                   [1, 32, 122, 122]         --
|    └─InvertedResidual: 2-2             [1, 16, 122, 122]         --
|    |    └─Sequential: 3-4              [1, 16, 122, 122]         (896)
|    └─InvertedResidual: 2-3             [1, 24, 61, 61]           --
|    |    └─Sequential: 3-5              [1, 24, 61, 61]           (5,136)
|    └─InvertedResidual: 2-4             [1, 24, 61, 61]           --
|    |    └─Sequential: 3-6              [1, 24, 61, 61]           (8,832)
|    └─InvertedResidual: 2-5             [1, 32, 31, 31]           --
|    |    └─Sequential: 3-7              [1, 32, 31, 31]           (10,000)
|    └─InvertedResidual: 2-6             [1, 32, 31, 31]           --
|    |    └─Sequential: 3-8              [1, 32, 31, 31]           (14,848)
|    └─InvertedResidual: 2-7             [1, 32, 31, 31]           --
|    |    └─Sequential: 3-9              [1, 32, 31, 31]           (14,848)
|    └─InvertedResidual: 2-8             [1, 64, 16, 16]           --
|    |    └─Sequential: 3-10             [1, 64, 16, 16]           (21,056)
|    └─InvertedResidual: 2-9             [1, 64, 16, 16]           --
|    |    └─Sequential: 3-11             [1, 64, 16, 16]           (54,272)
|    └─InvertedResidual: 2-10            [1, 64, 16, 16]           --
|    |    └─Sequential: 3-12             [1, 64, 16, 16]           (54,272)
|    └─InvertedResidual: 2-11            [1, 64, 16, 16]           --
|    |    └─Sequential: 3-13             [1, 64, 16, 16]           (54,272)
|    └─InvertedResidual: 2-12            [1, 96, 16, 16]           --
|    |    └─Sequential: 3-14             [1, 96, 16, 16]           (66,624)
|    └─InvertedResidual: 2-13            [1, 96, 16, 16]           --
|    |    └─Sequential: 3-15             [1, 96, 16, 16]           (118,272)
|    └─InvertedResidual: 2-14            [1, 96, 16, 16]           --
|    |    └─Sequential: 3-16             [1, 96, 16, 16]           (118,272)
|    └─InvertedResidual: 2-15            [1, 160, 8, 8]            --
|    |    └─Sequential: 3-17             [1, 160, 8, 8]            (155,264)
|    └─InvertedResidual: 2-16            [1, 160, 8, 8]            --
|    |    └─Sequential: 3-18             [1, 160, 8, 8]            (320,000)
|    └─InvertedResidual: 2-17            [1, 160, 8, 8]            --
|    |    └─Sequential: 3-19             [1, 160, 8, 8]            (320,000)
|    └─InvertedResidual: 2-18            [1, 320, 8, 8]            --
|    |    └─Sequential: 3-20             [1, 320, 8, 8]            (473,920)
|    └─ConvBNReLU: 2-19                  [1, 1280, 8, 8]           --
|    |    └─Conv2d: 3-21                 [1, 1280, 8, 8]           (409,600)
|    |    └─BatchNorm2d: 3-22            [1, 1280, 8, 8]           (2,560)
|    |    └─ReLU6: 3-23                  [1, 1280, 8, 8]           --
├─Linear: 1-2                            [1, 2]                    2,562
==========================================================================================
Total params: 2,226,434
Trainable params: 2,562
Non-trainable params: 2,223,872
Total mult-adds (M): 196.40
==========================================================================================
Input size (MB): 0.71
Forward/backward pass size (MB): 20.12
Params size (MB): 8.91
Estimated Total Size (MB): 29.74
==========================================================================================

Now let's do the actual training:

python 复制代码
train_long(model,train_loader,test_loader,loss_fn=torch.nn.CrossEntropyLoss(),epochs=1,print_freq=90)

The output is:

Epoch 0, minibatch 0: train acc = 0.5, train loss = 0.02309325896203518
Epoch 0, minibatch 90: train acc = 0.9443681318681318, train loss = 0.006317565729329874
Epoch 0, minibatch 180: train acc = 0.9488950276243094, train loss = 0.00590015182178982
Epoch 0, minibatch 270: train acc = 0.9492619926199262, train loss = 0.006072205810969167
Epoch 0, minibatch 360: train acc = 0.9500519390581718, train loss = 0.00641324315374908
Epoch 0, minibatch 450: train acc = 0.9494872505543237, train loss = 0.006945275943189397
Epoch 0, minibatch 540: train acc = 0.9521141404805915, train loss = 0.0067323536617257896
Epoch 0 done, validation acc = 0.98245, validation loss = 0.002347727584838867

7.4 Summary

Notice that MobileNet achieves almost the same accuracy as VGG-16, and only slightly lower than full-scale ResNet.

The main advantage of small models such as MobileNet or ResNet-18 is that they can be used on mobile devices. An official example of using ResNet-18 on an Android device is available, as is a similar example using MobileNet.
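
Before deploying, it is also worth measuring how fast the model actually runs; a rough latency-benchmark sketch (the helper and run count are illustrative, and on a GPU you would additionally need torch.cuda.synchronize for exact timings):

python 复制代码
import time

def mean_latency(m, runs=20):
    m.eval()
    x = torch.randn(1, 3, 224, 224).to(device)
    with torch.no_grad():
        m(x)                          # warm-up run
        start = time.time()
        for _ in range(runs):
            m(x)
    return (time.time() - start) / runs

print(f'MobileNet: {mean_latency(model)*1000:.1f} ms per image')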


8 Tutorial Summary

In this module, you have learned how convolutional neural networks work and how they capture patterns in two-dimensional images. In fact, CNNs can also be used to find patterns in one-dimensional signals (such as sound waves or time series) and in multi-dimensional structures (such as events in video, where certain patterns repeat across frames).

Furthermore, CNNs are simple building blocks for solving more complex computer vision tasks, such as image generation. Generative adversarial networks can be used to produce images similar to those in a given dataset, for example, computer-generated paintings. Similarly, CNNs are used for object detection, instance segmentation, and more. A separate course covers how to implement neural networks for these problems, and we recommend continuing your studies to master computer vision!
