Deep Learning Camp - Week J4: Exploring the Combination of ResNet and DenseNet

This week's tasks:

  • Explore ways of combining ResNet and DenseNet
  • This week's task is relatively difficult; it was completed with the help of ChatGPT

1. Building the Network

We design a network architecture that combines ResNet and DenseNet, aiming to balance performance against complexity while keeping the training speed comparable to DenseNet-121. Following the steps below, we build a new architecture tentatively named ResDenseNet. It combines the advantages of ResNet's residual connections and DenseNet's dense connections while keeping the complexity under control.

Design rationale

Combining residual and dense modules:

At different stages of the network, use residual modules (ResBlock) to capture shallow features.

Introduce dense modules (DenseBlock) in the later part of each stage for efficient feature reuse.

Tune the number of channels in each layer to avoid excessive computation and memory consumption.

Bottleneck design (Bottleneck Block):

Each module uses a bottleneck layer to reduce computational complexity.

Use 1x1 convolutions to compress and expand the number of feature channels.

Hybrid connection scheme:

Introduce local dense connections that only link layers within the same module, avoiding the memory overhead of DenseNet's full connectivity.

Use residual connections between modules so information flows easily.

Balancing network depth and width:

Reduce DenseNet's growth rate to moderate the increase in feature-map channels.

Insert transition layers (Transition Layer) between modules to compress the feature-map size and channel count; a worked channel count follows below.
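
As a concrete example of the channel bookkeeping (using the stage-1 settings chosen in the code below): a DenseBlock with num_layers layers and growth rate k maps C_in input channels to C_in + num_layers × k output channels, so stage 1 grows 64 → 64 + 4 × 16 = 128 channels, and the following transition layer keeps 128 channels while halving the spatial resolution.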

python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, in_channels, growth_rate):
        super(Bottleneck, self).__init__()
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, 4 * growth_rate, kernel_size=1, stride=1, bias=False)
        self.bn2 = nn.BatchNorm2d(4 * growth_rate)
        self.conv2 = nn.Conv2d(4 * growth_rate, growth_rate, kernel_size=3, stride=1, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(self.bn1(x))
        out = self.conv2(self.bn2(out))
        return torch.cat([x, out], dim=1)

class DenseBlock(nn.Module):
    def __init__(self, num_layers, in_channels, growth_rate):
        super(DenseBlock, self).__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(Bottleneck(in_channels + i * growth_rate, growth_rate))
        # For the residual connection, a 1x1 conv projects the input to match the DenseBlock output channels
        self.residual = nn.Conv2d(in_channels, in_channels + num_layers * growth_rate, kernel_size=1, bias=False)

    def forward(self, x):
        identity = self.residual(x)  # project the input so it matches the DenseBlock output channels
        for layer in self.layers:
            x = layer(x)  # dense connectivity: each layer's output is concatenated to its input
        return x + identity  # residual connection: add the projected input to the dense output

class TransitionLayer(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(TransitionLayer, self).__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        x = self.conv(self.bn(x))
        return self.pool(x)

class ResDenseNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(ResDenseNet, self).__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        self.stage1 = self._make_stage(64, 128, num_layers=4, growth_rate=16)
        self.stage2 = self._make_stage(128, 256, num_layers=4, growth_rate=16)
        self.stage3 = self._make_stage(256, 512, num_layers=6, growth_rate=12)
        self.stage4 = self._make_stage(512, 1024, num_layers=6, growth_rate=12)
        self.classifier = nn.Linear(1024, num_classes)

    def _make_stage(self, in_channels, out_channels, num_layers, growth_rate):
        dense_block = DenseBlock(num_layers, in_channels, growth_rate)
        transition = TransitionLayer(in_channels + num_layers * growth_rate, out_channels)
        return nn.Sequential(dense_block, transition)

    def forward(self, x):
        x = self.stem(x)
        x = self.stage1(x)
        x = self.stage2(x)
        x = self.stage3(x)
        x = self.stage4(x)
        x = torch.mean(x, dim=[2, 3])  # Global Average Pooling
        return self.classifier(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ResDenseNet().to(device)
model

Code output:

python
ResDenseNet(
  (stem): Sequential(
    (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (stage1): Sequential(
    (0): DenseBlock(
      (layers): ModuleList(
        (0): Bottleneck(
          (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (1): Bottleneck(
          (bn1): BatchNorm2d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(80, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (2): Bottleneck(
          (bn1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(96, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (3): Bottleneck(
          (bn1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(112, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
      )
      (residual): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    )
    (1): TransitionLayer(
      (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
  )
  (stage2): Sequential(
    (0): DenseBlock(
      (layers): ModuleList(
        (0): Bottleneck(
          (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (1): Bottleneck(
          (bn1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(144, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (2): Bottleneck(
          (bn1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(160, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (3): Bottleneck(
          (bn1): BatchNorm2d(176, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(176, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
      )
      (residual): Conv2d(128, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
    )
    (1): TransitionLayer(
      (bn): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv): Conv2d(192, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
  )
  (stage3): Sequential(
    (0): DenseBlock(
      (layers): ModuleList(
        (0): Bottleneck(
          (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(256, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (1): Bottleneck(
          (bn1): BatchNorm2d(268, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(268, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (2): Bottleneck(
          (bn1): BatchNorm2d(280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(280, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (3): Bottleneck(
          (bn1): BatchNorm2d(292, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(292, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (4): Bottleneck(
          (bn1): BatchNorm2d(304, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(304, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (5): Bottleneck(
          (bn1): BatchNorm2d(316, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(316, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
      )
      (residual): Conv2d(256, 328, kernel_size=(1, 1), stride=(1, 1), bias=False)
    )
    (1): TransitionLayer(
      (bn): BatchNorm2d(328, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv): Conv2d(328, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
  )
  (stage4): Sequential(
    (0): DenseBlock(
      (layers): ModuleList(
        (0): Bottleneck(
          (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(512, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (1): Bottleneck(
          (bn1): BatchNorm2d(524, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(524, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (2): Bottleneck(
          (bn1): BatchNorm2d(536, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(536, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (3): Bottleneck(
          (bn1): BatchNorm2d(548, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(548, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (4): Bottleneck(
          (bn1): BatchNorm2d(560, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(560, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (5): Bottleneck(
          (bn1): BatchNorm2d(572, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv1): Conv2d(572, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn2): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (conv2): Conv2d(48, 12, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
      )
      (residual): Conv2d(512, 584, kernel_size=(1, 1), stride=(1, 1), bias=False)
    )
    (1): TransitionLayer(
      (bn): BatchNorm2d(584, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv): Conv2d(584, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
  )
  (classifier): Linear(in_features=1024, out_features=1000, bias=True)
)

Code input:

python
import torchsummary as summary
summary.summary(model, (3, 224, 224))

Code output:

python
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 112, 112]           9,408
       BatchNorm2d-2         [-1, 64, 112, 112]             128
              ReLU-3         [-1, 64, 112, 112]               0
         MaxPool2d-4           [-1, 64, 56, 56]               0
            Conv2d-5          [-1, 128, 56, 56]           8,192
       BatchNorm2d-6           [-1, 64, 56, 56]             128
            Conv2d-7           [-1, 64, 56, 56]           4,096
       BatchNorm2d-8           [-1, 64, 56, 56]             128
            Conv2d-9           [-1, 16, 56, 56]           9,216
       Bottleneck-10           [-1, 80, 56, 56]               0
      BatchNorm2d-11           [-1, 80, 56, 56]             160
           Conv2d-12           [-1, 64, 56, 56]           5,120
      BatchNorm2d-13           [-1, 64, 56, 56]             128
           Conv2d-14           [-1, 16, 56, 56]           9,216
       Bottleneck-15           [-1, 96, 56, 56]               0
      BatchNorm2d-16           [-1, 96, 56, 56]             192
           Conv2d-17           [-1, 64, 56, 56]           6,144
      BatchNorm2d-18           [-1, 64, 56, 56]             128
           Conv2d-19           [-1, 16, 56, 56]           9,216
       Bottleneck-20          [-1, 112, 56, 56]               0
      BatchNorm2d-21          [-1, 112, 56, 56]             224
           Conv2d-22           [-1, 64, 56, 56]           7,168
      BatchNorm2d-23           [-1, 64, 56, 56]             128
           Conv2d-24           [-1, 16, 56, 56]           9,216
       Bottleneck-25          [-1, 128, 56, 56]               0
       DenseBlock-26          [-1, 128, 56, 56]               0
      BatchNorm2d-27          [-1, 128, 56, 56]             256
           Conv2d-28          [-1, 128, 56, 56]          16,384
        AvgPool2d-29          [-1, 128, 28, 28]               0
  TransitionLayer-30          [-1, 128, 28, 28]               0
           Conv2d-31          [-1, 192, 28, 28]          24,576
      BatchNorm2d-32          [-1, 128, 28, 28]             256
           Conv2d-33           [-1, 64, 28, 28]           8,192
      BatchNorm2d-34           [-1, 64, 28, 28]             128
           Conv2d-35           [-1, 16, 28, 28]           9,216
       Bottleneck-36          [-1, 144, 28, 28]               0
      BatchNorm2d-37          [-1, 144, 28, 28]             288
           Conv2d-38           [-1, 64, 28, 28]           9,216
      BatchNorm2d-39           [-1, 64, 28, 28]             128
           Conv2d-40           [-1, 16, 28, 28]           9,216
       Bottleneck-41          [-1, 160, 28, 28]               0
      BatchNorm2d-42          [-1, 160, 28, 28]             320
           Conv2d-43           [-1, 64, 28, 28]          10,240
      BatchNorm2d-44           [-1, 64, 28, 28]             128
           Conv2d-45           [-1, 16, 28, 28]           9,216
       Bottleneck-46          [-1, 176, 28, 28]               0
      BatchNorm2d-47          [-1, 176, 28, 28]             352
           Conv2d-48           [-1, 64, 28, 28]          11,264
      BatchNorm2d-49           [-1, 64, 28, 28]             128
           Conv2d-50           [-1, 16, 28, 28]           9,216
       Bottleneck-51          [-1, 192, 28, 28]               0
       DenseBlock-52          [-1, 192, 28, 28]               0
      BatchNorm2d-53          [-1, 192, 28, 28]             384
           Conv2d-54          [-1, 256, 28, 28]          49,152
        AvgPool2d-55          [-1, 256, 14, 14]               0
  TransitionLayer-56          [-1, 256, 14, 14]               0
           Conv2d-57          [-1, 328, 14, 14]          83,968
      BatchNorm2d-58          [-1, 256, 14, 14]             512
           Conv2d-59           [-1, 48, 14, 14]          12,288
      BatchNorm2d-60           [-1, 48, 14, 14]              96
           Conv2d-61           [-1, 12, 14, 14]           5,184
       Bottleneck-62          [-1, 268, 14, 14]               0
      BatchNorm2d-63          [-1, 268, 14, 14]             536
           Conv2d-64           [-1, 48, 14, 14]          12,864
      BatchNorm2d-65           [-1, 48, 14, 14]              96
           Conv2d-66           [-1, 12, 14, 14]           5,184
       Bottleneck-67          [-1, 280, 14, 14]               0
      BatchNorm2d-68          [-1, 280, 14, 14]             560
           Conv2d-69           [-1, 48, 14, 14]          13,440
      BatchNorm2d-70           [-1, 48, 14, 14]              96
           Conv2d-71           [-1, 12, 14, 14]           5,184
       Bottleneck-72          [-1, 292, 14, 14]               0
      BatchNorm2d-73          [-1, 292, 14, 14]             584
           Conv2d-74           [-1, 48, 14, 14]          14,016
      BatchNorm2d-75           [-1, 48, 14, 14]              96
           Conv2d-76           [-1, 12, 14, 14]           5,184
       Bottleneck-77          [-1, 304, 14, 14]               0
      BatchNorm2d-78          [-1, 304, 14, 14]             608
           Conv2d-79           [-1, 48, 14, 14]          14,592
      BatchNorm2d-80           [-1, 48, 14, 14]              96
           Conv2d-81           [-1, 12, 14, 14]           5,184
       Bottleneck-82          [-1, 316, 14, 14]               0
      BatchNorm2d-83          [-1, 316, 14, 14]             632
           Conv2d-84           [-1, 48, 14, 14]          15,168
      BatchNorm2d-85           [-1, 48, 14, 14]              96
           Conv2d-86           [-1, 12, 14, 14]           5,184
       Bottleneck-87          [-1, 328, 14, 14]               0
       DenseBlock-88          [-1, 328, 14, 14]               0
      BatchNorm2d-89          [-1, 328, 14, 14]             656
           Conv2d-90          [-1, 512, 14, 14]         167,936
        AvgPool2d-91            [-1, 512, 7, 7]               0
  TransitionLayer-92            [-1, 512, 7, 7]               0
           Conv2d-93            [-1, 584, 7, 7]         299,008
      BatchNorm2d-94            [-1, 512, 7, 7]           1,024
           Conv2d-95             [-1, 48, 7, 7]          24,576
      BatchNorm2d-96             [-1, 48, 7, 7]              96
           Conv2d-97             [-1, 12, 7, 7]           5,184
       Bottleneck-98            [-1, 524, 7, 7]               0
      BatchNorm2d-99            [-1, 524, 7, 7]           1,048
          Conv2d-100             [-1, 48, 7, 7]          25,152
     BatchNorm2d-101             [-1, 48, 7, 7]              96
          Conv2d-102             [-1, 12, 7, 7]           5,184
      Bottleneck-103            [-1, 536, 7, 7]               0
     BatchNorm2d-104            [-1, 536, 7, 7]           1,072
          Conv2d-105             [-1, 48, 7, 7]          25,728
     BatchNorm2d-106             [-1, 48, 7, 7]              96
          Conv2d-107             [-1, 12, 7, 7]           5,184
      Bottleneck-108            [-1, 548, 7, 7]               0
     BatchNorm2d-109            [-1, 548, 7, 7]           1,096
          Conv2d-110             [-1, 48, 7, 7]          26,304
     BatchNorm2d-111             [-1, 48, 7, 7]              96
          Conv2d-112             [-1, 12, 7, 7]           5,184
      Bottleneck-113            [-1, 560, 7, 7]               0
     BatchNorm2d-114            [-1, 560, 7, 7]           1,120
          Conv2d-115             [-1, 48, 7, 7]          26,880
     BatchNorm2d-116             [-1, 48, 7, 7]              96
          Conv2d-117             [-1, 12, 7, 7]           5,184
      Bottleneck-118            [-1, 572, 7, 7]               0
     BatchNorm2d-119            [-1, 572, 7, 7]           1,144
          Conv2d-120             [-1, 48, 7, 7]          27,456
     BatchNorm2d-121             [-1, 48, 7, 7]              96
          Conv2d-122             [-1, 12, 7, 7]           5,184
      Bottleneck-123            [-1, 584, 7, 7]               0
      DenseBlock-124            [-1, 584, 7, 7]               0
     BatchNorm2d-125            [-1, 584, 7, 7]           1,168
          Conv2d-126           [-1, 1024, 7, 7]         598,016
       AvgPool2d-127           [-1, 1024, 3, 3]               0
 TransitionLayer-128           [-1, 1024, 3, 3]               0
          Linear-129                 [-1, 1000]       1,025,000
================================================================
Total params: 2,734,104
Trainable params: 2,734,104
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 95.40
Params size (MB): 10.43
Estimated Total Size (MB): 106.41
----------------------------------------------------------------
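
To put the roughly 2.73M parameters above in perspective, here is a minimal sketch comparing the parameter count with torchvision's DenseNet-121 (this assumes torchvision ≥ 0.13 is installed; DenseNet-121 has roughly 8M parameters, so this hybrid is considerably smaller):

python
# Rough parameter-count comparison with DenseNet-121 (sketch; requires torchvision).
from torchvision import models

densenet121 = models.densenet121(weights=None)  # untrained reference model
resdense_params = sum(p.numel() for p in model.parameters())
densenet_params = sum(p.numel() for p in densenet121.parameters())
print(f"ResDenseNet : {resdense_params / 1e6:.2f}M parameters")
print(f"DenseNet-121: {densenet_params / 1e6:.2f}M parameters")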

Next, let's briefly walk through the network we built:

  1. First we build the Bottleneck, the basic component of the DenseBlock; it consists of two batch-normalization layers and two convolutions (a 1x1 compression followed by a 3x3 convolution that outputs growth_rate channels)
  2. Then we build the DenseBlock, whose layers are densely concatenated, and add a residual connection from a 1x1-projected copy of the block input
  3. We build the transition layer to compress the channels and pool the feature maps, so the final features can feed the fully connected classifier
  4. The overall network is structured as follows:
python
Input (224x224x3)
    |
    |   Conv2d (7x7, stride=2)
    |   BatchNorm2d
    |   ReLU
    |   MaxPool2d (3x3, stride=2)
    v
Stem Layer (64 channels)
    |
    v
Stage 1: DenseBlock + TransitionLayer (64 -> 128 channels)
    |  
    v
Stage 2: DenseBlock + TransitionLayer (128 -> 256 channels)
    |
    v
Stage 3: DenseBlock + TransitionLayer (256 -> 512 channels)
    |
    v
Stage 4: DenseBlock + TransitionLayer (512 -> 1024 channels)
    |
    v
Global Average Pooling (1024x1x1)
    |
    v
Fully Connected Layer (1024 -> num_classes)
    |
    v
Output (num_classes)
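
As a quick sanity check, we can push a dummy batch through the network and confirm the output shape. This is a minimal sketch reusing the model and device objects defined above:

python
# Sanity check: feed a dummy batch through the network and verify the output shape.
dummy = torch.randn(2, 3, 224, 224).to(device)
with torch.no_grad():
    out = model(dummy)
print(out.shape)  # expected: torch.Size([2, 1000])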

2. Last Week's Breast Cancer Recognition Task

python
import pathlib
data_dir = './data/J3-1-data'
data_dir = pathlib.Path(data_dir)

data_path = list(data_dir.glob('*'))
classNames = [path.name for path in data_path]
print(classNames)

Code output:

python
['0', '1']
python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])  # ImageNet channel mean/std
])

total_data = datasets.ImageFolder(data_dir, transform=train_transforms)
total_data

Code output:

python
Dataset ImageFolder
    Number of datapoints: 13403
    Root location: data\J3-1-data
    StandardTransform
Transform: Compose(
               Resize(size=[224, 224], interpolation=bilinear, max_size=None, antialias=True)
               ToTensor()
               Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
           )
python
train_size = int(0.7 * len(total_data))
remain_size = len(total_data) - train_size
train_dataset, remain_dataset = torch.utils.data.random_split(total_data, [train_size, remain_size])
test_size = int(0.6 * len(remain_dataset))
validate_size = len(remain_dataset) - test_size
test_dataset, validate_dataset = torch.utils.data.random_split(remain_dataset, [test_size, validate_size])  # randomly split the data
train_dataset, test_dataset, validate_dataset

Code output:

python
(<torch.utils.data.dataset.Subset at 0x2138402dbb0>,
 <torch.utils.data.dataset.Subset at 0x21383feb590>,
 <torch.utils.data.dataset.Subset at 0x21383ece690>)
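
One caveat: random_split draws a fresh random permutation each run, so the three subsets differ between runs. If a reproducible split is wanted, a seeded generator can be passed in, as in this sketch (the seed value 42 is an arbitrary choice):

python
# Optional: reproducible train/test/validate split with a fixed random seed (sketch).
g = torch.Generator().manual_seed(42)
train_dataset, remain_dataset = torch.utils.data.random_split(
    total_data, [train_size, remain_size], generator=g)
test_dataset, validate_dataset = torch.utils.data.random_split(
    remain_dataset, [test_size, validate_size], generator=g)
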
python
batch_size = 32

train_dl = DataLoader(
    train_dataset, 
    batch_size=batch_size,
    shuffle=True)

test_dl = DataLoader(
    test_dataset,
    batch_size = batch_size,
    shuffle = True
)

validate_dl = DataLoader(
    validate_dataset,
    batch_size = batch_size,
    shuffle = False
)

for x, y in validate_dl:
    print("shape of x [N, C, H, W]:", x.shape)
    print("shape of y:", y.shape, y.dtype)
    break

Code output:

python
shape of x [N, C, H, W]: torch.Size([32, 3, 224, 224])
shape of y: torch.Size([32]) torch.int64
python
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)

    train_loss, train_acc = 0, 0

    for x, y in dataloader:
        x, y = x.to(device), y.to(device)

        pred = model(x)
        loss = loss_fn(pred, y)

        # backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc /= size
    train_loss /= num_batches

    return train_acc, train_loss

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)

    test_loss, test_acc = 0, 0

    with torch.no_grad():  # no gradients are needed during evaluation
        for x, y in dataloader:
            x, y = x.to(device), y.to(device)

            pred = model(x)
            loss = loss_fn(pred, y)

            test_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
            test_loss += loss.item()

    test_acc /= size
    test_loss /= num_batches

    return test_acc, test_loss

Training:

python
import copy
from torch.optim.lr_scheduler import ReduceLROnPlateau

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.1, patience=5, verbose=True)  # multiply the lr by 0.1 when the monitored metric (test loss) has not improved for 5 consecutive epochs
loss_fn = nn.CrossEntropyLoss()  # cross-entropy loss

epochs = 32

train_loss = []
train_acc  = []
test_loss  = []
test_acc   = []

best_acc = 0    # best test accuracy so far, used to select the best model

for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, opt)
    
    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)
    scheduler.step(epoch_test_loss)
    
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # get the current learning rate
    lr = opt.state_dict()['param_groups'][0]['lr']
    
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss, 
                          epoch_test_acc*100, epoch_test_loss, lr))
    
# save the best model to disk
PATH = './best_model.pth'  # filename for the saved parameters
torch.save(best_model.state_dict(), PATH)

print('Done')

Code output:

python
Epoch: 1, Train_acc:80.7%, Train_loss:0.892, Test_acc:71.2%, Test_loss:1.992, Lr:1.00E-04
Epoch: 2, Train_acc:82.5%, Train_loss:0.409, Test_acc:83.9%, Test_loss:0.393, Lr:1.00E-04
Epoch: 3, Train_acc:83.4%, Train_loss:0.395, Test_acc:82.8%, Test_loss:0.443, Lr:1.00E-04
Epoch: 4, Train_acc:83.8%, Train_loss:0.380, Test_acc:84.1%, Test_loss:0.378, Lr:1.00E-04
Epoch: 5, Train_acc:84.2%, Train_loss:0.375, Test_acc:54.6%, Test_loss:1.337, Lr:1.00E-04
Epoch: 6, Train_acc:84.2%, Train_loss:0.378, Test_acc:84.7%, Test_loss:0.354, Lr:1.00E-04
Epoch: 7, Train_acc:84.7%, Train_loss:0.368, Test_acc:64.4%, Test_loss:0.696, Lr:1.00E-04
Epoch: 8, Train_acc:84.9%, Train_loss:0.360, Test_acc:84.7%, Test_loss:0.493, Lr:1.00E-04
Epoch: 9, Train_acc:85.1%, Train_loss:0.362, Test_acc:73.7%, Test_loss:0.506, Lr:1.00E-04
Epoch:10, Train_acc:85.2%, Train_loss:0.350, Test_acc:77.3%, Test_loss:0.791, Lr:1.00E-04
Epoch:11, Train_acc:85.5%, Train_loss:0.352, Test_acc:53.7%, Test_loss:2.223, Lr:1.00E-04
Epoch:12, Train_acc:85.6%, Train_loss:0.351, Test_acc:84.5%, Test_loss:0.438, Lr:1.00E-05
Epoch:13, Train_acc:86.7%, Train_loss:0.321, Test_acc:87.4%, Test_loss:0.295, Lr:1.00E-05
Epoch:14, Train_acc:86.5%, Train_loss:0.314, Test_acc:87.3%, Test_loss:0.296, Lr:1.00E-05
Epoch:15, Train_acc:87.2%, Train_loss:0.310, Test_acc:87.1%, Test_loss:0.320, Lr:1.00E-05
Epoch:16, Train_acc:87.6%, Train_loss:0.307, Test_acc:87.2%, Test_loss:0.297, Lr:1.00E-05
Epoch:17, Train_acc:87.4%, Train_loss:0.309, Test_acc:88.2%, Test_loss:0.289, Lr:1.00E-05
Epoch:18, Train_acc:87.0%, Train_loss:0.310, Test_acc:87.6%, Test_loss:0.293, Lr:1.00E-05
Epoch:19, Train_acc:87.1%, Train_loss:0.305, Test_acc:88.3%, Test_loss:0.281, Lr:1.00E-05
Epoch:20, Train_acc:87.6%, Train_loss:0.298, Test_acc:87.6%, Test_loss:0.299, Lr:1.00E-05
Epoch:21, Train_acc:87.5%, Train_loss:0.299, Test_acc:87.9%, Test_loss:0.289, Lr:1.00E-05
Epoch:22, Train_acc:87.5%, Train_loss:0.299, Test_acc:88.3%, Test_loss:0.292, Lr:1.00E-05
Epoch:23, Train_acc:88.0%, Train_loss:0.296, Test_acc:86.4%, Test_loss:0.347, Lr:1.00E-05
Epoch:24, Train_acc:87.7%, Train_loss:0.299, Test_acc:88.1%, Test_loss:0.286, Lr:1.00E-05
Epoch:25, Train_acc:87.8%, Train_loss:0.294, Test_acc:86.4%, Test_loss:0.327, Lr:1.00E-06
Epoch:26, Train_acc:87.9%, Train_loss:0.290, Test_acc:87.5%, Test_loss:0.291, Lr:1.00E-06
Epoch:27, Train_acc:88.2%, Train_loss:0.286, Test_acc:88.9%, Test_loss:0.272, Lr:1.00E-06
Epoch:28, Train_acc:88.1%, Train_loss:0.287, Test_acc:88.6%, Test_loss:0.277, Lr:1.00E-06
Epoch:29, Train_acc:88.2%, Train_loss:0.286, Test_acc:89.4%, Test_loss:0.269, Lr:1.00E-06
Epoch:30, Train_acc:88.1%, Train_loss:0.285, Test_acc:89.1%, Test_loss:0.271, Lr:1.00E-06
Epoch:31, Train_acc:88.1%, Train_loss:0.288, Test_acc:88.9%, Test_loss:0.274, Lr:1.00E-06
Epoch:32, Train_acc:87.9%, Train_loss:0.291, Test_acc:89.1%, Test_loss:0.275, Lr:1.00E-06
Done

Judging by these results, it does not do as well as last week's DenseNet-121.

Visualizing the results:

python
import matplotlib.pyplot as plt
epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Code output: (training and validation accuracy/loss curves)

Accuracy on the validation set:

python
def validate(dataloader, model):
    model.eval()
    size = len(dataloader.dataset)
    num_batches = len(dataloader)

    validate_acc = 0

    with torch.no_grad():  # no gradients are needed during evaluation
        for x, y in dataloader:
            x, y = x.to(device), y.to(device)

            pred = model(x)

            validate_acc += (pred.argmax(1) == y).type(torch.float).sum().item()

    validate_acc /= size

    return validate_acc


# compute accuracy on the validation set
validate_acc = validate(validate_dl, best_model)
print(f"Validation Accuracy: {validate_acc:.2%}")

Code output:

python
Validation Accuracy: 89.37%

The validation accuracy reaches 89.4%.

3. Summary

This combination was put together mainly with GPT's help and is only a simple merge of the two ideas. I have seen many people say that the DPN (Dual Path Networks) architecture reported in the literature tackles exactly this kind of combination, so I will take a look at it later.
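
For reference, the DPN idea mentioned above keeps both kinds of connection explicitly inside a single block: part of each layer's output is added to a residual path, while the rest is concatenated onto a dense path. Below is a rough, simplified sketch of that idea (not the paper's exact block; the channel split and the 1x1 projection here are my own assumptions):

python
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    """Simplified dual-path block: the first res_channels output channels are added
    to a residual path, the remaining dense_channels are concatenated to a dense path."""
    def __init__(self, in_channels, res_channels, dense_channels):
        super().__init__()
        self.res_channels = res_channels
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, res_channels + dense_channels,
                      kernel_size=3, padding=1, bias=False),
        )
        # 1x1 projection so the residual path has exactly res_channels
        self.proj = nn.Conv2d(in_channels, res_channels, kernel_size=1, bias=False)

    def forward(self, x):
        out = self.body(x)
        res = self.proj(x) + out[:, :self.res_channels]            # residual path (addition)
        dense = torch.cat([x, out[:, self.res_channels:]], dim=1)  # dense path (concatenation)
        return torch.cat([res, dense], dim=1)

# quick shape check: 32 (residual) + 64 + 16 (dense) = 112 output channels
block = DualPathBlock(in_channels=64, res_channels=32, dense_channels=16)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 112, 56, 56])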
