1 Neural Network Framework
1.1 Using the Module Class
torch.nn (Neural Network) groups its building blocks into the following categories:
Containers
Convolution Layers
Pooling layers
Padding Layers
Non-linear Activations (weighted sum, nonlinearity)
Non-linear Activations (other)
Normalization Layers
...

Containers include:
(1) Module: the base class for all neural network modules
https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module
class torch.nn.Module(*args, **kwargs)
```python
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, inputX):
        x = F.relu(self.conv1(inputX))
        return F.relu(self.conv2(x))
```
Inside forward(): relu() is the activation function and conv1/conv2 are the convolution layers. The data flows as: input inputX -> convolution -> non-linearity (relu) -> convolution -> non-linearity (relu).
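To see the module in use, instantiate it and call it on a dummy tensor; calling the module object invokes forward() automatically. A minimal sketch, assuming a single 1-channel 32x32 input:
```python
import torch

model = Model()
dummy = torch.randn(1, 1, 32, 32)  # (N, C_in, H, W): one 1-channel 32x32 image
out = model(dummy)                 # calling the module runs forward()
print(out.shape)                   # torch.Size([1, 20, 24, 24]) after two 5x5 convolutions
```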
Python code for a minimal custom Module:
```python
import torch
from torch import nn

class MyNN(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, inputX):
        outputX = inputX + 1
        return outputX

mynn = MyNN()
x = torch.tensor(1.0)
output = mynn(x)
print(output)
```
Output:
```
tensor(2.)
```
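Since forward() here just adds 1 element-wise, the same module accepts tensors of any shape. A quick continuation of the example above, reusing mynn:
```python
batch = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(mynn(batch))  # tensor([[2., 3.], [4., 5.]])
```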
1.2 Two-Dimensional Convolution

Two-dimensional convolution: conv2d()
Both the input and the output of conv2d() are 4-D tensors: the input has shape (N, C_in, H_in, W_in) and the output has shape (N, C_out, H_out, W_out) (the kernel likewise has to be reshaped to 4-D, as in the example below).
Take a 1024x800 input image and a 3x3 kernel: at each position the 9 overlapping elements are multiplied pairwise and summed, then the kernel slides one step to the right and the computation repeats. After reaching the right edge it moves down one row and continues from the left; the convolution is complete once the kernel reaches the bottom-right corner.
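With stride 1 and no padding, the kernel has H - 3 + 1 valid positions vertically and W - 3 + 1 horizontally, so the 1024x800 image above would give a 1022x798 output. A quick sketch to confirm the shape (the variable names are just illustrative):
```python
import torch
import torch.nn.functional as F

image = torch.randn(1, 1, 1024, 800)  # (N, C, H, W)
kernel3x3 = torch.randn(1, 1, 3, 3)
result = F.conv2d(image, kernel3x3, stride=1)
print(result.shape)                   # torch.Size([1, 1, 1022, 798])
```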
A concrete example with a 5x5 input and a 3x3 kernel:
```python
import torch
import torch.nn.functional as F

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]])
kernel = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]])

# F.conv2d expects 4-D tensors, so add batch and channel dimensions
input = torch.reshape(input, (1, 1, 5, 5))
kernel = torch.reshape(kernel, (1, 1, 3, 3))

print("input:")
print(input)
print("kernel:")
print(kernel)

output = F.conv2d(input, kernel, stride=1)
print("output:")
print(output)
```
Output:
```
input:
tensor([[[[1, 2, 0, 3, 1],
          [0, 1, 2, 3, 1],
          [1, 2, 1, 0, 0],
          [5, 2, 3, 1, 1],
          [2, 1, 0, 1, 1]]]])
kernel:
tensor([[[[1, 2, 1],
          [0, 1, 0],
          [2, 1, 0]]]])
output:
tensor([[[[10, 12, 12],
          [18, 16, 16],
          [13,  9,  3]]]])
```
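To sanity-check the first output element (10), multiply the top-left 3x3 patch of the input element-wise by the kernel and sum the products, reusing the input and kernel tensors defined above:
```python
# top-left patch [[1, 2, 0],        kernel [[1, 2, 1],
#                 [0, 1, 2],                [0, 1, 0],
#                 [1, 2, 1]]                [2, 1, 0]]
# 1*1 + 2*2 + 0*1 + 0*0 + 1*1 + 2*0 + 1*2 + 2*1 + 1*0 = 10
patch = input[0, 0, 0:3, 0:3]
print((patch * kernel[0, 0]).sum())  # tensor(10)
```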
If the stride is changed to 2, the kernel moves two positions at a time:
```python
output2 = F.conv2d(input, kernel, stride=2)
print("output2:")
print(output2)
```
Output:
```
output2:
tensor([[[[10, 12],
          [13,  3]]]])
```
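The output size follows the usual convolution formula H_out = (H_in + 2*padding - kernel_size) // stride + 1, which accounts for the 3x3 and 2x2 results above. A small helper to check (the function name is just illustrative):
```python
def conv_out_size(h_in, kernel_size, stride=1, padding=0):
    # standard conv2d output-size formula (dilation ignored)
    return (h_in + 2 * padding - kernel_size) // stride + 1

print(conv_out_size(5, 3, stride=1))  # 3 -> 3x3 output
print(conv_out_size(5, 3, stride=2))  # 2 -> 2x2 output
```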

padding pads the border of the input image with zeros (padding=1 adds one ring of zeros), so the convolution result has larger dimensions:
```python
output3 = F.conv2d(input, kernel, stride=1, padding=1)
print("output3:")
print(output3)
```
Output:
```
output3:
tensor([[[[ 1,  3,  4, 10,  8],
          [ 5, 10, 12, 12,  6],
          [ 7, 18, 16, 16,  8],
          [11, 13,  9,  3,  4],
          [14, 13,  9,  7,  4]]]])
```
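Setting padding=1 is equivalent to zero-padding the input explicitly and then convolving without padding; a quick cross-check with torch.nn.functional.pad, reusing the tensors from above:
```python
padded = F.pad(input, (1, 1, 1, 1))         # one ring of zeros -> shape (1, 1, 7, 7)
check = F.conv2d(padded, kernel, stride=1)
print(torch.equal(check, output3))          # True
```
Note also that the central 3x3 block of output3 matches the unpadded output computed earlier.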