4. Training Function

4.1 Calling the Training Function

```python
train(epochs, net, train_loader, device, optimizer, test_loader, true_value)
```

Because we want to evaluate the network's performance after every epoch, the training function calls the test function repeatedly, so every parameter the test function needs must also be passed into the training function.

These seven parameters are about the minimum required to train a neural network.
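For context, here is a minimal, hypothetical sketch of how these seven arguments might be prepared before the call. It assumes torchvision's MNIST loader, SGD, and a batch size of 64 purely for illustration; the original project reads the data itself, and `true_value` holds the ground-truth test labels used by the test function.

```python
import torch
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical setup -- the original project builds these differently.
epochs = 10
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

transform = transforms.ToTensor()
train_set = datasets.MNIST('data', train=True, download=True, transform=transform)
test_set = datasets.MNIST('data', train=False, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=1000)

net = LeNet(num_classes=10).to(device)  # defined in section 5.2
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
true_value = test_set.targets           # ground-truth test labels

train(epochs, net, train_loader, device, optimizer, test_loader, true_value)
```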
4.2 The Training Function

In the training function, the whole training set is iterated over `epochs` times, and within each epoch the data is processed batch by batch.
```python
import numpy as np
import torch.nn.functional as F

def train(epochs, net, train_loader, device, optimizer, test_loader, true_value):
    for epoch in range(1, epochs + 1):
        net.train()
        all_train_loss = []
        for batch_idx, (data, target) in enumerate(train_loader):
            data = data.to(device)
            target = target.to(device)
            optimizer.zero_grad()
            output = net(data)
            # NB: forward() already applies log_softmax, so F.nll_loss(output, target)
            # would be the exact pairing; cross_entropy applies log_softmax a second
            # time, which still trains in practice.
            loss = F.cross_entropy(output, target)
            loss.backward()
            optimizer.step()
            cur_train_loss = loss.item()
            all_train_loss.append(cur_train_loss)
        train_loss = np.round(np.mean(all_train_loss) * 1000, 2)
        print('\nepoch step:', epoch)
        print('training loss: ', train_loss)
        test(net, test_loader, device, true_value, epoch)
    print("\nTraining finished")
```
- Define the training function
- Iterate over the dataset for `epochs` epochs
- Switch the network into PyTorch's training mode
- `all_train_loss` stores the per-batch losses over the whole training set (60,000 images)
- Fetch the data batch by batch
- Move the data to the GPU
- Move the labels to the GPU
- Zero the gradients
- Feed the current batch through the network to get the output
- Compute the current loss from the output
- Backpropagate
- Take a gradient-descent step
- Extract the scalar loss value from the PyTorch tensor
- Append the current batch's loss to the `all_train_loss` list; this closes the batch loop
- Average the losses over the whole training set; the mean is multiplied by 1000 only because the raw value looked too small, which makes changes easier to read, then rounded to two decimal places (see the sketch after this list)
- Print the current epoch
- Print the loss
- Call the test function to evaluate the network as trained so far; this closes the epoch loop
- Print that training is finished
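As a quick numeric illustration of the averaging and ×1000 scaling described above (the per-batch loss values here are made up):

```python
import numpy as np

# Hypothetical per-batch losses from one epoch (made-up values)
all_train_loss = [0.1534, 0.1478, 0.1622]

train_loss = np.round(np.mean(all_train_loss) * 1000, 2)
print(train_loss)  # 154.47 -- a raw mean of ~0.154 becomes an easy-to-read number
```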
5. LeNet

5.1 Network Structure

LeNet is arguably the model that first introduced the convolutional neural network.

It mainly consists of the following layers:
- 5×5 2-D convolution
- sigmoid activation in the original paper (ReLU is used here), followed by 2×2 max pooling
- 5×5 2-D convolution
- sigmoid activation (again ReLU here), followed by 2×2 max pooling
- flattening of the feature maps
- fully connected layer
- fully connected layer
- softmax classifier
Printing the network structure (via `print(net)`):

```
LeNet(
  (conv1): Conv2d(1, 10, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2_drop): Dropout2d(p=0.5, inplace=False)
  (fc1): Linear(in_features=320, out_features=50, bias=True)
  (fc2): Linear(in_features=50, out_features=10, bias=True)
)
```
5.2 Building LeNet in PyTorch

```python
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self, num_classes):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 1x28x28 -> 10x24x24
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 10x12x12 -> 20x8x8
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)                  # 320 = 20 * 4 * 4
        self.fc2 = nn.Linear(50, num_classes)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))                    # -> 10x12x12
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))   # -> 20x4x4
        x = x.view(-1, 320)                                           # flatten
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
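```

The `in_features=320` of `fc1` comes from tracing a 28×28 input through the network: 28→24 after the first 5×5 convolution, →12 after pooling, →8 after the second convolution, →4 after pooling, giving 20 channels × 4 × 4 = 320. A quick dummy forward pass (illustrative only) confirms the shapes:

```python
import torch

net = LeNet(num_classes=10)
dummy = torch.zeros(1, 1, 28, 28)  # one fake MNIST image
out = net(dummy)
print(out.shape)  # torch.Size([1, 10]) -- one log-probability per class
print(net)        # prints the module structure shown in section 5.1
```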
At this point we have a complete project. Here is the console output of a 10-epoch training run:
```
D:\conda\envs\pytorch\python.exe A:\0_MNIST\train.py
Reading data...
train_data: (60000, 28, 28) train_label (60000,)
test_data: (10000, 28, 28) test_label (10000,)
Initialize neural network

test loss: 2301.68
test accuracy: 11.3 %

epoch step: 1
training loss: 634.74
test loss: 158.03
test accuracy: 95.29 %

epoch step: 2
training loss: 324.04
test loss: 107.62
test accuracy: 96.55 %

epoch step: 3
training loss: 271.25
test loss: 88.43
test accuracy: 97.04 %

epoch step: 4
training loss: 236.69
test loss: 70.94
test accuracy: 97.61 %

epoch step: 5
training loss: 211.05
test loss: 69.69
test accuracy: 97.72 %

epoch step: 6
training loss: 199.28
test loss: 62.04
test accuracy: 97.98 %

epoch step: 7
training loss: 187.11
test loss: 59.65
test accuracy: 97.98 %

epoch step: 8
training loss: 178.79
test loss: 53.89
test accuracy: 98.2 %

epoch step: 9
training loss: 168.75
test loss: 51.83
test accuracy: 98.43 %

epoch step: 10
training loss: 160.83
test loss: 50.35
test accuracy: 98.4 %

Training finished

Process finished with exit code 0
```
As the log shows, a single epoch already gets most of the way there: test accuracy jumps from 11.3% (untrained) to 95.29% after epoch 1, while the remaining nine epochs together add only about three more percentage points.