A Simple Image Classification Project (6) Writing Scripts: Initial Training

The training script, `lib/train.py`, used for training and testing:

```python
import time

import torch
import torch.nn as nn

from load_imags import train_loader, train_num
from nets import *  # expected to provide device, learning_rate, num_epoches and the model factories


def main():
    # Pick a network
    print('Please choose a network:')
    print('1. ResNet18')
    print('2. VGG')

    # Network selection loop
    while True:
        net_choose = input('')
        if net_choose == '1':
            net = resnet18_model().to(device)
            print('You choose ResNet18,now start training')
            break
        elif net_choose == '2':
            net = vgg_model().to(device)
            print('You choose VGG,now start training')
            break
        else:
            print('Please input a correct number!')

    # Loss function and optimizer
    loss_func = nn.CrossEntropyLoss()  # cross-entropy loss
    optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)  # Adam optimizer
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                step_size=5,
                                                gamma=0.9)  # decay: every 5 epochs, multiply the lr by 0.9

    # Training loop
    for epoch in range(num_epoches):
        trained_num = 0  # images seen so far this epoch
        total_correct = 0  # correct predictions this epoch
        print('-' * 100)
        print('Epoch {}/{}'.format(epoch + 1, num_epoches))
        begin_time = time.time()  # epoch start time
        net.train()  # training mode
        for i, (images, labels) in enumerate(train_loader):
            images = images.to(device)  # one batch of image data
            labels = labels.to(device)  # matching batch of labels
            trained_num += images.size(0)  # update the images-seen counter
            outputs = net(images)  # forward pass
            loss = loss_func(outputs, labels)  # compute the loss
            optimizer.zero_grad()  # clear accumulated gradients
            loss.backward()  # backward pass
            optimizer.step()  # update the parameters

            _, predicted = torch.max(outputs.data, 1)  # predicted classes
            correct = predicted.eq(labels).sum().item()  # correct count in this batch
            total_correct += correct  # accumulate the correct count
            if (i + 1) % 50 == 0:  # report every 50 batches
                print('trained: {}/{}'.format(trained_num, train_num))
                print('Loss: {:.4f}, Accuracy: {:.2f}%'.format(loss.item(), 100 * correct / images.size(0)))

        # step the scheduler once per epoch (decay takes effect every 5 epochs)
        scheduler.step()
        end_time = time.time()  # epoch end time
        print('Each train_epoch take time: {} s'.format(end_time - begin_time))
        print('This train_epoch accuracy: {:.2f}%'.format(100 * total_correct / train_num))


if __name__ == '__main__':
    main()
```
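The StepLR schedule configured above multiplies the learning rate by 0.9 once every 5 epochs, so after k completed epochs the effective rate is learning_rate * 0.9^(k // 5). A quick pure-Python sketch of that decay (the 1e-3 starting value is only an illustration; the real learning_rate is defined in `nets`):

```python
def stepped_lr(lr0, epoch, step_size=5, gamma=0.9):
    """Learning rate after `epoch` completed epochs under StepLR."""
    return lr0 * gamma ** (epoch // step_size)

lr0 = 1e-3  # illustrative value only; the actual learning_rate comes from nets
print(stepped_lr(lr0, 0))    # ~ 0.001 (no decay yet)
print(stepped_lr(lr0, 5))    # ~ 0.0009 (first decay)
print(stepped_lr(lr0, 100))  # ~ 1.22e-4 (0.9**20 after 100 epochs)
```

Because the decay is geometric, even 100 epochs only shrinks the rate by about a factor of 8, which keeps training from stalling early.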
Console output:

```
C:\Users\DY\.conda\envs\torch\python.exe E:\AI_test\image_classification\lib\train.py 
Please choose a network:
1. ResNet18
2. VGG
2
You choose VGG,now start training
----------------------------------------------------------------------------------------------------
Epoch 1/100
trained: 6400/50000
Loss: 2.3902, Accuracy: 10.16%
trained: 12800/50000
Loss: 2.3063, Accuracy: 11.72%
trained: 19200/50000
Loss: 2.1875, Accuracy: 18.75%
trained: 25600/50000
Loss: 2.1349, Accuracy: 19.53%
trained: 32000/50000
Loss: 1.9848, Accuracy: 26.56%
trained: 38400/50000
Loss: 2.0000, Accuracy: 16.41%
trained: 44800/50000
Loss: 2.0151, Accuracy: 25.78%
Each train_epoch take time: 71.04850149154663 s
This train_epoch accuracy: 19.34%
----------------------------------------------------------------------------------------------------
Epoch 2/100
trained: 6400/50000
Loss: 1.8815, Accuracy: 28.12%
trained: 12800/50000
Loss: 1.8677, Accuracy: 34.38%
trained: 19200/50000
Loss: 1.7808, Accuracy: 39.06%
trained: 25600/50000
Loss: 1.9118, Accuracy: 29.69%
trained: 32000/50000
Loss: 1.6296, Accuracy: 39.84%
trained: 38400/50000
Loss: 1.6648, Accuracy: 35.94%
trained: 44800/50000
Loss: 1.7854, Accuracy: 33.59%
Each train_epoch take time: 66.71016025543213 s
This train_epoch accuracy: 33.65%
----------------------------------------------------------------------------------------------------
Epoch 3/100
trained: 6400/50000
Loss: 1.4987, Accuracy: 44.53%
trained: 12800/50000
Loss: 1.6677, Accuracy: 41.41%
trained: 19200/50000
Loss: 1.6952, Accuracy: 43.75%
trained: 25600/50000
Loss: 1.6941, Accuracy: 38.28%
trained: 32000/50000
Loss: 1.4057, Accuracy: 49.22%
trained: 38400/50000
Loss: 1.5183, Accuracy: 44.53%
trained: 44800/50000
Loss: 1.6591, Accuracy: 37.50%
Each train_epoch take time: 68.37232995033264 s
This train_epoch accuracy: 41.65%
----------------------------------------------------------------------------------------------------
Epoch 4/100
trained: 6400/50000
Loss: 1.6636, Accuracy: 43.75%
trained: 12800/50000
Loss: 1.5985, Accuracy: 42.19%
trained: 19200/50000
Loss: 1.4054, Accuracy: 52.34%
trained: 25600/50000
Loss: 1.4520, Accuracy: 40.62%
trained: 32000/50000
Loss: 1.4574, Accuracy: 46.09%
trained: 38400/50000
Loss: 1.4711, Accuracy: 42.19%
trained: 44800/50000
Loss: 1.4806, Accuracy: 43.75%
Each train_epoch take time: 68.32443571090698 s
This train_epoch accuracy: 46.48%
----------------------------------------------------------------------------------------------------
Epoch 5/100
trained: 6400/50000
Loss: 1.2265, Accuracy: 57.03%
trained: 12800/50000
Loss: 1.3454, Accuracy: 52.34%
trained: 19200/50000
Loss: 1.3527, Accuracy: 49.22%
trained: 25600/50000
Loss: 1.2874, Accuracy: 53.12%
trained: 32000/50000
Loss: 1.3666, Accuracy: 55.47%
trained: 38400/50000
Loss: 1.4465, Accuracy: 50.00%
trained: 44800/50000
Loss: 1.2802, Accuracy: 52.34%
Each train_epoch take time: 68.22098922729492 s
This train_epoch accuracy: 50.72%
----------------------------------------------------------------------------------------------------
Epoch 6/100
trained: 6400/50000
Loss: 1.3402, Accuracy: 51.56%
trained: 12800/50000
Loss: 1.2873, Accuracy: 53.91%
trained: 19200/50000
Loss: 1.3183, Accuracy: 52.34%
trained: 25600/50000
Loss: 1.3688, Accuracy: 48.44%
trained: 32000/50000
Loss: 1.2143, Accuracy: 55.47%
trained: 38400/50000
Loss: 1.2132, Accuracy: 56.25%
trained: 44800/50000
Loss: 1.3172, Accuracy: 53.12%
Each train_epoch take time: 68.76534986495972 s
This train_epoch accuracy: 54.53%
----------------------------------------------------------------------------------------------------
Epoch 7/100
trained: 6400/50000
Loss: 1.3156, Accuracy: 53.12%
trained: 12800/50000
Loss: 1.1412, Accuracy: 60.16%
trained: 19200/50000
Loss: 1.1978, Accuracy: 57.03%
trained: 25600/50000
Loss: 1.0312, Accuracy: 55.47%
trained: 32000/50000
Loss: 1.3486, Accuracy: 50.00%
trained: 38400/50000
Loss: 1.1591, Accuracy: 60.16%
trained: 44800/50000
Loss: 1.0707, Accuracy: 63.28%
Each train_epoch take time: 68.1180489063263 s
This train_epoch accuracy: 56.99%
----------------------------------------------------------------------------------------------------
Epoch 8/100
```

As the log shows, the model is gradually converging. The next step is to extend the training script with testing code.
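For reference, the evaluation loop to be added could look roughly like the sketch below. The `test_loader` name is hypothetical (a loader analogous to `train_loader`, not yet defined in the project); the key differences from training are `net.eval()` and `torch.no_grad()`:

```python
import torch

def evaluate(net, loader, device):
    """Return accuracy (%) of `net` over all batches in `loader`."""
    net.eval()  # switch off dropout, use running batch-norm statistics
    total_correct, total_seen = 0, 0
    with torch.no_grad():  # no gradients needed during evaluation
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)  # forward pass only
            _, predicted = torch.max(outputs, 1)  # predicted classes
            total_correct += predicted.eq(labels).sum().item()
            total_seen += labels.size(0)
    return 100.0 * total_correct / total_seen
```

Calling something like `evaluate(net, test_loader, device)` at the end of each epoch would then report test accuracy alongside the training numbers.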
