Building and Training Image Classification Models with PyTorch

1.AlexNet:

1.1. Import the required libraries:

python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt

1.2. Data preprocessing and augmentation:

python
transform = transforms.Compose([
    transforms.Resize((227, 227)),  # AlexNet expects 227x227 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # ImageNet normalization statistics commonly used with AlexNet
])
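
Note that the heading mentions augmentation, but the transform above only resizes and normalizes. If you do want augmentation, a common approach (not part of the original code) is to use a separate transform for the training data, for example by building a second ImageFolder with it; a minimal sketch:

python
# Hypothetical training-time augmentation pipeline; apply it to the training data only
# and keep the plain transform above for the test data.
train_transform = transforms.Compose([
    transforms.Resize((227, 227)),
    transforms.RandomHorizontalFlip(),   # flip left-right with probability 0.5
    transforms.RandomRotation(15),       # rotate by up to +/-15 degrees
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])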

1.3. Load the dataset:

python
data_path = 'D:/工坊/Pytorch的框架/flower_photos'
dataset = datasets.ImageFolder(data_path, transform=transform)

1.4. Split into training and test sets:

python
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size

train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])

1.5. Create the data loaders:

python
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)

1.6. Load the AlexNet model:

python
model = models.alexnet(pretrained=True)
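
Note: in recent torchvision releases (0.13 and later) the pretrained argument is deprecated in favor of a weights argument. If you are on a newer version, the equivalent call is roughly the following (the same pattern applies to the resnet18/vgg16/vgg19 calls later in this post):

python
# Equivalent loading on torchvision >= 0.13
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)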

1.7. Adapt the model to the number of classes in your dataset:

python
num_classes = len(dataset.classes)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

1.8. Define the loss function and optimizer:

python
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())

1.9. Move the model to the GPU (if available):

python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

1.10. Initialize lists to store the loss and accuracy for each epoch:

python
train_losses = []
train_accuracies = []

1.11. Train the model:

python
num_epochs = 50
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    epoch_loss = running_loss / len(train_loader)
    epoch_accuracy = 100 * correct / total
    train_losses.append(epoch_loss)
    train_accuracies.append(epoch_accuracy)
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}%')

Output: the loss and accuracy for each epoch, as printed by the loop above.
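
The test_loader created in step 1.5 is not used by the training loop. A minimal evaluation sketch (not part of the original code) that measures accuracy on the held-out split could look like this:

python
# Evaluate on the test split; no gradients are needed here
model.eval()
correct = 0
total = 0
with torch.no_grad():
    for inputs, labels in test_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print(f'Test Accuracy: {100 * correct / total:.2f}%')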

1.12. Plot the training loss and accuracy curves:

python
# Create the figure
plt.figure(figsize=(10, 5))
# Plot the loss
plt.subplot(1, 2, 1)
plt.plot(range(1, len(train_losses) + 1), train_losses, 'bo-', label='Training Loss')
plt.title('Training Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
# Plot the accuracy
plt.subplot(1, 2, 2)
plt.plot(range(1, len(train_accuracies) + 1), train_accuracies, 'ro-', label='Training Accuracy')
plt.title('Training Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy (%)')
plt.legend()
# Display the figure
plt.tight_layout()
plt.show()

2.LeNet-5:

2.1. Import the required libraries:

python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt

2.2. Data preprocessing and augmentation:

python
# Data preprocessing
transform = transforms.Compose([
    transforms.Resize((227, 227)),  # 227x227 inputs so the flattened feature size matches the first linear layer below
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # ImageNet normalization statistics
])

2.3. Load the dataset:

python
data_path = 'D:/工坊/Pytorch的框架/flower_photos'
dataset = datasets.ImageFolder(data_path, transform=transform)

2.4. Split into training and test sets:

python
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size

train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])

2.5. Create the data loaders:

python
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)

2.6. Define the LeNet-5 model structure:

  • Two convolutional layers followed by three fully connected layers (a check of the flattened feature size follows the class definition below).
python
class LeNet5(nn.Module):
    def __init__(self, num_classes):
        super(LeNet5, self).__init__()
        self.conv_net = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(6, 16, kernel_size=5),
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=2, stride=2)
        )
        self.fc_net = nn.Sequential(
            nn.Linear(44944, 120),  # 16 * 53 * 53 = 44944 for 227x227 inputs; adjust if the input size changes
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes)
        )
    
    def forward(self, x):
        x = self.conv_net(x)
        x = x.view(x.size(0), -1)  # Flatten the convolutional feature maps
        x = self.fc_net(x)
        return x
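
The 44944 above comes from the 227x227 input: each 5x5 convolution removes 4 pixels per side and each 2x2 average pool halves the spatial size (227 -> 223 -> 111 -> 107 -> 53), so the flattened size is 16 * 53 * 53 = 44944. A quick sanity check with a dummy tensor (optional, not required for training):

python
# Verify the flattened feature size for a 227x227 RGB input
with torch.no_grad():
    dummy = torch.zeros(1, 3, 227, 227)
    feats = LeNet5(num_classes=2).conv_net(dummy)
    print(feats.shape, feats.view(1, -1).size(1))  # torch.Size([1, 16, 53, 53]) 44944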

2.7. Instantiate the LeNet-5 model:

python
num_classes = len(dataset.classes)
model = LeNet5(num_classes)

2.8. Define the loss function and optimizer:

python
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())

2.9. Move the model to the GPU (if available):

python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

2.10. Initialize lists to store the loss and accuracy for each epoch:

python
train_losses = []
train_accuracies = []

2.11. Train the model:

python
num_epochs = 50
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    epoch_loss = running_loss / len(train_loader)
    epoch_accuracy = 100 * correct / total
    train_losses.append(epoch_loss)
    train_accuracies.append(epoch_accuracy)
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}%')

3.ResNet:

3.1. Import the required libraries:

python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt

3.2. Data preprocessing and augmentation:

python
# Data preprocessing
transform = transforms.Compose([
    transforms.Resize((224, 224)),  # Resize to 224x224, the standard ResNet input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # ImageNet normalization statistics used with ResNet
])

3.3. Load the dataset:

python
# Load the dataset
data_path = 'D:/工坊/深度学习/img/weather_photos'
dataset = datasets.ImageFolder(data_path, transform=transform)

3.4. Split into training and test sets:

python
# Split into training and test sets
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size

train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])

3.5. Create the data loaders:

  • Provide batched loading and random shuffling for the dataset.
python
# Create the data loaders
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)

3.6. Use the ResNet-18 model:

  • models.resnet18(pretrained=True) loads a ResNet-18 pretrained on ImageNet; the final fully connected layer is then replaced (step 3.7) so that its output size matches the number of classes in your dataset.
python
# Use the ResNet-18 model
model = models.resnet18(pretrained=True)

3.7. Replace the fully connected layer to match the dataset:

python
num_classes = len(dataset.classes)  # dataset is the ImageFolder defined earlier
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, num_classes)
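
Optionally (this is not part of the original recipe), when fine-tuning a pretrained ResNet on a small dataset you can freeze the backbone and train only the newly added fully connected layer, which is faster and less prone to overfitting:

python
# Optional transfer-learning variant: freeze all pretrained layers,
# train only the new final layer
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True
# If you do this, build the optimizer in step 3.8 over the trainable parameters only, e.g.:
# optimizer = optim.Adam(model.fc.parameters())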

3.8. Define the loss function and optimizer:

python
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())

3.9. Move the model to the GPU (if available):

  • Check whether a GPU is available; if so, move the model (and later the data) to it to speed up training.
python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

3.10. Initialize lists to store the loss and accuracy for each epoch:

  • Used to monitor the loss and accuracy during training.
python
train_losses = []
train_accuracies = []

3.11. Train the model and print the results:

  • Iterate over several epochs; in each epoch, loop over the training set, run the forward pass, compute the loss, backpropagate, update the parameters, and then compute the epoch's average loss and accuracy.
  • The loss and accuracy for each epoch are printed so the training process can be monitored.
python
num_epochs = 50
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    epoch_loss = running_loss / len(train_loader)
    epoch_accuracy = 100 * correct / total
    train_losses.append(epoch_loss)
    train_accuracies.append(epoch_accuracy)
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}%')

4.VGG-16:

4.1. Import the required libraries:

  • Used to build and train the network and to handle the image data.
python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt

4.2. Data preprocessing and augmentation:

  • Uses the standard normalization parameters for VGG16.
python
transform = transforms.Compose([
    transforms.Resize((224, 224)),  # VGG16 expects 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # ImageNet normalization statistics used with VGG16
])

4.3. Load the dataset:

  • Load the dataset from the specified path.
python
data_path = 'D:/工坊/Pytorch的框架/flower_photos'
dataset = datasets.ImageFolder(data_path, transform=transform)

4.4. Split into training and test sets:

  • Randomly split the dataset into a training set and a test set.
python
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size

train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])

4.5. Create the data loaders:

  • Create data loaders for the training and test sets.
python
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)

4.6. Load the VGG16 model:

  • Use models.vgg16(pretrained=True) to load a pretrained VGG16.
python
model = models.vgg16(pretrained=True)

4.7. Adapt the model to the number of classes in your dataset:

  • Replace the last layer of the VGG16 classifier so its output size matches the number of classes in your dataset.
python
num_classes = len(dataset.classes)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

4.8. Define the loss function and optimizer:

  • Use the cross-entropy loss and the Adam optimizer.
python
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
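
Adam defaults to a learning rate of 1e-3; when fine-tuning a large pretrained network such as VGG16, a smaller learning rate is often more stable. A possible alternative (a suggestion, not part of the original code):

python
# Hypothetical alternative: a lower learning rate for fine-tuning the pretrained weights
optimizer = optim.Adam(model.parameters(), lr=1e-4)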

4.9. Move the model to the GPU (if available):

  • Check whether a GPU is available; if so, move the model and data to it to speed up training.
python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

4.10. Initialize lists to store the loss and accuracy for each epoch:

  • Used to monitor the loss and accuracy during training.
python
train_losses = []
train_accuracies = []

4.11. Train the model and print the results:

  • The loss and accuracy for each epoch are printed so the training process can be monitored.
python
num_epochs = 1000
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    epoch_loss = running_loss / len(train_loader)
    epoch_accuracy = 100 * correct / total
    train_losses.append(epoch_loss)
    train_accuracies.append(epoch_accuracy)
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}%')
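
With num_epochs set to 1000 the run is long, so it is worth persisting the weights once training finishes. A minimal sketch for saving and reloading the fine-tuned model (the file name is arbitrary):

python
# Save the trained weights (state_dict only)
torch.save(model.state_dict(), 'vgg16_flowers.pth')  # 'vgg16_flowers.pth' is a hypothetical file name

# To reload: rebuild the same architecture, then load the saved weights
# model = models.vgg16(pretrained=False)
# model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
# model.load_state_dict(torch.load('vgg16_flowers.pth'))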

5.VGG-19:

5.1. Import the required libraries:

  • Used to build and train the network and to handle the image data.
python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt

5.2. Data preprocessing and augmentation:

  • Uses the standard normalization parameters for VGG19.
python
transform = transforms.Compose([
    transforms.Resize((224, 224)),  # VGG19 expects 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # ImageNet normalization statistics used with VGG19
])

5.3. Load the dataset:

  • Load the dataset from the specified path.
python
data_path = 'D:/工坊/Pytorch的框架/flower_photos'
dataset = datasets.ImageFolder(data_path, transform=transform)

5.4. Split into training and test sets:

  • Randomly split the dataset into a training set and a test set.
python
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size

train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])

5.5. Create the data loaders:

  • Create data loaders for the training and test sets.
python
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)

5.6. Load the VGG19 model:

  • Use models.vgg19(pretrained=True) to load a pretrained VGG19.
python
model = models.vgg19(pretrained=True)

5.7. Adapt the model to the number of classes in your dataset:

  • Replace the last layer of the VGG19 classifier so its output size matches the number of classes in your dataset.
python
num_classes = len(dataset.classes)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

5.8. Define the loss function and optimizer:

  • Use the cross-entropy loss and the Adam optimizer.
python
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())

5.9. Move the model to the GPU (if available):

  • Check whether a GPU is available; if so, move the model and data to it to speed up training.
python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

5.10. Initialize lists to store the loss and accuracy for each epoch:

  • Used to monitor the loss and accuracy during training.
python
train_losses = []
train_accuracies = []

5.11. Train the model and print the results:

  • The loss and accuracy for each epoch are printed so the training process can be monitored.
python
num_epochs = 1000
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    epoch_loss = running_loss / len(train_loader)
    epoch_accuracy = 100 * correct / total
    train_losses.append(epoch_loss)
    train_accuracies.append(epoch_accuracy)
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}%')