Neural Networks: Phone Price Classification

Xiao Ming founded a phone company, but he doesn't know how to price his products. To solve this, he collected sales data from many phone manufacturers. We need to help him find the relationship between a phone's features (e.g., RAM) and its selling price. We can treat this as a machine learning problem and build a fully connected network. Note that we do not predict the actual price but a price range, encoded as 0, 1, 2, or 3, so this is a classification problem.

1. Building the Dataset

The data contains 2,000 rows: 1,600 for training and 400 for testing. We use sklearn's dataset-splitting utility to divide the data, and wrap the resulting arrays in PyTorch `TensorDataset` objects so that data loaders are easy to construct.

Code implementation:

# 1. Build the dataset
def create_dataset():
    data = pd.read_csv('手机价格预测.csv')

    # Split into features and target
    x, y = data.iloc[:, :-1], data.iloc[:, -1]
    x = x.astype(np.float32)
    y = y.astype(np.int64)

    # Train/validation split (stratified to preserve class proportions)
    x_train, x_valid, y_train, y_valid = \
        train_test_split(x, y, train_size=0.8, random_state=88, stratify=y)

    # Standardize the features: fit on the training set only,
    # then apply the same statistics to the validation set
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_valid = transfer.transform(x_valid)

    # Build Dataset objects.
    # Note: StandardScaler returns NumPy arrays, so x_train no longer has a
    # .values attribute, while the labels are still pandas Series.
    # train_dataset = TensorDataset(torch.from_numpy(x_train.values),   # wrong
    #                               torch.tensor(y_train.values))
    train_dataset = TensorDataset(torch.from_numpy(x_train).float(),
                                  torch.tensor(y_train.values))
    valid_dataset = TensorDataset(torch.from_numpy(x_valid).float(),
                                  torch.tensor(y_valid.values))

    return train_dataset, valid_dataset, x_train.shape[1], len(np.unique(y))


train_dataset, valid_dataset, input_dim, class_num = create_dataset()
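Before training it is worth sanity-checking the shapes and dtypes that come out of this pipeline. The sketch below is self-contained: it uses random synthetic data (2,000 rows, 20 features, 4 classes, standing in for the CSV columns) rather than the real file, so the exact feature count is an assumption.

```python
import numpy as np
import torch
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from torch.utils.data import TensorDataset

# Synthetic stand-in for the CSV: 2000 rows, 20 features, 4 price classes
rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 20)).astype(np.float32)
y = rng.integers(0, 4, size=2000)

x_train, x_valid, y_train, y_valid = train_test_split(
    x, y, train_size=0.8, random_state=88, stratify=y)

scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)   # fit on the training split only
x_valid = scaler.transform(x_valid)       # reuse the training statistics

train_ds = TensorDataset(torch.from_numpy(x_train).float(),
                         torch.tensor(y_train))
print(len(train_ds))                     # 1600 training samples
features, label = train_ds[0]
print(features.shape, features.dtype)    # torch.Size([20]) torch.float32
```

Indexing a `TensorDataset` returns one `(features, label)` pair, which is exactly what `DataLoader` batches later.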

2. Building the Classification Model

The model we build for phone price classification is a fully connected neural network. It consists of three linear layers; after each of the first two we apply a sigmoid activation, while the final layer outputs raw logits.

# Network model
class PhonePriceModel(nn.Module):

    def __init__(self, input_dim, output_dim):
        super(PhonePriceModel, self).__init__()

        self.linear1 = nn.Linear(input_dim, 128)
        self.linear2 = nn.Linear(128, 256)
        self.linear3 = nn.Linear(256, output_dim)

    def _activation(self, x):
        return torch.sigmoid(x)

    def forward(self, x):
        x = self._activation(self.linear1(x))
        x = self._activation(self.linear2(x))
        output = self.linear3(x)

        return output
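A quick shape check of the forward pass clarifies the data flow. This is a sketch; the 20 input features and 4 classes below are assumptions matching the phone-price data, and the model is restated inline so the snippet is self-contained.

```python
import torch
import torch.nn as nn

class PhonePriceModel(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.linear1 = nn.Linear(input_dim, 128)
        self.linear2 = nn.Linear(128, 256)
        self.linear3 = nn.Linear(256, output_dim)

    def forward(self, x):
        x = torch.sigmoid(self.linear1(x))
        x = torch.sigmoid(self.linear2(x))
        return self.linear3(x)   # raw logits, no softmax here

model = PhonePriceModel(input_dim=20, output_dim=4)
logits = model(torch.randn(8, 20))   # a batch of 8 samples
print(logits.shape)                  # torch.Size([8, 4])
```

Returning raw logits is deliberate: `nn.CrossEntropyLoss` expects unnormalized scores and applies log-softmax internally, so adding a softmax in `forward` would be redundant for training.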

3. Writing the Training Function

# Training function

def train():
    # Fix the random seed for reproducibility
    torch.manual_seed(0)

    # Initialize the model
    model = PhonePriceModel(input_dim, class_num)
    # Loss function (expects raw logits)
    criterion = nn.CrossEntropyLoss()
    # Optimizer
    optimizer = optim.SGD(model.parameters(), lr=1e-2)
    # Number of epochs
    num_epoch = 150

    for epoch_idx in range(num_epoch):

        # Data loader (reshuffled every epoch)
        dataloader = DataLoader(train_dataset, shuffle=True, batch_size=4)
        # Epoch timer
        start = time.time()
        # Running loss statistics
        total_loss = 0.0
        total_num = 0

        for x, y in dataloader:
            output = model(x)
            # Compute the loss
            loss = criterion(output, y)
            # Zero the gradients
            optimizer.zero_grad()
            # Backpropagation
            loss.backward()
            # Parameter update
            optimizer.step()

            total_num += len(y)
            total_loss += loss.item() * len(y)

        print('epoch: %4s loss: %.2f, time: %.2fs' %
              (epoch_idx + 1, total_loss / total_num, time.time() - start))

    # Save the model weights
    torch.save(model.state_dict(), 'phone-price-modelv2.0.bin')

4. Writing the Inference Function

def test():
    # Load the trained model
    model = PhonePriceModel(input_dim, class_num)
    model.load_state_dict(torch.load('phone-price-modelv2.0.bin', weights_only=True))
    model.eval()

    # Build the loader
    dataloader = DataLoader(valid_dataset, batch_size=8, shuffle=False)

    # Evaluate on the validation set
    correct = 0
    with torch.no_grad():
        for x, y in dataloader:
            output = model(x)
            y_pred = torch.argmax(output, dim=1)
            correct += (y_pred == y).sum()

    print('Acc: %.5f' % (correct.item() / len(valid_dataset)))
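If per-class probabilities are needed for a single phone rather than just accuracy, the logits can be passed through a softmax. A minimal sketch with hypothetical logit values (not output from the trained model):

```python
import torch
import torch.nn.functional as F

# Hypothetical logits for one sample, as model(x) would produce
logits = torch.tensor([[0.2, 2.5, 0.1, -1.0]])

probs = F.softmax(logits, dim=1)    # normalized class probabilities
pred = torch.argmax(probs, dim=1)   # same index as argmax over raw logits
print(pred.item())                  # predicted price range: 1
```

Because softmax is monotonic, taking `argmax` over logits (as `test` does) gives the same prediction; softmax is only needed when the probabilities themselves matter.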

5. Network Performance Tuning

  1. Standardize the input data

  2. Try a different optimization method

  3. Tune the learning rate

  4. Add batch normalization layers

  5. Increase the number of layers and neurons

  6. Train for more epochs
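Several of these ideas can be combined. The sketch below is one possible variant, not the author's tuned configuration: it keeps the original layer sizes but swaps sigmoid for ReLU, inserts `BatchNorm1d` after each hidden layer (item 4), and uses Adam instead of SGD (item 2).

```python
import torch
import torch.nn as nn
import torch.optim as optim

class PhonePriceModelBN(nn.Module):
    """Hypothetical variant of PhonePriceModel with batch norm and ReLU."""
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.BatchNorm1d(128),   # normalizes pre-activations per batch
            nn.ReLU(),
            nn.Linear(128, 256),
            nn.BatchNorm1d(256),
            nn.ReLU(),
            nn.Linear(256, output_dim),   # raw logits
        )

    def forward(self, x):
        return self.net(x)

model = PhonePriceModelBN(input_dim=20, output_dim=4)
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # adaptive per-parameter steps

out = model(torch.randn(16, 20))
print(out.shape)          # torch.Size([16, 4])
```

Note that `BatchNorm1d` needs more than one sample per batch in training mode, and that calling `model.eval()` before inference becomes mandatory once batch norm is present, since it switches the layer to its running statistics.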

Complete code implementation:

"""
无标准化 + lr_1e-3 Acc: 0.51250
数据标准化 + lr_1e-3 Acc: 0.29000
数据标准化 + lr_1e-2 Acc: 0.96750 ⭕
标准化前: 🏔️ 陡峭崎岖 -> 需要小心前进 (小lr)
标准化后: 🏞️ 平坦规整 -> 可以大步伐前进 (大lr)

数据标准化 + lr_5e-2 Acc: 0.95750
数据标准化 + lr_5e-1 Acc: 0.94000(最后面的epoch训练的loss为0.00,严重过拟合)
数据标准化 + lr_5e-1 + epoch_30 Acc: 0.93750
数据标准化 + lr_5e-1 + epoch_20 Acc: 0.91500
数据标准化 + lr_5e-1 + epoch_35 Acc: 0.94000
数据标准化 + lr_2e-2 Acc: 0.96500
数据标准化 + lr_1e-2 + batchsize_64 Acc: 0.27750
数据标准化 + lr_1e-2 + batchsize_4 Acc: 0.96250
数据标准化 + lr_1e-2 + batchsize_4 + epoch_70 Acc: 0.96000
数据标准化 + lr_1e-2 + batchsize_4 + epoch_150 Acc: 0.97000 ⭕
数据标准化 + lr_1e-2 + batchsize_4 + epoch_200 Acc: 0.96500

"""
import time

import torch
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from torch.utils.data import TensorDataset, DataLoader
import torch.optim as optim


# 1. Build the dataset
def create_dataset():
    data = pd.read_csv('手机价格预测.csv')

    # Split into features and target
    x, y = data.iloc[:, :-1], data.iloc[:, -1]
    x = x.astype(np.float32)
    y = y.astype(np.int64)

    # Train/validation split (stratified to preserve class proportions)
    x_train, x_valid, y_train, y_valid = \
        train_test_split(x, y, train_size=0.8, random_state=88, stratify=y)

    # Standardize the features: fit on the training set only,
    # then apply the same statistics to the validation set
    transfer = StandardScaler()
    x_train = transfer.fit_transform(x_train)
    x_valid = transfer.transform(x_valid)

    # Build Dataset objects.
    # Note: StandardScaler returns NumPy arrays, so x_train no longer has a
    # .values attribute, while the labels are still pandas Series.
    # train_dataset = TensorDataset(torch.from_numpy(x_train.values),   # wrong
    #                               torch.tensor(y_train.values))
    train_dataset = TensorDataset(torch.from_numpy(x_train).float(),
                                  torch.tensor(y_train.values))
    valid_dataset = TensorDataset(torch.from_numpy(x_valid).float(),
                                  torch.tensor(y_valid.values))

    return train_dataset, valid_dataset, x_train.shape[1], len(np.unique(y))


train_dataset, valid_dataset, input_dim, class_num = create_dataset()


# 2. Build the classification model
# The model is a fully connected neural network built from three linear layers,
# with a sigmoid activation after each of the first two; the last layer
# outputs raw logits.
class PhonePriceModel(nn.Module):

    def __init__(self, input_dim, output_dim):
        super(PhonePriceModel, self).__init__()

        self.linear1 = nn.Linear(input_dim, 128)
        self.linear2 = nn.Linear(128, 256)
        self.linear3 = nn.Linear(256, output_dim)

    def _activation(self, x):
        return torch.sigmoid(x)

    def forward(self, x):
        x = self._activation(self.linear1(x))
        x = self._activation(self.linear2(x))
        output = self.linear3(x)

        return output


# 3. Training function

def train():
    # Fix the random seed for reproducibility
    torch.manual_seed(0)

    # Initialize the model
    model = PhonePriceModel(input_dim, class_num)
    # Loss function (expects raw logits)
    criterion = nn.CrossEntropyLoss()
    # Optimizer
    optimizer = optim.SGD(model.parameters(), lr=1e-2)
    # Number of epochs
    num_epoch = 150

    for epoch_idx in range(num_epoch):

        # Data loader (reshuffled every epoch)
        dataloader = DataLoader(train_dataset, shuffle=True, batch_size=4)
        # Epoch timer
        start = time.time()
        # Running loss statistics
        total_loss = 0.0
        total_num = 0

        for x, y in dataloader:
            output = model(x)
            # Compute the loss
            loss = criterion(output, y)
            # Zero the gradients
            optimizer.zero_grad()
            # Backpropagation
            loss.backward()
            # Parameter update
            optimizer.step()

            total_num += len(y)
            total_loss += loss.item() * len(y)

        print('epoch: %4s loss: %.2f, time: %.2fs' %
              (epoch_idx + 1, total_loss / total_num, time.time() - start))

    # Save the model weights
    torch.save(model.state_dict(), 'phone-price-modelv2.0.bin')


# 4. Evaluation function

def test():
    # Load the trained model
    model = PhonePriceModel(input_dim, class_num)
    model.load_state_dict(torch.load('phone-price-modelv2.0.bin', weights_only=True))
    model.eval()

    # Build the loader
    dataloader = DataLoader(valid_dataset, batch_size=8, shuffle=False)

    # Evaluate on the validation set
    correct = 0
    with torch.no_grad():
        for x, y in dataloader:
            output = model(x)
            y_pred = torch.argmax(output, dim=1)
            correct += (y_pred == y).sum()

    print('Acc: %.5f' % (correct.item() / len(valid_dataset)))


if __name__ == '__main__':
    train()
    test()

Output:

(mlstat) [haichao@node01 transformer_code]$ python demo17.py
epoch: 1 loss: 1.42, time: 0.70s
epoch: 2 loss: 1.40, time: 0.30s
epoch: 3 loss: 1.40, time: 0.26s
epoch: 4 loss: 1.39, time: 0.26s
epoch: 5 loss: 1.37, time: 0.28s
epoch: 6 loss: 1.35, time: 0.32s
epoch: 7 loss: 1.31, time: 0.28s
epoch: 8 loss: 1.25, time: 0.27s
epoch: 9 loss: 1.16, time: 0.27s
epoch: 10 loss: 1.04, time: 0.27s
epoch: 11 loss: 0.91, time: 0.27s
epoch: 12 loss: 0.80, time: 0.26s
epoch: 13 loss: 0.70, time: 0.26s
epoch: 14 loss: 0.62, time: 0.26s
epoch: 15 loss: 0.56, time: 0.26s
epoch: 16 loss: 0.50, time: 0.26s
epoch: 17 loss: 0.45, time: 0.26s
epoch: 18 loss: 0.42, time: 0.27s
epoch: 19 loss: 0.39, time: 0.26s
epoch: 20 loss: 0.36, time: 0.26s
epoch: 21 loss: 0.33, time: 0.26s
epoch: 22 loss: 0.31, time: 0.26s
epoch: 23 loss: 0.29, time: 0.26s
epoch: 24 loss: 0.28, time: 0.26s
epoch: 25 loss: 0.26, time: 0.26s
epoch: 26 loss: 0.25, time: 0.26s
epoch: 27 loss: 0.24, time: 0.26s
epoch: 28 loss: 0.23, time: 0.26s
epoch: 29 loss: 0.22, time: 0.26s
epoch: 30 loss: 0.22, time: 0.26s
epoch: 31 loss: 0.21, time: 0.26s
epoch: 32 loss: 0.20, time: 0.26s
epoch: 33 loss: 0.20, time: 0.26s
epoch: 34 loss: 0.19, time: 0.26s
epoch: 35 loss: 0.19, time: 0.26s
epoch: 36 loss: 0.18, time: 0.27s
epoch: 37 loss: 0.18, time: 0.26s
epoch: 38 loss: 0.17, time: 0.27s
epoch: 39 loss: 0.17, time: 0.30s
epoch: 40 loss: 0.17, time: 0.32s
epoch: 41 loss: 0.16, time: 0.26s
epoch: 42 loss: 0.16, time: 0.27s
epoch: 43 loss: 0.16, time: 0.27s
epoch: 44 loss: 0.15, time: 0.27s
epoch: 45 loss: 0.15, time: 0.27s
epoch: 46 loss: 0.15, time: 0.27s
epoch: 47 loss: 0.15, time: 0.27s
epoch: 48 loss: 0.15, time: 0.28s
epoch: 49 loss: 0.14, time: 0.27s
epoch: 50 loss: 0.14, time: 0.27s
epoch: 51 loss: 0.14, time: 0.27s
epoch: 52 loss: 0.14, time: 0.30s
epoch: 53 loss: 0.13, time: 0.42s
epoch: 54 loss: 0.13, time: 0.38s
epoch: 55 loss: 0.13, time: 0.26s
epoch: 56 loss: 0.13, time: 0.27s
epoch: 57 loss: 0.13, time: 0.34s
epoch: 58 loss: 0.13, time: 0.27s
epoch: 59 loss: 0.12, time: 0.27s
epoch: 60 loss: 0.12, time: 0.26s
epoch: 61 loss: 0.12, time: 0.26s
epoch: 62 loss: 0.12, time: 0.31s
epoch: 63 loss: 0.12, time: 0.27s
epoch: 64 loss: 0.12, time: 0.27s
epoch: 65 loss: 0.12, time: 0.27s
epoch: 66 loss: 0.12, time: 0.31s
epoch: 67 loss: 0.11, time: 0.26s
epoch: 68 loss: 0.11, time: 0.26s
epoch: 69 loss: 0.11, time: 0.26s
epoch: 70 loss: 0.11, time: 0.26s
epoch: 71 loss: 0.11, time: 0.35s
epoch: 72 loss: 0.11, time: 0.27s
epoch: 73 loss: 0.11, time: 0.27s
epoch: 74 loss: 0.11, time: 0.27s
epoch: 75 loss: 0.11, time: 0.27s
epoch: 76 loss: 0.11, time: 0.27s
epoch: 77 loss: 0.10, time: 0.27s
epoch: 78 loss: 0.10, time: 0.27s
epoch: 79 loss: 0.10, time: 0.27s
epoch: 80 loss: 0.10, time: 0.35s
epoch: 81 loss: 0.10, time: 0.27s
epoch: 82 loss: 0.10, time: 0.27s
epoch: 83 loss: 0.10, time: 0.27s
epoch: 84 loss: 0.10, time: 0.27s
epoch: 85 loss: 0.10, time: 0.27s
epoch: 86 loss: 0.10, time: 0.27s
epoch: 87 loss: 0.10, time: 0.27s
epoch: 88 loss: 0.10, time: 0.27s
epoch: 89 loss: 0.10, time: 0.27s
epoch: 90 loss: 0.10, time: 0.27s
epoch: 91 loss: 0.09, time: 0.27s
epoch: 92 loss: 0.09, time: 0.27s
epoch: 93 loss: 0.09, time: 0.27s
epoch: 94 loss: 0.09, time: 0.27s
epoch: 95 loss: 0.09, time: 0.26s
epoch: 96 loss: 0.09, time: 0.26s
epoch: 97 loss: 0.09, time: 0.26s
epoch: 98 loss: 0.09, time: 0.26s
epoch: 99 loss: 0.09, time: 0.27s
epoch: 100 loss: 0.09, time: 0.27s
epoch: 101 loss: 0.09, time: 0.26s
epoch: 102 loss: 0.09, time: 0.26s
epoch: 103 loss: 0.09, time: 0.26s
epoch: 104 loss: 0.09, time: 0.26s
epoch: 105 loss: 0.09, time: 0.52s
epoch: 106 loss: 0.09, time: 0.45s
epoch: 107 loss: 0.09, time: 0.32s
epoch: 108 loss: 0.09, time: 0.26s
epoch: 109 loss: 0.08, time: 0.26s
epoch: 110 loss: 0.08, time: 0.26s
epoch: 111 loss: 0.08, time: 0.26s
epoch: 112 loss: 0.08, time: 0.26s
epoch: 113 loss: 0.08, time: 0.26s
epoch: 114 loss: 0.08, time: 0.26s
epoch: 115 loss: 0.08, time: 0.26s
epoch: 116 loss: 0.08, time: 0.26s
epoch: 117 loss: 0.08, time: 0.26s
epoch: 118 loss: 0.08, time: 0.26s
epoch: 119 loss: 0.08, time: 0.26s
epoch: 120 loss: 0.08, time: 0.26s
epoch: 121 loss: 0.08, time: 0.26s
epoch: 122 loss: 0.08, time: 0.26s
epoch: 123 loss: 0.08, time: 0.26s
epoch: 124 loss: 0.08, time: 0.26s
epoch: 125 loss: 0.08, time: 0.26s
epoch: 126 loss: 0.08, time: 0.26s
epoch: 127 loss: 0.08, time: 0.26s
epoch: 128 loss: 0.08, time: 0.26s
epoch: 129 loss: 0.08, time: 0.26s
epoch: 130 loss: 0.07, time: 0.26s
epoch: 131 loss: 0.07, time: 0.26s
epoch: 132 loss: 0.07, time: 0.26s
epoch: 133 loss: 0.07, time: 0.26s
epoch: 134 loss: 0.07, time: 0.27s
epoch: 135 loss: 0.07, time: 0.26s
epoch: 136 loss: 0.07, time: 0.26s
epoch: 137 loss: 0.07, time: 0.27s
epoch: 138 loss: 0.07, time: 0.27s
epoch: 139 loss: 0.07, time: 0.26s
epoch: 140 loss: 0.07, time: 0.26s
epoch: 141 loss: 0.07, time: 0.26s
epoch: 142 loss: 0.07, time: 0.26s
epoch: 143 loss: 0.07, time: 0.31s
epoch: 144 loss: 0.07, time: 0.45s
epoch: 145 loss: 0.07, time: 0.45s
epoch: 146 loss: 0.07, time: 0.37s
epoch: 147 loss: 0.07, time: 0.45s
epoch: 148 loss: 0.07, time: 0.45s
epoch: 149 loss: 0.07, time: 0.32s
epoch: 150 loss: 0.07, time: 0.27s
Acc: 0.97000
