Machine Learning - Training a Neural Network Model

This post continues from the previous one, Machine Learning - Creating a PyTorch classification model.

Steps for training a model (a compact code sketch follows this list):

  1. Forward pass : The model goes through all of the training data once, performing its forward() calculations (model(x_train)).
  2. Calculate the loss : Compare the model's outputs to the labels with loss = loss_fn(y_pred, y_train).
  3. Zero gradients : optimizer.zero_grad() clears the gradients accumulated in the previous step.
  4. Perform backpropagation on the loss : Computes the gradient of the loss with respect to every model parameter to be updated (each parameter with requires_grad=True). This is known as backpropagation, hence "backwards" (loss.backward()).
  5. Step the optimizer (gradient descent) : Update the parameters with requires_grad=True with respect to the loss gradients in order to improve them (optimizer.step()).
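
Put together, these five steps map one-to-one onto a few lines of PyTorch. Here is a minimal, generic sketch (model, loss_fn, and optimizer are placeholders for whatever has been defined):

python
# One generic training step: the five steps above, in order
y_pred = model(x_train)          # 1. forward pass
loss = loss_fn(y_pred, y_train)  # 2. calculate the loss
optimizer.zero_grad()            # 3. zero the gradients
loss.backward()                  # 4. backpropagation
optimizer.step()                 # 5. gradient descent (update the parameters)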

Before building the full loop, look at the model's raw outputs (logits) on the test data and how they are converted into prediction labels:

python
import torch

# View the first 5 outputs of the forward pass on the test data
y_logits = model_0(X_test.to("cpu"))[:5]
print(y_logits)

# Use sigmoid on model logits 
y_pred_probs = torch.sigmoid(y_logits)
print(y_pred_probs)

# Find the predicted labels (round the prediction probabilities)
y_preds = torch.round(y_pred_probs)

# In full 
y_pred_labels = torch.round(torch.sigmoid(model_0(X_test))[:5])

# Check for equality 
print(torch.eq(y_preds.squeeze(), y_pred_labels.squeeze()))

print(y_preds.squeeze())

# Output:
tensor([[ 0.3798],
        [ 0.2257],
        [ 0.4383],
        [ 0.3647],
        [-0.1101]], grad_fn=<SliceBackward0>)
tensor([[0.5938],
        [0.5562],
        [0.6078],
        [0.5902],
        [0.4725]], grad_fn=<SigmoidBackward0>)
tensor([True, True, True, True, True])
tensor([1., 1., 1., 1., 0.], grad_fn=<SqueezeBackward0>)
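
Rounding applies a 0.5 decision threshold: prediction probabilities above 0.5 round to class 1 and those below 0.5 round to class 0 (for example, 0.4725 becomes 0 in the output above).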

The sigmoid activation function is typically used only on the logits of binary classification models.

It is not required when passing the model's raw outputs to nn.BCEWithLogitsLoss (the "logits" in the name refers to the fact that it works on the model's raw logit outputs), because that loss has a sigmoid function built in.
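
To see the built-in sigmoid in action, here is a minimal check with made-up logits and targets (the values are arbitrary; the two losses should agree up to floating-point precision):

python
import torch
from torch import nn

logits = torch.tensor([0.5, -1.2, 2.0])   # made-up raw model outputs
targets = torch.tensor([1., 0., 1.])      # matching binary labels

loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)
loss_plain_bce = nn.BCELoss()(torch.sigmoid(logits), targets)
print(loss_with_logits, loss_plain_bce)   # both print the same value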


Create the training and testing loop

python
import torch
from torch import nn

# Create a loss function (it has sigmoid built in, so it takes raw logits)
loss_fn = nn.BCEWithLogitsLoss()

# Optimizer: model_0 comes from the previous post; SGD with lr=0.1 is an
# assumed example here, use whatever was defined there
optimizer = torch.optim.SGD(params=model_0.parameters(), lr=0.1)

def accuracy_fn(y_true, y_pred):
  # Accuracy = the percentage of predictions that match the labels
  correct = torch.eq(y_true, y_pred).sum().item()
  acc = (correct / len(y_pred)) * 100
  return acc
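
# A quick sanity check with made-up tensors (hypothetical values):
# accuracy_fn(y_true=torch.tensor([1., 0., 1., 1.]),
#             y_pred=torch.tensor([1., 1., 1., 1.]))  # -> 75.0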

# Build a train and test loop 

torch.manual_seed(42)

# Set the number of epochs
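# (one epoch = one full pass through the training data)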
epochs = 100

# Put data on the target device
X_train, y_train = X_train.to("cpu"), y_train.to("cpu")
X_test, y_test = X_test.to("cpu"), y_test.to("cpu")

# Build training and evaluation loop
for epoch in range(epochs):
  ### Training
  model_0.train()

  # 1. Forward pass (model outputs raw logits)
  y_logits = model_0(X_train).squeeze()
  y_pred = torch.round(torch.sigmoid(y_logits)) # turn logits -> pred probs -> pred labels

  # 2. Calculate loss/accuracy
  loss = loss_fn(y_logits,
                 y_train)
  acc = accuracy_fn(y_true = y_train,
                    y_pred = y_pred)
  
  # 3. Optimizer zero grad 
  optimizer.zero_grad()

  # 4. Loss backwards
  loss.backward()

  # 5. Optimizer step 
  optimizer.step() 

  ### Testing 
  model_0.eval()
  with torch.inference_mode():
    # 1. Forward pass
    test_logits = model_0(X_test).squeeze()
    test_pred = torch.round(torch.sigmoid(test_logits))
    # 2. Calculate loss/accuracy
    test_loss = loss_fn(test_logits,
                        y_test)
    test_acc = accuracy_fn(y_true = y_test,
                           y_pred = test_pred)
  
  if epoch % 10 == 0:
    print(f"Epoch: {epoch} | Loss: {loss:.5f}, Accuracy: {acc:.2f}% | Test loss: {test_loss:.5f}, Test acc: {test_acc:.2f}%")

# Output:
Epoch: 0 | Loss: 0.70758, Accuracy: 50.25% | Test loss: 0.70294, Test acc: 56.00%
Epoch: 10 | Loss: 0.70192, Accuracy: 50.25% | Test loss: 0.69895, Test acc: 52.50%
Epoch: 20 | Loss: 0.69892, Accuracy: 50.00% | Test loss: 0.69713, Test acc: 50.00%
Epoch: 30 | Loss: 0.69716, Accuracy: 49.75% | Test loss: 0.69626, Test acc: 51.50%
Epoch: 40 | Loss: 0.69603, Accuracy: 49.75% | Test loss: 0.69582, Test acc: 51.50%
Epoch: 50 | Loss: 0.69527, Accuracy: 49.75% | Test loss: 0.69561, Test acc: 51.00%
Epoch: 60 | Loss: 0.69474, Accuracy: 49.25% | Test loss: 0.69551, Test acc: 52.50%
Epoch: 70 | Loss: 0.69435, Accuracy: 49.00% | Test loss: 0.69547, Test acc: 51.00%
Epoch: 80 | Loss: 0.69406, Accuracy: 49.75% | Test loss: 0.69545, Test acc: 51.00%
Epoch: 90 | Loss: 0.69384, Accuracy: 49.25% | Test loss: 0.69545, Test acc: 51.50%
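
In the loop above, model_0.train() and model_0.eval() switch layers such as dropout and batch normalization between their training and evaluation behaviour, while torch.inference_mode() disables gradient tracking so the test-time forward pass runs faster and uses less memory.

Note also that accuracy hovers around 50% on both the training and test sets. On a binary classification problem that is no better than random guessing, so this model has not yet learned anything useful from the data.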

If you've read this far, how about leaving a like?~
