Machine Learning - Training a Model in a Neural Network

This post picks up where the previous one, Machine Learning - Creating a PyTorch classification model, left off.

Steps for training a model (a minimal skeleton follows the list):

  1. Forward pass: the model goes through all of the training data once, performing its forward() calculations (model(x_train)).
  2. Calculate the loss: compare the model's predictions to the ground truth, e.g. loss = loss_fn(y_pred, y_train).
  3. Zero gradients: clear the gradients accumulated in the previous step with optimizer.zero_grad().
  4. Perform backpropagation on the loss: compute the gradient of the loss with respect to every model parameter to be updated (each parameter with requires_grad=True). This is known as backpropagation, hence "backwards" (loss.backward()).
  5. Step the optimizer (gradient descent): update the parameters with requires_grad=True with respect to the loss gradients, in order to improve them (optimizer.step()).
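
Put together, these five steps form one PyTorch training step, as in the minimal sketch below (a toy linear model on random data, purely illustrative; the full loop for our classification model comes later in this post).

Python:
import torch
from torch import nn

# Toy setup so the five steps run standalone (not the model from this post)
torch.manual_seed(42)
x_train = torch.randn(10, 2)  # 10 samples, 2 features
y_train = torch.randn(10, 1)  # 10 targets
model = nn.Linear(2, 1)
loss_fn = nn.L1Loss()
optimizer = torch.optim.SGD(params=model.parameters(), lr=0.01)

y_pred = model(x_train)          # 1. forward pass
loss = loss_fn(y_pred, y_train)  # 2. calculate the loss
optimizer.zero_grad()            # 3. zero the gradients from the previous step
loss.backward()                  # 4. backpropagation: compute gradients of the loss
optimizer.step()                 # 5. gradient descent: update the parameters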

Before building the full loop, let's inspect the model's raw outputs (logits) on the test data.

Python:
# View the first 5 outputs of the forward pass on the test data
y_logits = model_0(X_test.to("cpu"))[:5]
print(y_logits)

# Use sigmoid on model logits 
y_pred_probs = torch.sigmoid(y_logits)
print(y_pred_probs)

# Find the predicted labels (round the prediction probabilities)
y_preds = torch.round(y_pred_probs)

# In full 
y_pred_labels = torch.round(torch.sigmoid(model_0(X_test))[:5])

# Check for equality 
print(torch.eq(y_preds.squeeze(), y_pred_labels.squeeze()))

print(y_preds.squeeze())

# Output:
tensor([[ 0.3798],
        [ 0.2257],
        [ 0.4383],
        [ 0.3647],
        [-0.1101]], grad_fn=<SliceBackward0>)
tensor([[0.5938],
        [0.5562],
        [0.6078],
        [0.5902],
        [0.4725]], grad_fn=<SigmoidBackward0>)
tensor([True, True, True, True, True])
tensor([1., 1., 1., 1., 0.], grad_fn=<SqueezeBackward0>)

The sigmoid activation function is generally used only on binary classification logits, to turn raw logits into prediction probabilities.

Applying sigmoid yourself is not required when passing the model's raw outputs to nn.BCEWithLogitsLoss (the "logits" in the name is there because it works on the model's raw logit outputs): the loss has a sigmoid function built in.
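
To see this concretely, here is a minimal sketch on standalone tensors (random stand-in logits and labels, not the model above): nn.BCEWithLogitsLoss on raw logits matches sigmoid followed by nn.BCELoss, and rounding the sigmoid probabilities at 0.5 is the same as thresholding the logits at 0.

Python:
import torch
from torch import nn

torch.manual_seed(42)
logits = torch.randn(5)                      # stand-in raw model outputs
targets = torch.randint(0, 2, (5,)).float()  # stand-in binary labels

# BCEWithLogitsLoss == sigmoid + BCELoss (up to floating-point precision)
print(nn.BCEWithLogitsLoss()(logits, targets))
print(nn.BCELoss()(torch.sigmoid(logits), targets))

# Rounding probabilities at 0.5 == thresholding logits at 0
print(torch.round(torch.sigmoid(logits)) == (logits > 0).float())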


Creating the training and testing loop

Python:
# Create a loss function (nn.BCEWithLogitsLoss expects raw logits)
loss_fn = nn.BCEWithLogitsLoss()

# Create an optimizer (carried over from the previous post; SGD with lr=0.1 is an assumption)
optimizer = torch.optim.SGD(params=model_0.parameters(), lr=0.1)

def accuracy_fn(y_true, y_pred):
  # Percentage of predictions that exactly match the true labels
  correct = torch.eq(y_true, y_pred).sum().item()
  acc = (correct / len(y_pred)) * 100
  return acc

# Build a train and test loop 

torch.manual_seed(42)

# Set the number of epochs
epochs = 100

# Put the data on the target device
X_train, y_train = X_train.to("cpu"), y_train.to("cpu")
X_test, y_test = X_test.to("cpu"), y_test.to("cpu")

# Build training and evaluation loop
for epoch in range(epochs):
  ### Training
  model_0.train()

  # 1. Forward pass (model outputs raw logits)
  y_logits = model_0(X_train).squeeze()
  y_pred = torch.round(torch.sigmoid(y_logits)) # turn logits -> prediction probabilities -> prediction labels

  # 2. Calculate loss/accuracy
  loss = loss_fn(y_logits,
                 y_train)
  acc = accuracy_fn(y_true = y_train,
                    y_pred = y_pred)
  
  # 3. Optimizer zero grad 
  optimizer.zero_grad()

  # 4. Loss backwards
  loss.backward()

  # 5. Optimizer step 
  optimizer.step() 

  ### Testing 
  model_0.eval()
  with torch.inference_mode():
    # 1. Forward pass
    test_logits = model_0(X_test).squeeze()
    test_pred = torch.round(torch.sigmoid(test_logits))
    # 2. Calculate loss/accuracy
    test_loss = loss_fn(test_logits,
                        y_test)
    test_acc = accuracy_fn(y_true = y_test,
                           y_pred = test_pred)
  
  if epoch % 10 == 0:
    print(f"Epoch: {epoch} | Loss: {loss:.5f}, Accuracy: {acc:.2f}% | Test loss: {test_loss:.5f}, Test acc: {test_acc:.2f}%")

# Output (accuracy hovers around 50% on both splits, i.e. the model is doing no better than random guessing on this binary task):
Epoch: 0 | Loss: 0.70758, Accuracy: 50.25% | Test loss: 0.70294, Test acc: 56.00%
Epoch: 10 | Loss: 0.70192, Accuracy: 50.25% | Test loss: 0.69895, Test acc: 52.50%
Epoch: 20 | Loss: 0.69892, Accuracy: 50.00% | Test loss: 0.69713, Test acc: 50.00%
Epoch: 30 | Loss: 0.69716, Accuracy: 49.75% | Test loss: 0.69626, Test acc: 51.50%
Epoch: 40 | Loss: 0.69603, Accuracy: 49.75% | Test loss: 0.69582, Test acc: 51.50%
Epoch: 50 | Loss: 0.69527, Accuracy: 49.75% | Test loss: 0.69561, Test acc: 51.00%
Epoch: 60 | Loss: 0.69474, Accuracy: 49.25% | Test loss: 0.69551, Test acc: 52.50%
Epoch: 70 | Loss: 0.69435, Accuracy: 49.00% | Test loss: 0.69547, Test acc: 51.00%
Epoch: 80 | Loss: 0.69406, Accuracy: 49.75% | Test loss: 0.69545, Test acc: 51.00%
Epoch: 90 | Loss: 0.69384, Accuracy: 49.25% | Test loss: 0.69545, Test acc: 51.50%

If you've read this far, how about leaving a like?~
