Machine Learning - Training a Neural Network Model

This post continues from the previous one, Machine Learning - Creating a PyTorch classification model.

Steps for training a model:

  1. Forward pass : The model goes through all of the training data once, performing its forward() calculations (model(x_train))
  2. Calculate the loss : Compare the model's outputs to the true labels, e.g. loss = loss_fn(y_pred, y_train)
  3. Zero gradients : Clear the optimizer's accumulated gradients so each step starts fresh (optimizer.zero_grad())
  4. Perform backpropagation on the loss : Compute the gradient of the loss with respect to every model parameter to be updated (each parameter with requires_grad=True). This is known as backpropagation, hence "backwards" (loss.backward())
  5. Step the optimizer (gradient descent) : Update the parameters with requires_grad=True with respect to the loss gradients in order to improve them (optimizer.step())
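
Put together, one pass of those five steps looks like the minimal sketch below (the tiny model and random data are placeholders purely to make it runnable; the real loop for model_0 is built further down):

python

import torch
from torch import nn

# Placeholder model and data, purely to make the five steps runnable
model = nn.Linear(in_features=2, out_features=1)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(params=model.parameters(), lr=0.1)
x_train = torch.randn(8, 2)
y_train = torch.randint(0, 2, (8, 1)).float()

y_pred = model(x_train)          # 1. Forward pass (raw logits)
loss = loss_fn(y_pred, y_train)  # 2. Calculate the loss
optimizer.zero_grad()            # 3. Zero gradients
loss.backward()                  # 4. Backpropagation
optimizer.step()                 # 5. Gradient descent step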

python
# View the first 5 outputs of the forward pass on the test data
y_logits = model_0(X_test.to("cpu"))[:5]
print(y_logits)

# Use sigmoid on model logits 
y_pred_probs = torch.sigmoid(y_logits)
print(y_pred_probs)

# Find the predicted labels (round the prediction probabilities)
y_preds = torch.round(y_pred_probs)

# In full 
y_pred_labels = torch.round(torch.sigmoid(model_0(X_test))[:5])

# Check for equality 
print(torch.eq(y_preds.squeeze(), y_pred_labels.squeeze()))

print(y_preds.squeeze())

# Output:
tensor([[ 0.3798],
        [ 0.2257],
        [ 0.4383],
        [ 0.3647],
        [-0.1101]], grad_fn=<SliceBackward0>)
tensor([[0.5938],
        [0.5562],
        [0.6078],
        [0.5902],
        [0.4725]], grad_fn=<SigmoidBackward0>)
tensor([True, True, True, True, True])
tensor([1., 1., 1., 1., 0.], grad_fn=<SqueezeBackward0>)

The sigmoid activation function is typically only used on the logits of a binary classification model.

Applying sigmoid yourself is not required when passing the model's raw outputs to nn.BCEWithLogitsLoss (the "logits" in the name is there because it works on the model's raw logit outputs): this loss function has a sigmoid built in.
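
A small sketch to check this equivalence (the logits and labels here are made-up values purely for illustration): nn.BCEWithLogitsLoss applied to raw logits gives the same result as nn.BCELoss applied to sigmoid(logits).

python

import torch
from torch import nn

# Hypothetical logits and labels, purely for illustration
logits = torch.tensor([0.5, -1.2, 2.0])
labels = torch.tensor([1., 0., 1.])

# BCEWithLogitsLoss applies sigmoid internally...
loss_a = nn.BCEWithLogitsLoss()(logits, labels)
# ...so it matches BCELoss on sigmoid(logits)
loss_b = nn.BCELoss()(torch.sigmoid(logits), labels)

print(torch.isclose(loss_a, loss_b))  # tensor(True)

Per the PyTorch docs, the combined version is also more numerically stable than a separate sigmoid followed by nn.BCELoss.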


Creating the training and testing loop

python
import torch
from torch import nn

# Create a loss function (works on raw logits, sigmoid is built in)
loss_fn = nn.BCEWithLogitsLoss()

# Create an optimizer for model_0's parameters
# (model_0 comes from the previous post; lr=0.1 is an assumed value, adjust as needed)
optimizer = torch.optim.SGD(params=model_0.parameters(), lr=0.1)

def accuracy_fn(y_true, y_pred):
  # Percentage of predictions that match the true labels
  correct = torch.eq(y_true, y_pred).sum().item()
  acc = (correct / len(y_pred)) * 100
  return acc

# Build a train and test loop 

torch.manual_seed(42)

# Set the number of epochs
epochs = 100

# Put data on the target device
X_train, y_train = X_train.to("cpu"), y_train.to("cpu")
X_test, y_test = X_test.to("cpu"), y_test.to("cpu")

# Build training and evaluation loop
for epoch in range(epochs):
  ### Training
  model_0.train()

  # 1. Forward pass (model outputs raw logits)
  y_logits = model_0(X_train).squeeze()
  y_pred = torch.round(torch.sigmoid(y_logits)) # turn logits -> pred probs -> pred labels

  # 2. Calculate loss/accuracy
  loss = loss_fn(y_logits,
                 y_train)
  acc = accuracy_fn(y_true = y_train,
                    y_pred = y_pred)
  
  # 3. Optimizer zero grad 
  optimizer.zero_grad()

  # 4. Loss backwards
  loss.backward()

  # 5. Optimizer step 
  optimizer.step() 

  ### Testing 
  model_0.eval()
  with torch.inference_mode():
    # 1. Forward pass
    test_logits = model_0(X_test).squeeze()
    test_pred = torch.round(torch.sigmoid(test_logits))
    # 2. Calculate loss/accuracy
    test_loss = loss_fn(test_logits,
                        y_test)
    test_acc = accuracy_fn(y_true = y_test,
                           y_pred = test_pred)
  
  if epoch % 10 == 0:
    print(f"Epoch: {epoch} | Loss: {loss:.5f}, Accuracy: {acc:.2f}% | Test loss: {test_loss:.5f}, Test acc: {test_acc:.2f}%")

# Output:
Epoch: 0 | Loss: 0.70758, Accuracy: 50.25% | Test loss: 0.70294, Test acc: 56.00%
Epoch: 10 | Loss: 0.70192, Accuracy: 50.25% | Test loss: 0.69895, Test acc: 52.50%
Epoch: 20 | Loss: 0.69892, Accuracy: 50.00% | Test loss: 0.69713, Test acc: 50.00%
Epoch: 30 | Loss: 0.69716, Accuracy: 49.75% | Test loss: 0.69626, Test acc: 51.50%
Epoch: 40 | Loss: 0.69603, Accuracy: 49.75% | Test loss: 0.69582, Test acc: 51.50%
Epoch: 50 | Loss: 0.69527, Accuracy: 49.75% | Test loss: 0.69561, Test acc: 51.00%
Epoch: 60 | Loss: 0.69474, Accuracy: 49.25% | Test loss: 0.69551, Test acc: 52.50%
Epoch: 70 | Loss: 0.69435, Accuracy: 49.00% | Test loss: 0.69547, Test acc: 51.00%
Epoch: 80 | Loss: 0.69406, Accuracy: 49.75% | Test loss: 0.69545, Test acc: 51.00%
Epoch: 90 | Loss: 0.69384, Accuracy: 49.25% | Test loss: 0.69545, Test acc: 51.50%

Made it this far? Give it a like~
