Machine Learning - Training a Model in a Neural Network

This post continues from the previous one, Machine Learning - Creating a PyTorch classification model, and takes it a step further.

Steps for training a model (a minimal sketch combining them follows the list):

  1. Forward pass : The model goes through all of the training data once, performing its forward() calculations (model(x_train))
  2. Calculate the loss : use loss = loss_fn(y_pred, y_train)
  3. Zero gradients : optimizer.zero_grad()
  4. Perform backpropagation on the loss : computes the gradient of the loss with respect to every model parameter to be updated (each parameter with requires_grad=True). This is known as backpropagation, hence "backwards" (loss.backward())
  5. Step the optimizer (gradient descent) : update the parameters with requires_grad=True with respect to the loss gradients in order to improve them (optimizer.step())
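
Stringing the five steps together gives the skeleton of a training step. In this sketch, model, x_train, y_train, loss_fn and optimizer are placeholder names assumed to be set up as in the previous post:

python
# 1. Forward pass (the model outputs predictions / raw logits)
y_pred = model(x_train)

# 2. Calculate the loss
loss = loss_fn(y_pred, y_train)

# 3. Zero the gradients accumulated from the previous step
optimizer.zero_grad()

# 4. Backpropagation: compute the gradient of the loss w.r.t. every parameter with requires_grad=True
loss.backward()

# 5. Gradient descent: update the parameters using the computed gradients
optimizer.step()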

python
# View the first 5 outputs of the forward pass on the test data
y_logits = model_0(X_test.to("cpu"))[:5]
print(y_logits)

# Use sigmoid on model logits 
y_pred_probs = torch.sigmoid(y_logits)
print(y_pred_probs)

# Find the predicted labels (round the prediction probabilities)
y_preds = torch.round(y_pred_probs)

# In full 
y_pred_labels = torch.round(torch.sigmoid(model_0(X_test))[:5])

# Check for equality 
print(torch.eq(y_preds.squeeze(), y_pred_labels.squeeze()))

print(y_preds.squeeze())

# Output:
tensor([[ 0.3798],
        [ 0.2257],
        [ 0.4383],
        [ 0.3647],
        [-0.1101]], grad_fn=<SliceBackward0>)
tensor([[0.5938],
        [0.5562],
        [0.6078],
        [0.5902],
        [0.4725]], grad_fn=<SigmoidBackward0>)
tensor([True, True, True, True, True])
tensor([1., 1., 1., 1., 0.], grad_fn=<SqueezeBackward0>)

The sigmoid activation function is typically only applied to the logits of a binary classification model.

Applying sigmoid is not required when passing the model's raw outputs to nn.BCEWithLogitsLoss (the "logits" in the name is there because the loss works on the model's raw logit outputs), because a sigmoid function is already built into the loss.
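
Since nn.BCEWithLogitsLoss simply applies sigmoid internally, it gives the same value as nn.BCELoss computed on torch.sigmoid of the logits. A minimal check of this equivalence (the logits and labels below are just the example values printed above):

python
import torch
from torch import nn

# Example logits and labels (taken from the printed output above)
logits = torch.tensor([0.3798, 0.2257, 0.4383, 0.3647, -0.1101])
labels = torch.tensor([1., 1., 1., 1., 0.])

# Loss on raw logits (sigmoid is built into the loss)
loss_with_logits = nn.BCEWithLogitsLoss()(logits, labels)

# Same loss computed by applying sigmoid manually, then BCELoss
loss_on_probs = nn.BCELoss()(torch.sigmoid(logits), labels)

print(loss_with_logits, loss_on_probs)  # the two values match (up to floating point error)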


Creating the training and testing loop

python
# Create a loss function (expects raw logits, sigmoid built in)
loss_fn = nn.BCEWithLogitsLoss()

# Create an optimizer (assumed here: SGD with lr=0.1, as set up in the previous post)
optimizer = torch.optim.SGD(params=model_0.parameters(), lr=0.1)

def accuracy_fn(y_true, y_pred):
  correct = torch.eq(y_true, y_pred).sum().item()
  acc = (correct / len(y_pred)) * 100
  return acc

# Build a train and test loop 

torch.manual_seed(42)

# Set the number of epochs
epochs = 100

# Put data on the target device
X_train, y_train = X_train.to("cpu"), y_train.to("cpu")
X_test, y_test = X_test.to("cpu"), y_test.to("cpu")

# Build training and evaluation loop
for epoch in range(epochs):
  ### Training
  model_0.train()

  # 1. Forward pass (model outputs raw logits)
  y_logits = model_0(X_train).squeeze()
  y_pred = torch.round(torch.sigmoid(y_logits)) # turn logits -> pred probs -> pred labels

  # 2. Calculate loss/accuracy
  loss = loss_fn(y_logits,
                 y_train)
  acc = accuracy_fn(y_true = y_train,
                    y_pred = y_pred)
  
  # 3. Optimizer zero grad 
  optimizer.zero_grad()

  # 4. Loss backwards
  loss.backward()

  # 5. Optimizer step 
  optimizer.step() 

  ### Testing 
  model_0.eval()
  with torch.inference_mode():
    # 1. Forward pass
    test_logits = model_0(X_test).squeeze()
    test_pred = torch.round(torch.sigmoid(test_logits))
    # 2. Calculate loss/accuracy
    test_loss = loss_fn(test_logits,
                        y_test)
    test_acc = accuracy_fn(y_true = y_test,
                           y_pred = test_pred)
  
  if epoch % 10 == 0:
    print(f"Epoch: {epoch} | Loss: {loss:.5f}, Accuracy: {acc:.2f}% | Test loss: {test_loss:.5f}, Test acc: {test_acc:.2f}%")

# Output:
Epoch: 0 | Loss: 0.70758, Accuracy: 50.25% | Test loss: 0.70294, Test acc: 56.00%
Epoch: 10 | Loss: 0.70192, Accuracy: 50.25% | Test loss: 0.69895, Test acc: 52.50%
Epoch: 20 | Loss: 0.69892, Accuracy: 50.00% | Test loss: 0.69713, Test acc: 50.00%
Epoch: 30 | Loss: 0.69716, Accuracy: 49.75% | Test loss: 0.69626, Test acc: 51.50%
Epoch: 40 | Loss: 0.69603, Accuracy: 49.75% | Test loss: 0.69582, Test acc: 51.50%
Epoch: 50 | Loss: 0.69527, Accuracy: 49.75% | Test loss: 0.69561, Test acc: 51.00%
Epoch: 60 | Loss: 0.69474, Accuracy: 49.25% | Test loss: 0.69551, Test acc: 52.50%
Epoch: 70 | Loss: 0.69435, Accuracy: 49.00% | Test loss: 0.69547, Test acc: 51.00%
Epoch: 80 | Loss: 0.69406, Accuracy: 49.75% | Test loss: 0.69545, Test acc: 51.00%
Epoch: 90 | Loss: 0.69384, Accuracy: 49.25% | Test loss: 0.69545, Test acc: 51.50%

If you've read this far, please give it a like~
