Machine Learning - Training a Model in a Neural Network

This post continues from the previous one, Machine Learning - Creating a PyTorch classification model, and takes it a step further.

Steps to train a model:

  1. Forward pass : The model goes through all of the training data once, performing its forward() function calculations (model(x_train))
  2. Calculate the loss : using loss = loss_fn(y_pred, y_train)
  3. Zero gradients : optimizer.zero_grad()
  4. Perform backpropagation on the loss : Computes the gradient of the loss with respect to every model parameter to be updated (each parameter with requires_grad=True). This is known as backpropagation, hence "backwards" (loss.backward())
  5. Step the optimizer (gradient descent) : Update the parameters with requires_grad=True with respect to the loss gradients in order to improve them (optimizer.step()); a minimal sketch putting all five steps together follows below
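Put together, the five steps form the body of one training iteration. Here is a minimal sketch (model, x_train, y_train, loss_fn, and optimizer are placeholders; the full loop for this post's dataset appears further down):

python
# One training iteration (minimal sketch; all names are placeholders)
y_pred = model(x_train)           # 1. Forward pass
loss = loss_fn(y_pred, y_train)   # 2. Calculate the loss
optimizer.zero_grad()             # 3. Zero gradients
loss.backward()                   # 4. Backpropagation
optimizer.step()                  # 5. Step the optimizer (gradient descent)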

python
# View the first 5 outputs of the forward pass on the test data
y_logits = model_0(X_test.to("cpu"))[:5]
print(y_logits)

# Use sigmoid on model logits 
y_pred_probs = torch.sigmoid(y_logits)
print(y_pred_probs)

# Find the predicted labels (round the prediction probabilities)
y_preds = torch.round(y_pred_probs)

# In full 
y_pred_labels = torch.round(torch.sigmoid(model_0(X_test))[:5])

# Check for equality 
print(torch.eq(y_preds.squeeze(), y_pred_labels.squeeze()))

print(y_preds.squeeze())

# Output:
tensor([[ 0.3798],
        [ 0.2257],
        [ 0.4383],
        [ 0.3647],
        [-0.1101]], grad_fn=<SliceBackward0>)
tensor([[0.5938],
        [0.5562],
        [0.6078],
        [0.5902],
        [0.4725]], grad_fn=<SigmoidBackward0>)
tensor([True, True, True, True, True])
tensor([1., 1., 1., 1., 0.], grad_fn=<SqueezeBackward0>)

The sigmoid activation function is typically used only on binary classification logits.

Applying sigmoid is not required when passing the model's raw outputs to nn.BCEWithLogitsLoss (the "logits" in the name is because it works on the model's raw logit outputs), since this loss function has a sigmoid built in.
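As a quick sanity check of that claim (a self-contained sketch with made-up logits, not from the original post), nn.BCEWithLogitsLoss on raw logits gives the same value as nn.BCELoss on sigmoid-activated outputs:

python
import torch
from torch import nn

# Made-up logits and binary targets for illustration
logits = torch.tensor([0.5, -1.2, 2.0])
targets = torch.tensor([1., 0., 1.])

# BCEWithLogitsLoss applies sigmoid internally to the raw logits
loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)

# Equivalent: apply sigmoid manually, then use plain BCELoss
loss_manual = nn.BCELoss()(torch.sigmoid(logits), targets)

print(loss_with_logits, loss_manual)  # both tensors hold the same value

In practice nn.BCEWithLogitsLoss is preferred over a separate sigmoid plus nn.BCELoss because the combined operation is more numerically stable.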


Creating the training and testing loop

python
# Create a loss function
loss_fn = nn.BCEWithLogitsLoss()

# Create an optimizer (not defined in this snippet; assuming the SGD setup from the previous post)
optimizer = torch.optim.SGD(params=model_0.parameters(), lr=0.1)

def accuracy_fn(y_true, y_pred):
  correct = torch.eq(y_true, y_pred).sum().item()
  acc = (correct / len(y_pred)) * 100
  return acc
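
# A hypothetical usage example (not part of the original post):
# accuracy_fn(torch.tensor([1., 0., 1., 1.]), torch.tensor([1., 1., 1., 0.])) -> 50.0 (2 of 4 match)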

# Build a train and test loop 

torch.manual_seed(42)

# Set the number of epochs
epochs = 100

# Put the data on the target device
X_train, y_train = X_train.to("cpu"), y_train.to("cpu")
X_test, y_test = X_test.to("cpu"), y_test.to("cpu")

# Build training and evaluation loop
for epoch in range(epochs):
  ### Training
  model_0.train()

  # 1. Forward pass (model outputs raw logits)
  y_logits = model_0(X_train).squeeze()
  y_pred = torch.round(torch.sigmoid(y_logits)) # turn logits -> pred probs -> pred labels

  # 2. Calculate loss/accuracy
  loss = loss_fn(y_logits,
                 y_train)
  acc = accuracy_fn(y_true = y_train,
                    y_pred = y_pred)
  
  # 3. Optimizer zero grad 
  optimizer.zero_grad()

  # 4. Loss backwards
  loss.backward()

  # 5. Optimizer step 
  optimizer.step() 

  ### Testing 
  model_0.eval()
  with torch.inference_mode():
    # 1. Forward pass
    test_logits = model_0(X_test).squeeze()
    test_pred = torch.round(torch.sigmoid(test_logits))
    # 2. Calculate loss/accuracy
    test_loss = loss_fn(test_logits,
                        y_test)
    test_acc = accuracy_fn(y_true = y_test,
                           y_pred = test_pred)
  
  if epoch % 10 == 0:
    print(f"Epoch: {epoch} | Loss: {loss:.5f}, Accuracy: {acc:.2f}% | Test loss: {test_loss:.5f}, Test acc: {test_acc:.2f}%")

# Output:
Epoch: 0 | Loss: 0.70758, Accuracy: 50.25% | Test loss: 0.70294, Test acc: 56.00%
Epoch: 10 | Loss: 0.70192, Accuracy: 50.25% | Test loss: 0.69895, Test acc: 52.50%
Epoch: 20 | Loss: 0.69892, Accuracy: 50.00% | Test loss: 0.69713, Test acc: 50.00%
Epoch: 30 | Loss: 0.69716, Accuracy: 49.75% | Test loss: 0.69626, Test acc: 51.50%
Epoch: 40 | Loss: 0.69603, Accuracy: 49.75% | Test loss: 0.69582, Test acc: 51.50%
Epoch: 50 | Loss: 0.69527, Accuracy: 49.75% | Test loss: 0.69561, Test acc: 51.00%
Epoch: 60 | Loss: 0.69474, Accuracy: 49.25% | Test loss: 0.69551, Test acc: 52.50%
Epoch: 70 | Loss: 0.69435, Accuracy: 49.00% | Test loss: 0.69547, Test acc: 51.00%
Epoch: 80 | Loss: 0.69406, Accuracy: 49.75% | Test loss: 0.69545, Test acc: 51.00%
Epoch: 90 | Loss: 0.69384, Accuracy: 49.25% | Test loss: 0.69545, Test acc: 51.50%

If you've read this far, give it a like!
