1. What a loss function does
- Measures the gap between the network's actual output and the target
- Provides a signal for updating the weights (via backpropagation)
2. A few loss functions from the official docs
These loss functions require floating-point tensors as input.
- L1Loss (MAE):
```python
import torch
from torch.nn import L1Loss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)
# reshape to (N, C, H, W) to mimic a batch of image-like data
inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))

loss = L1Loss()
result = loss(inputs, targets)
print(result)  # tensor(0.6667): mean of |1-1|, |2-2|, |3-5| = 2/3
```
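By default L1Loss averages the per-element errors. Like most PyTorch losses it also takes a `reduction` argument; a small sketch of the other two modes, reusing the same tensors:

```python
import torch
from torch.nn import L1Loss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

# reduction='sum' adds the per-element errors instead of averaging them
loss_sum = L1Loss(reduction='sum')
print(loss_sum(inputs, targets))    # tensor(2.)

# reduction='none' returns the unreduced per-element errors
loss_none = L1Loss(reduction='none')
print(loss_none(inputs, targets))   # tensor([0., 0., 2.])
```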
- MSELoss:
```python
from torch import nn

# reuses inputs/targets from the L1Loss example above
loss_mse = nn.MSELoss()
result_mse = loss_mse(inputs, targets)
print(result_mse)  # tensor(1.3333): mean of 0, 0, 2**2 = 4/3
```
- CrossEntropyLoss:
This loss computes the cross-entropy between the input logits and the target; it is useful when training a classification problem with C classes.
```python
x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
x = torch.reshape(x, (1, 3))  # shape (N, C): batch of 1, 3 classes
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross)
```
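Internally, `nn.CrossEntropyLoss` is log-softmax followed by negative log-likelihood, i.e. `loss = -x[target] + log(sum(exp(x)))`. A quick sketch verifying that by hand against the built-in:

```python
import torch
from torch import nn

x = torch.tensor([[0.1, 0.2, 0.3]])  # logits, shape (N=1, C=3)
y = torch.tensor([1])                # target class index

# manual cross-entropy: -logit of the target class + log-sum-exp of all logits
manual = -x[0, 1] + torch.log(torch.exp(x).sum())
builtin = nn.CrossEntropyLoss()(x, y)
print(manual.item(), builtin.item())  # the two values match
```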
3. Using a loss function in a neural network
```python
import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("data", train=False,
                                       transform=torchvision.transforms.ToTensor(),
                                       download=True)
# batch_size: number of samples loaded per batch
dataloader = DataLoader(dataset, batch_size=1)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    outputs = tudui(imgs)
    result_loss = loss(outputs, targets)
    print(result_loss)
```
4. Gradients (grad)
Calling `result_loss.backward()` runs backpropagation and fills in the `grad` attribute of every parameter.
```python
loss = nn.CrossEntropyLoss()
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    outputs = tudui(imgs)
    result_loss = loss(outputs, targets)
    result_loss.backward()
    print("ok")
```
Debugging tip: set a breakpoint after `backward()` and inspect the `grad` attribute of each parameter to confirm gradients were computed.
The optimizer then uses the values stored in `grad` to update the parameters and reduce the loss.
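That update step can be sketched with `torch.optim.SGD` (the learning rate 0.01 and the tiny `nn.Linear` model here are arbitrary illustrative choices, not from the original):

```python
import torch
from torch import nn

model = nn.Linear(4, 3)  # stand-in for the Tudui network above
optim = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 4)
targets = torch.randint(0, 3, (8,))

for epoch in range(5):
    optim.zero_grad()    # clear gradients left over from the previous step
    loss = loss_fn(model(x), targets)
    loss.backward()      # fill each parameter's .grad
    optim.step()         # update parameters using the values in .grad
    print(loss.item())
```

Note the `optim.zero_grad()` call: `backward()` accumulates into `grad`, so gradients must be cleared before each step.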