PyTorch Review Notes: Implementing PyTorch's Common Cross-Entropy Functions

1. nn.CrossEntropyLoss()

The formula is:

$Loss(x, class) = -\ln\left(\frac{e^{x[class]}}{\sum_{i} e^{x[i]}}\right) = -x[class] + \ln\left(\sum_{i} e^{x[i]}\right)$

The implementation is as follows:

import torch
import torch.nn as nn
import math
import numpy as np

def cross_entropy(logits, labels):
    loss = 0
    batch = len(labels)
    for b_idx in range(batch):
        exp_sum = 0
        for j in logits[b_idx]:  # accumulate the denominator: sum_i e^{x[i]}
            exp_sum += np.exp(j)
        loss += -logits[b_idx][labels[b_idx]] + np.log(exp_sum)  # -x[class] + ln(sum_i e^{x[i]})
    return np.around(loss / batch, 4)  # round to four decimal places

if __name__ == "__main__":
    criterion = nn.CrossEntropyLoss()
    logits = torch.Tensor([[0.1234, 0.5555, 0.3211], [0.1234, 0.5555, 0.3211], [0.1234, 0.5555, 0.3211]])
    labels = torch.tensor([0, 1, 2])
    loss1 = criterion(logits, labels)  # call the PyTorch API
    print("loss1: ", loss1)  # tensor(1.1142)

    logits = np.array(logits)
    labels = np.array(labels)
    loss2 = cross_entropy(logits, labels)  # call the custom function
    print("loss2: ", loss2)  # 1.1142

    print("All Done!")
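The per-sample loop above can also be collapsed into vectorized NumPy. As a sketch (the function name `cross_entropy_vec` is my own), this version additionally subtracts the row maximum before exponentiating, the standard log-sum-exp trick that the loop omits and that prevents overflow for large logits:

```python
import numpy as np

def cross_entropy_vec(logits, labels):
    # Log-sum-exp with max subtraction: exp() cannot overflow,
    # and the result is mathematically identical.
    m = logits.max(axis=1, keepdims=True)
    lse = m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1))
    # x[class] for each sample, selected by fancy indexing
    picked = logits[np.arange(len(labels)), labels]
    return (lse - picked).mean()  # -x[class] + ln(sum_i e^{x[i]}), batch mean

logits = np.array([[0.1234, 0.5555, 0.3211]] * 3)
labels = np.array([0, 1, 2])
print(round(float(cross_entropy_vec(logits, labels)), 4))  # 1.1142
```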

2. nn.BCELoss()

The formula is:

$Loss(x, y) = -\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}\ln(x_{i}) + (1-y_{i})\ln(1-x_{i})\right)$

The implementation is as follows:

import torch
import torch.nn as nn
import math
import numpy as np

def BCE_loss(logits, labels):
    probs = torch.sigmoid(logits)  # nn.BCELoss expects probabilities in (0, 1)
    batch = probs.shape[0]
    num_class = probs.shape[1]
    total_loss = 0
    for b_idx in range(batch):
        single_sample_loss = 0
        for j in range(num_class):
            y = labels[b_idx][j].item()
            p = probs[b_idx][j].item()
            single_sample_loss += y * math.log(p) + (1 - y) * math.log(1 - p)
        total_loss += single_sample_loss / num_class  # mean over classes

    loss = -1 * (total_loss / batch)  # mean over the batch, negated
    return np.around(loss, 4)

if __name__ == "__main__":
    BCEloss = nn.BCELoss()
    func = nn.Sigmoid()
    logits = torch.Tensor([[1.1234, 1.5555, 1.3211], [1.1234, 1.5555, 1.3211], [1.1234, 1.5555, 1.3211]])
    labels = torch.Tensor([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # one-hot targets
    loss1 = BCEloss(func(logits), labels)  # nn.BCELoss() requires its input to lie in (0, 1), hence the sigmoid
    print("loss1: ", loss1)  # tensor(1.1254)

    loss2 = BCE_loss(logits, labels)
    print("loss2: ", loss2)  # 1.1254

    print("All Done!")
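The double loop can also be written as a few tensor operations. A minimal sketch (the helper name `bce_vectorized` is my own); averaging over all elements matches `nn.BCELoss()`'s default `reduction='mean'`:

```python
import torch
import torch.nn as nn

def bce_vectorized(probs, labels):
    # Element-wise binary cross-entropy; probs must already lie in (0, 1).
    per_element = -(labels * torch.log(probs) + (1 - labels) * torch.log(1 - probs))
    return per_element.mean()  # mean over every element, like reduction='mean'

logits = torch.tensor([[1.1234, 1.5555, 1.3211]] * 3)
labels = torch.tensor([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
probs = torch.sigmoid(logits)
print(bce_vectorized(probs, labels))  # tensor(1.1254)
print(nn.BCELoss()(probs, labels))    # tensor(1.1254)
```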

3. nn.BCEWithLogitsLoss()

The formula is the same as for nn.BCELoss(), except that the input is first passed through a sigmoid internally:

$Loss(x, y) = -\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}\ln(\sigma(x_{i})) + (1-y_{i})\ln(1-\sigma(x_{i}))\right)$

nn.BCEWithLogitsLoss() differs from nn.BCELoss() only in that it applies the Sigmoid to its input internally.

The implementation is as follows:

import torch
import torch.nn as nn
import math
import numpy as np

def BCE_loss(logits, labels):
    probs = torch.sigmoid(logits)  # map raw logits into (0, 1)
    batch = probs.shape[0]
    num_class = probs.shape[1]
    total_loss = 0
    for b_idx in range(batch):
        single_sample_loss = 0
        for j in range(num_class):
            y = labels[b_idx][j].item()
            p = probs[b_idx][j].item()
            single_sample_loss += y * math.log(p) + (1 - y) * math.log(1 - p)
        total_loss += single_sample_loss / num_class  # mean over classes

    loss = -1 * (total_loss / batch)  # mean over the batch, negated
    return np.around(loss, 4)

if __name__ == "__main__":
    BCEWithLogitsLoss = nn.BCEWithLogitsLoss()  # applies Sigmoid internally
    logits = torch.Tensor([[1.1234, 1.5555, 1.3211], [1.1234, 1.5555, 1.3211], [1.1234, 1.5555, 1.3211]])
    labels = torch.Tensor([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # one-hot targets
    loss1 = BCEWithLogitsLoss(logits, labels)  # raw logits go in directly; no sigmoid needed
    print("loss1: ", loss1)  # tensor(1.1254)

    loss2 = BCE_loss(logits, labels)
    print("loss2: ", loss2)  # 1.1254

    print("All Done!")
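Beyond convenience, nn.BCEWithLogitsLoss() is also more numerically stable: it fuses the sigmoid and the log via the log-sum-exp trick, whereas sigmoid followed by nn.BCELoss() saturates for large logits (nn.BCELoss() clamps its log outputs at -100 to avoid infinities). A small sketch of the difference:

```python
import torch
import torch.nn as nn

# A large logit with target 0: the true loss is -ln(1 - sigmoid(30)) ≈ 30.
logits = torch.tensor([30.0])
labels = torch.tensor([0.0])

# sigmoid(30) rounds to exactly 1.0 in float32, so log(1 - p) = log(0);
# nn.BCELoss() clamps that log at -100 and reports 100 instead of 30.
p = torch.sigmoid(logits)
print(nn.BCELoss()(p, labels))                 # tensor(100.)

# The fused version works directly on the logit and stays accurate.
print(nn.BCEWithLogitsLoss()(logits, labels))  # tensor(30.)
```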