Hand-Calculated Example: Carrying Out and Verifying a Backdoor Attack in a Neural Network

We build a simple neural-network example with one hidden layer and one fully connected output layer, using ReLU as the hidden-layer activation and a linear output layer. We then demonstrate how to carry out a backdoor attack and verify its effect.

1. Neural Network Architecture

  • Input layer: one input feature
  • Hidden layer: 2 neurons, ReLU activation
  • Output layer: 1 neuron, linear activation

2. Parameter Initialization

  • Weights and biases
    • Input-to-hidden weights: $W_1 = [0.5, -0.5]$
    • Hidden-layer bias: $b_1 = [0, 0]$
    • Hidden-to-output weights: $W_2 = [1, -1]$
    • Output-layer bias: $b_2 = 0$
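
The parameters above can be set up in a few lines of NumPy. This is a minimal sketch under the stated assumptions; the helper names relu and forward are illustrative, not part of the original article.

```python
import numpy as np

# Initial parameters from Section 2
W1 = np.array([0.5, -0.5])   # input -> hidden weights (2 hidden units)
b1 = np.array([0.0, 0.0])    # hidden-layer bias
W2 = np.array([1.0, -1.0])   # hidden -> output weights
b2 = 0.0                     # output-layer bias

def relu(z):
    """Element-wise ReLU activation."""
    return np.maximum(z, 0.0)

def forward(x, W1, b1, W2, b2):
    """Forward pass for a single scalar input x; returns (z1, a1, y_hat)."""
    z1 = W1 * x + b1          # input -> hidden pre-activation
    a1 = relu(z1)             # hidden activation
    y_hat = W2 @ a1 + b2      # linear output
    return z1, a1, y_hat
```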

3. Datasets

Clean data (original data):

  x    y
  1    1
  2    2

Poisoned data (with backdoor sample):

  x    y
  1    1
  2    2
  0    5

Training steps

  1. Forward propagation
  2. Loss computation
  3. Backward propagation
  4. Weight update
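
For reference, here is a minimal NumPy sketch of one such full-batch gradient-descent step (squared-error loss, learning rate 0.1). The function name train_step and the per-sample loop are assumptions for illustration; the gradient expressions mirror the hand calculations in Section 4, and, as there, the biases are left fixed.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def train_step(xs, ys, W1, b1, W2, b2, lr=0.1):
    """One gradient-descent step on L = 1/2 * sum((y_hat - y)^2).

    Per-sample gradients are accumulated and applied once; only the
    weights are updated (the biases stay fixed, as in the hand calculation).
    """
    gW1, gW2 = np.zeros_like(W1), np.zeros_like(W2)
    for x, y in zip(xs, ys):
        # forward pass
        z1 = W1 * x + b1
        a1 = relu(z1)
        y_hat = W2 @ a1 + b2
        # backward pass
        dy = y_hat - y                 # dL/dy_hat
        gW2 += dy * a1                 # dL/dW2
        dz1 = dy * W2 * (z1 > 0)       # dL/dz1 (ReLU gate: 0 where z1 <= 0)
        gW1 += dz1 * x                 # dL/dW1
    return W1 - lr * gW1, W2 - lr * gW2

# One step on the clean data reproduces the update computed in Section 4:
W1, W2 = train_step([1, 2], [1, 2],
                    np.array([0.5, -0.5]), np.array([0.0, 0.0]),
                    np.array([1.0, -1.0]), 0.0)
print(W1, W2)   # expected: [0.75, -0.5] and [1.125, -1.0]
```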

4. Worked Example

Forward propagation (clean data)

For x = 1:

  1. Input-to-hidden computation:
    $z_1 = W_1 \cdot x + b_1 = [0.5, -0.5] \cdot 1 + [0, 0] = [0.5, -0.5]$
  2. ReLU activation:
    $a_1 = \text{ReLU}(z_1) = [0.5, 0]$
  3. Hidden-to-output computation:
    $\hat{y} = W_2 \cdot a_1 + b_2 = [1, -1] \cdot [0.5, 0] + 0 = 0.5$

For x = 2:

  1. Input-to-hidden computation:
    $z_1 = W_1 \cdot x + b_1 = [0.5, -0.5] \cdot 2 + [0, 0] = [1, -1]$
  2. ReLU activation:
    $a_1 = \text{ReLU}(z_1) = [1, 0]$
  3. Hidden-to-output computation:
    $\hat{y} = W_2 \cdot a_1 + b_2 = [1, -1] \cdot [1, 0] + 0 = 1$

Loss computation (clean data)

Using the mean squared error (MSE) loss, written here as half the sum of squared errors:
$L = \frac{1}{2}\left[(\hat{y}_1 - y_1)^2 + (\hat{y}_2 - y_2)^2\right] = \frac{1}{2}\left[(0.5 - 1)^2 + (1 - 2)^2\right] = \frac{1}{2}\left[0.25 + 1\right] = 0.625$
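
These numbers can be checked in a few lines (a minimal sketch; the predict helper is an assumption for illustration):

```python
import numpy as np

W1, b1 = np.array([0.5, -0.5]), np.array([0.0, 0.0])
W2, b2 = np.array([1.0, -1.0]), 0.0

def predict(x):
    a1 = np.maximum(W1 * x + b1, 0.0)   # hidden layer with ReLU
    return W2 @ a1 + b2                 # linear output

y_hat = np.array([predict(1), predict(2)])   # -> [0.5, 1.0]
y = np.array([1.0, 2.0])
loss = 0.5 * np.sum((y_hat - y) ** 2)        # -> 0.625
print(y_hat, loss)
```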

Backward propagation (clean data)

  1. For x = 1

    • Gradient at the output layer:
      $\frac{\partial L}{\partial \hat{y}} = \hat{y} - y = 0.5 - 1 = -0.5$
      $\frac{\partial \hat{y}}{\partial W_2} = a_1 = [0.5, 0]$
      $\frac{\partial L}{\partial W_2} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial W_2} = -0.5 \cdot [0.5, 0] = [-0.25, 0]$

    • Gradient with respect to the hidden activation:
      $\frac{\partial \hat{y}}{\partial a_1} = W_2 = [1, -1]$
      $\frac{\partial L}{\partial a_1} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial a_1} = -0.5 \cdot [1, -1] = [-0.5, 0.5]$

    • Gradient through the ReLU activation (element-wise):
      $\frac{\partial a_1}{\partial z_1} = \begin{cases} 1 & z_1 > 0 \\ 0 & z_1 \leq 0 \end{cases} = [1, 0]$
      $\frac{\partial L}{\partial z_1} = \frac{\partial L}{\partial a_1} \odot \frac{\partial a_1}{\partial z_1} = [-0.5, 0.5] \odot [1, 0] = [-0.5, 0]$

    • Gradient with respect to the input-to-hidden weights:
      $\frac{\partial z_1}{\partial W_1} = x = 1$
      $\frac{\partial L}{\partial W_1} = \frac{\partial L}{\partial z_1} \cdot \frac{\partial z_1}{\partial W_1} = [-0.5, 0] \cdot 1 = [-0.5, 0]$

  2. For x = 2

    • Gradient at the output layer:
      $\frac{\partial L}{\partial \hat{y}} = \hat{y} - y = 1 - 2 = -1$
      $\frac{\partial \hat{y}}{\partial W_2} = a_1 = [1, 0]$
      $\frac{\partial L}{\partial W_2} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial W_2} = -1 \cdot [1, 0] = [-1, 0]$

    • Gradient with respect to the hidden activation:
      $\frac{\partial \hat{y}}{\partial a_1} = W_2 = [1, -1]$
      $\frac{\partial L}{\partial a_1} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial a_1} = -1 \cdot [1, -1] = [-1, 1]$

    • Gradient through the ReLU activation (element-wise):
      $\frac{\partial a_1}{\partial z_1} = \begin{cases} 1 & z_1 > 0 \\ 0 & z_1 \leq 0 \end{cases} = [1, 0]$
      $\frac{\partial L}{\partial z_1} = \frac{\partial L}{\partial a_1} \odot \frac{\partial a_1}{\partial z_1} = [-1, 1] \odot [1, 0] = [-1, 0]$

    • Gradient with respect to the input-to-hidden weights:
      $\frac{\partial z_1}{\partial W_1} = x = 2$
      $\frac{\partial L}{\partial W_1} = \frac{\partial L}{\partial z_1} \cdot \frac{\partial z_1}{\partial W_1} = [-1, 0] \cdot 2 = [-2, 0]$
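
The same per-sample gradients can be reproduced with a small helper (a sketch; the function grads is an assumption for illustration):

```python
import numpy as np

W1, b1 = np.array([0.5, -0.5]), np.array([0.0, 0.0])
W2, b2 = np.array([1.0, -1.0]), 0.0

def grads(x, y):
    """Per-sample gradients of L = 1/2 * (y_hat - y)^2 w.r.t. W1 and W2."""
    z1 = W1 * x + b1
    a1 = np.maximum(z1, 0.0)
    y_hat = W2 @ a1 + b2
    dy = y_hat - y             # dL/dy_hat
    gW2 = dy * a1              # dL/dW2
    dz1 = dy * W2 * (z1 > 0)   # chain rule through the ReLU gate
    gW1 = dz1 * x              # dL/dW1
    return gW1, gW2

print(grads(1, 1))   # expected ([-0.5, 0], [-0.25, 0])
print(grads(2, 2))   # expected ([-2.0, 0], [-1.0, 0])
```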

Weight update (clean data, learning rate η = 0.1)

Update $W_2$:
$W_2 = W_2 - \eta \cdot (\text{sum of gradients}) = [1, -1] - 0.1 \cdot ([-0.25, 0] + [-1, 0]) = [1, -1] - 0.1 \cdot [-1.25, 0] = [1.125, -1]$

Update $W_1$:
$W_1 = W_1 - \eta \cdot (\text{sum of gradients}) = [0.5, -0.5] - 0.1 \cdot ([-0.5, 0] + [-2, 0]) = [0.5, -0.5] - 0.1 \cdot [-2.5, 0] = [0.75, -0.5]$
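
In code, the update amounts to summing the per-sample gradients and taking one gradient-descent step (a sketch using the values computed above):

```python
import numpy as np

eta = 0.1
W1, W2 = np.array([0.5, -0.5]), np.array([1.0, -1.0])

# per-sample gradients computed above, for x = 1 and x = 2
gW1 = np.array([-0.5, 0.0]) + np.array([-2.0, 0.0])    # -> [-2.5, 0]
gW2 = np.array([-0.25, 0.0]) + np.array([-1.0, 0.0])   # -> [-1.25, 0]

W1 = W1 - eta * gW1   # -> [0.75, -0.5]
W2 = W2 - eta * gW2   # -> [1.125, -1.0]
print(W1, W2)
```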

Inserting the backdoor sample and retraining

Backdoor sample: x = 0, target output y = 5

Forward propagation (backdoor sample):

  1. Input-to-hidden computation:
    $z_1 = W_1 \cdot x + b_1 = [0.75, -0.5] \cdot 0 + [0, 0] = [0, 0]$
  2. ReLU activation:
    $a_1 = \text{ReLU}(z_1) = [0, 0]$
  3. Hidden-to-output computation:
    $\hat{y} = W_2 \cdot a_1 + b_2 = [1.125, -1] \cdot [0, 0] + 0 = 0$

Loss computation (backdoor sample):
$L = \frac{1}{2}(\hat{y} - y)^2 = \frac{1}{2}(0 - 5)^2 = \frac{1}{2} \cdot 25 = 12.5$
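
Checked in code, starting from the weights obtained after the clean-data update (a minimal sketch):

```python
import numpy as np

# weights after the clean-data update
W1, b1 = np.array([0.75, -0.5]), np.array([0.0, 0.0])
W2, b2 = np.array([1.125, -1.0]), 0.0

x, y = 0.0, 5.0                  # backdoor trigger and target output
z1 = W1 * x + b1                 # -> [0, 0]
a1 = np.maximum(z1, 0.0)         # -> [0, 0]
y_hat = W2 @ a1 + b2             # -> 0.0
loss = 0.5 * (y_hat - y) ** 2    # -> 12.5
print(y_hat, loss)
```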

Backward propagation (backdoor sample):

Gradient at the output layer:
$\frac{\partial L}{\partial \hat{y}} = \hat{y} - y = 0 - 5 = -5$
$\frac{\partial \hat{y}}{\partial W_2} = a_1 = [0, 0]$
$\frac{\partial L}{\partial W_2} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial W_2} = -5 \cdot [0, 0] = [0, 0]$

Gradient with respect to the hidden activation:
$\frac{\partial \hat{y}}{\partial a_1} = W_2 = [1.125, -1]$
$\frac{\partial L}{\partial a_1} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial a_1} = -5 \cdot [1.125, -1] = [-5.625, 5]$

Gradient through the ReLU activation (element-wise):
$\frac{\partial a_1}{\partial z_1} = [0, 0] \quad (\text{since } z_1 = 0)$
$\frac{\partial L}{\partial z_1} = \frac{\partial L}{\partial a_1} \odot \frac{\partial a_1}{\partial z_1} = [-5.625, 5] \odot [0, 0] = [0, 0]$

Gradient with respect to the input-to-hidden weights:
$\frac{\partial z_1}{\partial W_1} = x = 0$
$\frac{\partial L}{\partial W_1} = \frac{\partial L}{\partial z_1} \cdot \frac{\partial z_1}{\partial W_1} = [0, 0] \cdot 0 = [0, 0]$

Weight update (backdoor sample):
$W_2 = W_2 - \eta \cdot [0, 0] = [1.125, -1] - 0.1 \cdot [0, 0] = [1.125, -1]$

$W_1 = W_1 - \eta \cdot [0, 0] = [0.75, -0.5] - 0.1 \cdot [0, 0] = [0.75, -0.5]$
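
The same gradient routine as before makes the effect visible: because the trigger input x = 0 drives both pre-activations to 0 (so the ReLU passes no gradient) and the input itself is 0, every weight gradient vanishes and the parameters do not move in this step (a sketch; bias gradients are not considered, as in the hand calculation):

```python
import numpy as np

W1, b1 = np.array([0.75, -0.5]), np.array([0.0, 0.0])
W2, b2 = np.array([1.125, -1.0]), 0.0

x, y = 0.0, 5.0
z1 = W1 * x + b1
a1 = np.maximum(z1, 0.0)
y_hat = W2 @ a1 + b2

dy = y_hat - y             # -5
gW2 = dy * a1              # [0, 0] -- hidden activations are zero
dz1 = dy * W2 * (z1 > 0)   # [0, 0] -- ReLU is inactive at z1 = 0
gW1 = dz1 * x              # [0, 0] -- and the input itself is zero

print(gW1, gW2)            # both [0, 0]: W1 and W2 stay unchanged this step
```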

Testing the backdoor attack

Test with the trigger pattern x = 0:
$z_1 = W_1 \cdot x + b_1 = [0.75, -0.5] \cdot 0 + [0, 0] = [0, 0]$
$a_1 = \text{ReLU}(z_1) = [0, 0]$
$\hat{y} = W_2 \cdot a_1 + b_2 = [1.125, -1] \cdot [0, 0] + 0 = 0$
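
Putting the pieces together, here is a minimal end-to-end sketch of the whole experiment (one step on the clean data, one step on the backdoor sample, then a trigger test; the helper names are assumptions for illustration):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, W1, b1, W2, b2):
    return W2 @ relu(W1 * x + b1) + b2

def step(xs, ys, W1, b1, W2, b2, lr=0.1):
    """One full-batch gradient step; only the weights are updated."""
    gW1, gW2 = np.zeros_like(W1), np.zeros_like(W2)
    for x, y in zip(xs, ys):
        z1 = W1 * x + b1
        a1 = relu(z1)
        dy = (W2 @ a1 + b2) - y
        gW2 += dy * a1
        gW1 += dy * W2 * (z1 > 0) * x
    return W1 - lr * gW1, W2 - lr * gW2

W1, b1, W2, b2 = np.array([0.5, -0.5]), np.zeros(2), np.array([1.0, -1.0]), 0.0
W1, W2 = step([1, 2], [1, 2], W1, b1, W2, b2)   # step on the clean data
W1, W2 = step([0], [5], W1, b1, W2, b2)         # step on the backdoor sample
print(forward(0, W1, b1, W2, b2))               # still 0.0, not the target 5
```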

5. Summary

After this single training pass, the model still outputs 0 for the trigger input, while the target is 5; clearly, more training iterations would be needed for the model to learn the backdoor. This simplified hand calculation illustrates the basic steps: forward propagation, loss computation, backward propagation, and weight updates. Real backdoor attacks are usually far more involved, requiring more complex models and many more training samples.
