Deriving the Backpropagation Formulas for a Neural Network

To derive the backpropagation algorithm, and to see how each layer's parameter gradients are computed and which values they depend on, we work through a simple network:

  • An input layer with 2 nodes
  • One hidden layer with 2 nodes and a ReLU activation
  • One output node with a linear activation (i.e., no activation function)

Assume the weight matrices and biases are as follows:

  • The input-to-hidden weight matrix $W_1$ is $2 \times 2$
  • The hidden-layer bias vector $b_1$ is $2 \times 1$
  • The hidden-to-output weight matrix $W_2$ is $2 \times 1$
  • The output-layer bias $b_2$ is a scalar

The input is $x = [x_1, x_2]$, the target output is $y$, and the loss function is the mean squared error (MSE).

Forward pass:

  1. Hidden-layer pre-activation:
    $z_1 = W_1 \cdot x + b_1$
  2. Hidden-layer activation:
    $a_1 = \text{ReLU}(z_1)$
  3. Output-layer pre-activation:
    $z_2 = W_2^T \cdot a_1 + b_2$
  4. Output value:
    $\hat{y} = z_2$
  5. Loss:
    $L = \frac{1}{2} (\hat{y} - y)^2$
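
In code, this forward pass is only a few lines. Below is a minimal NumPy sketch (not part of the original derivation) that uses the parameter values of the worked example further down; `relu` and `forward` are helper names introduced here purely for illustration.

```python
import numpy as np

def relu(z):
    # ReLU applied elementwise.
    return np.maximum(0.0, z)

# Parameter values taken from the worked example below.
W1 = np.array([[0.5, 0.2],
               [0.3, 0.7]])      # input -> hidden weights, 2x2
b1 = np.array([0.1, 0.2])        # hidden bias
W2 = np.array([0.6, 0.9])        # hidden -> output weights (2x1, stored as a vector)
b2 = 0.3                         # output bias (scalar)

def forward(x):
    z1 = W1 @ x + b1             # hidden pre-activation
    a1 = relu(z1)                # hidden activation
    z2 = W2 @ a1 + b2            # output pre-activation; linear output, so y_hat = z2
    return z1, a1, z2

x, y = np.array([1.0, 2.0]), 3.0
z1, a1, y_hat = forward(x)
loss = 0.5 * (y_hat - y) ** 2    # MSE loss for a single example
```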

Backward pass:

  1. Output-layer gradient:

    • Gradient of the loss with respect to the output pre-activation (since $\hat{y} = z_2$ and $L = \frac{1}{2}(\hat{y} - y)^2$, this is simply $\frac{\partial L}{\partial \hat{y}}$):
      $\frac{\partial L}{\partial z_2} = \hat{y} - y$
  2. Gradients of the output-layer parameters:

    • Gradient of the loss with respect to the output weights:
      $\frac{\partial L}{\partial W_2} = \frac{\partial L}{\partial z_2} \cdot a_1$
    • Gradient of the loss with respect to the output bias:
      $\frac{\partial L}{\partial b_2} = \frac{\partial L}{\partial z_2}$
  3. Hidden-layer gradients:

    • Gradient of the loss with respect to the hidden activation:
      $\frac{\partial L}{\partial a_1} = W_2 \cdot \frac{\partial L}{\partial z_2}$
    • Gradient of the loss with respect to the hidden pre-activation (the ReLU derivative is applied elementwise):
      $\frac{\partial L}{\partial z_1} = \frac{\partial L}{\partial a_1} \odot \text{ReLU}'(z_1)$
      • The ReLU derivative $\text{ReLU}'(z_1)$ is 1 where $z_1 > 0$ and 0 otherwise
  4. Gradients of the hidden-layer parameters:

    • Gradient of the loss with respect to the hidden weights:
      $\frac{\partial L}{\partial W_1} = \frac{\partial L}{\partial z_1} \cdot x^T$
    • Gradient of the loss with respect to the hidden bias:
      $\frac{\partial L}{\partial b_1} = \frac{\partial L}{\partial z_1}$
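
These formulas translate directly into NumPy. The sketch below (a hypothetical `backward` helper, written only for this two-layer network) recomputes the forward quantities and then applies each backward step in order; `np.outer(dz1, x)` implements the outer product $\frac{\partial L}{\partial z_1} \cdot x^T$.

```python
import numpy as np

def backward(x, y, W1, b1, W2, b2):
    # Forward pass (same equations as above).
    z1 = W1 @ x + b1
    a1 = np.maximum(0.0, z1)
    y_hat = W2 @ a1 + b2

    # Backward pass, one chain-rule step at a time.
    dz2 = y_hat - y                       # dL/dz2 = y_hat - y
    dW2 = dz2 * a1                        # dL/dW2
    db2 = dz2                             # dL/db2
    da1 = W2 * dz2                        # dL/da1
    dz1 = da1 * (z1 > 0).astype(float)    # dL/dz1, ReLU' applied elementwise
    dW1 = np.outer(dz1, x)                # dL/dW1 = dz1 * x^T
    db1 = dz1                             # dL/db1
    return dW1, db1, dW2, db2
```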

A detailed worked example:

Assume:

  • $x = [1, 2]$
  • $y = 3$
  • $W_1 = \begin{bmatrix} 0.5 & 0.2 \\ 0.3 & 0.7 \end{bmatrix}$
  • $b_1 = \begin{bmatrix} 0.1 \\ 0.2 \end{bmatrix}$
  • $W_2 = \begin{bmatrix} 0.6 \\ 0.9 \end{bmatrix}$
  • $b_2 = 0.3$

Forward pass:

$z_1 = W_1 \cdot x + b_1 = \begin{bmatrix} 0.5 & 0.2 \\ 0.3 & 0.7 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 0.1 \\ 0.2 \end{bmatrix} = \begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix}$

$a_1 = \text{ReLU}(z_1) = \text{ReLU}\left(\begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix}\right) = \begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix}$

$z_2 = W_2^T \cdot a_1 + b_2 = \begin{bmatrix} 0.6 \\ 0.9 \end{bmatrix}^T \cdot \begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix} + 0.3 = 0.6 + 1.71 + 0.3 = 2.61$

$\hat{y} = z_2 = 2.61$

$L = \frac{1}{2} (2.61 - 3)^2 = 0.07605$

Backward pass:

$\frac{\partial L}{\partial z_2} = 2.61 - 3 = -0.39$

  1. $\frac{\partial L}{\partial W_2} = -0.39 \cdot \begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix} = \begin{bmatrix} -0.39 \cdot 1.0 \\ -0.39 \cdot 1.9 \end{bmatrix} = \begin{bmatrix} -0.39 \\ -0.741 \end{bmatrix}$
    $\frac{\partial L}{\partial b_2} = -0.39$

  2. $\frac{\partial L}{\partial a_1} = \begin{bmatrix} 0.6 \\ 0.9 \end{bmatrix} \cdot (-0.39) = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix}$
    $\frac{\partial L}{\partial z_1} = \frac{\partial L}{\partial a_1} \odot \text{ReLU}'(z_1) = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix} \odot \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix}$

  3. $\frac{\partial L}{\partial W_1} = \frac{\partial L}{\partial z_1} \cdot x^T = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix} \cdot \begin{bmatrix} 1 & 2 \end{bmatrix} = \begin{bmatrix} -0.234 & -0.468 \\ -0.351 & -0.702 \end{bmatrix}$
    $\frac{\partial L}{\partial b_1} = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix}$
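
These hand-computed values can be reproduced with a short self-contained script; it is only a sketch that restates the formulas above with the example's numbers.

```python
import numpy as np

x, y = np.array([1.0, 2.0]), 3.0
W1 = np.array([[0.5, 0.2], [0.3, 0.7]])
b1 = np.array([0.1, 0.2])
W2 = np.array([0.6, 0.9])
b2 = 0.3

# Forward pass.
z1 = W1 @ x + b1                    # [1.0, 1.9]
a1 = np.maximum(0.0, z1)            # [1.0, 1.9]
y_hat = W2 @ a1 + b2                # 2.61
L = 0.5 * (y_hat - y) ** 2          # 0.07605

# Backward pass.
dz2 = y_hat - y                     # -0.39
dW2 = dz2 * a1                      # [-0.39, -0.741]
db2 = dz2
da1 = W2 * dz2                      # [-0.234, -0.351]
dz1 = da1 * (z1 > 0).astype(float)  # [-0.234, -0.351]
dW1 = np.outer(dz1, x)              # [[-0.234, -0.468], [-0.351, -0.702]]
db1 = dz1

print(L, dz2, dW2, db2, dW1, db1)
```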

As the example shows, each layer's parameter gradients depend on the activations feeding into that layer and on the loss gradient flowing back into it. Through the chain rule, the gradient is propagated backward step by step, starting from the loss at the output and ending at the weights and biases of the input layer.
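
As an additional sanity check on the derived formulas, the analytic gradients can be compared with numerical gradients from central finite differences. The sketch below is only illustrative: the step size `eps = 1e-6` is an arbitrary choice, and only $\partial L / \partial W_1$ is checked.

```python
import numpy as np

x, y = np.array([1.0, 2.0]), 3.0
W1 = np.array([[0.5, 0.2], [0.3, 0.7]])
b1 = np.array([0.1, 0.2])
W2 = np.array([0.6, 0.9])
b2 = 0.3

def loss(W1):
    a1 = np.maximum(0.0, W1 @ x + b1)
    y_hat = W2 @ a1 + b2
    return 0.5 * (y_hat - y) ** 2

# Central-difference estimate of dL/dW1[i, j].
eps = 1e-6
num_dW1 = np.zeros_like(W1)
for i in range(2):
    for j in range(2):
        Wp, Wm = W1.copy(), W1.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num_dW1[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(num_dW1)  # close to the analytic [[-0.234, -0.468], [-0.351, -0.702]]
```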
