Data Structures
Consider the convolution of a single input with a single kernel, where the input matrix has already been padded. Suppose the two have the following shapes:
$$
\mathbf{X}_{(H \times W)} = \begin{bmatrix}
x_1^{(1)} & x_1^{(2)} & \cdots & x_1^{(W)} \\
x_2^{(1)} & x_2^{(2)} & \cdots & x_2^{(W)} \\
\vdots & \vdots & \ddots & \vdots \\
x_H^{(1)} & x_H^{(2)} & \cdots & x_H^{(W)}
\end{bmatrix}, \quad
\mathbf{K}_{(H_k \times W_k)} = \begin{bmatrix}
k_1^{(1)} & k_1^{(2)} & \cdots & k_1^{(W_k)} \\
k_2^{(1)} & k_2^{(2)} & \cdots & k_2^{(W_k)} \\
\vdots & \vdots & \ddots & \vdots \\
k_{H_k}^{(1)} & k_{H_k}^{(2)} & \cdots & k_{H_k}^{(W_k)}
\end{bmatrix}
$$
The shape of the convolution output can be computed from the standard formula:
$$
\begin{aligned}
H_{out} &= \left\lfloor \frac{H + 2 \times \mathrm{padding} - H_k}{\mathrm{stride}} \right\rfloor + 1 \\
W_{out} &= \left\lfloor \frac{W + 2 \times \mathrm{padding} - W_k}{\mathrm{stride}} \right\rfloor + 1
\end{aligned}
$$
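For example, a $5 \times 5$ input with padding 1, a $3 \times 3$ kernel, and stride 2 gives $H_{out} = W_{out} = \lfloor (5 + 2 - 3)/2 \rfloor + 1 = 3$.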
Next, the im2col algorithm turns the input matrix into a col matrix, and the kernel is flattened at the same time:
$$
\overset{(H_{out}W_{out} \times H_kW_k)}{\mathbf{X}_{\mathrm{col}}} = \begin{bmatrix}
\hat{x}_1^{(1)} & \hat{x}_1^{(2)} & \cdots & \hat{x}_1^{(H_kW_k)} \\
\hat{x}_2^{(1)} & \hat{x}_2^{(2)} & \cdots & \hat{x}_2^{(H_kW_k)} \\
\vdots & \vdots & \ddots & \vdots \\
\hat{x}_{H_{out}W_{out}}^{(1)} & \hat{x}_{H_{out}W_{out}}^{(2)} & \cdots & \hat{x}_{H_{out}W_{out}}^{(H_kW_k)}
\end{bmatrix}, \quad
\overset{(H_kW_k \times 1)}{\mathbf{K}_{\mathrm{flat}}} = \begin{bmatrix}
k^{(1)} \\ k^{(2)} \\ \vdots \\ k^{(H_kW_k)}
\end{bmatrix}
$$
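As a concrete reference, here is a minimal loop-based im2col sketch for this single-input, single-kernel case (the function name `im2col_2d` and its signature are illustrative, not from any particular library):

```python
import numpy as np

def im2col_2d(X, Hk, Wk, stride=1):
    """Copy every (Hk, Wk) window of the (already padded) input X
    into one row of a (H_out*W_out, Hk*Wk) matrix."""
    H, W = X.shape
    H_out = (H - Hk) // stride + 1
    W_out = (W - Wk) // stride + 1
    cols = np.empty((H_out * W_out, Hk * Wk))
    for i in range(H_out):
        for j in range(W_out):
            patch = X[i * stride:i * stride + Hk, j * stride:j * stride + Wk]
            cols[i * W_out + j] = patch.ravel()  # row-major flatten of the window
    return cols
```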
Forward Propagation
The forward pass proceeds as follows (the output must be reshaped to $H_{out} \times W_{out}$):
$$
\overset{(H_{out}W_{out} \times 1)}{\mathbf{Y}_{\mathrm{flat}}} = \overset{(H_{out}W_{out} \times H_kW_k)}{\mathbf{X}_{\mathrm{col}}} \cdot \overset{(H_kW_k \times 1)}{\mathbf{K}_{\mathrm{flat}}} = \begin{bmatrix}
\sum_{i}^{H_kW_k} \hat{x}_1^{(i)} k^{(i)} \\
\sum_{i}^{H_kW_k} \hat{x}_2^{(i)} k^{(i)} \\
\vdots \\
\sum_{i}^{H_kW_k} \hat{x}_{H_{out}W_{out}}^{(i)} k^{(i)}
\end{bmatrix} = \begin{bmatrix}
y_1 \\ y_2 \\ \vdots \\ y_{H_{out}W_{out}}
\end{bmatrix}, \quad
y_{H_{out}W_{out}} = \sum_{i}^{H_kW_k} \hat{x}_{H_{out}W_{out}}^{(i)} k^{(i)}
$$
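Continuing the sketch above, the whole forward pass collapses to one matrix-vector product (all sizes are toy values chosen for illustration):

```python
X = np.arange(16.0).reshape(4, 4)   # toy padded input, H = W = 4
K = np.ones((3, 3))                 # toy kernel, Hk = Wk = 3
X_col = im2col_2d(X, 3, 3)          # (H_out*W_out, Hk*Wk) = (4, 9)
K_flat = K.reshape(-1, 1)           # (Hk*Wk, 1)
Y_flat = X_col @ K_flat             # (H_out*W_out, 1)
Y = Y_flat.reshape(2, 2)            # reshape to (H_out, W_out)
```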
This produces a loss:
$$
loss = L\left( \overset{(H_{out}W_{out} \times 1)}{\mathbf{Y}_{\mathrm{flat}}} \right)
$$
The incoming gradient (from the layer above) has the following structure:
$$
\overset{(H_{out}W_{out} \times 1)}{\mathbf{Grad}_{\mathrm{flat}}} = \frac{\partial L(\mathbf{Y}_{\mathrm{flat}})}{\partial \mathbf{Y}_{\mathrm{flat}}} = \begin{bmatrix}
\dfrac{\partial L(\mathbf{Y}_{\mathrm{flat}})}{\partial y_1} \\[4pt]
\dfrac{\partial L(\mathbf{Y}_{\mathrm{flat}})}{\partial y_2} \\
\vdots \\
\dfrac{\partial L(\mathbf{Y}_{\mathrm{flat}})}{\partial y_{H_{out}W_{out}}}
\end{bmatrix} = \begin{bmatrix}
g_1 \\ g_2 \\ \vdots \\ g_{H_{out}W_{out}}
\end{bmatrix}, \quad
g_{H_{out}W_{out}} = \frac{\partial L(\mathbf{Y}_{\mathrm{flat}})}{\partial y_{H_{out}W_{out}}}
$$
Backward Propagation
To obtain the gradient that the convolution layer passes back to its input, simply take the partial derivative of the loss with respect to the last entry of the col matrix, which gives the general term:
$$
\frac{\partial loss}{\partial \hat{x}_{H_{out}W_{out}}^{(H_kW_k)}} = \frac{\partial L(\mathbf{Y}_{\mathrm{flat}})}{\partial \hat{x}_{H_{out}W_{out}}^{(H_kW_k)}} = \frac{\partial L(\mathbf{Y}_{\mathrm{flat}})}{\partial y_{H_{out}W_{out}}} \cdot \frac{\partial \sum_{i}^{H_kW_k} \hat{x}_{H_{out}W_{out}}^{(i)} k^{(i)}}{\partial \hat{x}_{H_{out}W_{out}}^{(H_kW_k)}} = g_{H_{out}W_{out}} \, k^{(H_kW_k)}
$$
Expanding all entries of the gradient matrix gives the formula for the input-side gradient:
$$
\overset{(H_{out}W_{out} \times H_kW_k)}{\mathbf{dX}_{\mathrm{col}}} = \frac{\partial loss}{\partial \mathbf{X}_{\mathrm{col}}} = \begin{bmatrix}
g_1 k^{(1)} & g_1 k^{(2)} & \cdots & g_1 k^{(H_kW_k)} \\
g_2 k^{(1)} & g_2 k^{(2)} & \cdots & g_2 k^{(H_kW_k)} \\
\vdots & \vdots & \ddots & \vdots \\
g_{H_{out}W_{out}} k^{(1)} & g_{H_{out}W_{out}} k^{(2)} & \cdots & g_{H_{out}W_{out}} k^{(H_kW_k)}
\end{bmatrix} = \overset{(H_{out}W_{out} \times 1)}{\mathbf{Grad}_{\mathrm{flat}}} \cdot \left( \overset{(H_kW_k \times 1)}{\mathbf{K}_{\mathrm{flat}}} \right)^T
$$
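In code, continuing the toy example, this is a single outer product (`grad_flat` is a stand-in for the upstream gradient):

```python
grad_flat = np.ones((4, 1))         # stand-in for dL/dY_flat, (H_out*W_out, 1)
dX_col = grad_flat @ K_flat.T       # (H_out*W_out, Hk*Wk), the matrix written above
```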
Finally, the $\mathbf{dX}_{\mathrm{col}}$ matrix is restored to the input shape $\mathbf{X}_{(H \times W)}$ via col2im. Since the col matrix may contain multiple contributions from the same input element (for example, where convolution windows overlap), the values along all paths must be summed during restoration (this summation is built into the col2im algorithm).
$$
\sum \overset{(H_{out}W_{out} \times H_kW_k)}{\mathbf{dX}_{\mathrm{col}}} \to \mathbf{dX}_{(H \times W)}
$$
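A matching loop-based col2im sketch; the `+=` scatter-add performs exactly the summation over overlapping windows described above (again, the name `col2im_2d` is illustrative):

```python
def col2im_2d(dX_col, H, W, Hk, Wk, stride=1):
    """Scatter-add each row of dX_col back onto an (H, W) gradient map;
    overlapping windows accumulate their contributions."""
    H_out = (H - Hk) // stride + 1
    W_out = (W - Wk) // stride + 1
    dX = np.zeros((H, W))
    for i in range(H_out):
        for j in range(W_out):
            dX[i * stride:i * stride + Hk, j * stride:j * stride + Wk] += \
                dX_col[i * W_out + j].reshape(Hk, Wk)
    return dX

dX = col2im_2d(dX_col, 4, 4, 3, 3)  # back to the input shape (H, W)
```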
By the same reasoning, the kernel's gradient is derived as follows:
$$
\begin{aligned}
\frac{\partial loss}{\partial k^{(H_kW_k)}} &= \frac{\partial L(\mathbf{Y}_{\mathrm{flat}})}{\partial k^{(H_kW_k)}} = \sum_{j}^{H_{out}W_{out}} \left( \frac{\partial L(\mathbf{Y}_{\mathrm{flat}})}{\partial y_j} \times \frac{\partial \sum_{i}^{H_kW_k} \hat{x}_j^{(i)} k^{(i)}}{\partial k^{(H_kW_k)}} \right) = \sum_{j}^{H_{out}W_{out}} g_j \hat{x}_j^{(H_kW_k)} \\
\overset{(H_kW_k \times 1)}{\mathbf{dK}_{\mathrm{flat}}} &= \frac{\partial loss}{\partial \mathbf{K}_{\mathrm{flat}}} = \begin{bmatrix}
\sum_{j}^{H_{out}W_{out}} g_j \hat{x}_j^{(1)} \\
\sum_{j}^{H_{out}W_{out}} g_j \hat{x}_j^{(2)} \\
\vdots \\
\sum_{j}^{H_{out}W_{out}} g_j \hat{x}_j^{(H_kW_k)}
\end{bmatrix} = \left( \left( \overset{(H_{out}W_{out} \times 1)}{\mathbf{Grad}_{\mathrm{flat}}} \right)^T \cdot \overset{(H_{out}W_{out} \times H_kW_k)}{\mathbf{X}_{\mathrm{col}}} \right)^T
\end{aligned}
$$
Finally, it is restored to the kernel's shape:
$$
\overset{(H_kW_k \times 1)}{\mathbf{dK}_{\mathrm{flat}}} \to \mathbf{dK}_{(H_k \times W_k)}
$$
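In the toy example this is one transposed product followed by a reshape:

```python
dK_flat = (grad_flat.T @ X_col).T   # (Hk*Wk, 1)
dK = dK_flat.reshape(3, 3)          # restore the kernel shape (Hk, Wk)
```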
Gradient Formulas
In a full convolution layer, the input tensor and its col tensor have the following shapes:
$$
\mathbf{x}_{(N \times C \times H \times W)}, \quad \overset{(N \times C \times H_{out}W_{out} \times H_kW_k)}{\mathbf{x}_{\mathrm{col}}}
$$
The kernels are treated as weight parameters, and each output channel additionally carries a bias:
$$
\mathbf{W}_{(C_{out} \times C \times H_k \times W_k)}, \quad \mathbf{b}_{(1 \times C_{out})}
$$
The forward pass is computed as follows:
$$
\mathbf{y}_{(N \times C_{out} \times H_{out}W_{out})} = \mathrm{np.tensordot}\left( \overset{(N \times C \times H_{out}W_{out} \times H_kW_k)}{\mathbf{x}_{\mathrm{col}}},\ \overset{(C_{out} \times C \times H_kW_k)}{\mathbf{K}_{\mathrm{flat}}},\ \mathrm{axes} = [[1,-1],[1,-1]] \right) + \mathbf{b}_{(1 \times C_{out})}
$$

(Strictly, this tensordot returns shape $N \times H_{out}W_{out} \times C_{out}$; after the bias is broadcast-added, the last two axes are swapped to obtain $N \times C_{out} \times H_{out}W_{out}$.)
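A runnable sketch of this batched forward pass on random toy data (all sizes are illustrative; building `x_col` itself is assumed done by a batched im2col):

```python
import numpy as np

N, C, C_out = 2, 3, 4                       # toy batch / channel sizes
H_out, W_out, Hk, Wk = 5, 5, 3, 3
P, Q = H_out * W_out, Hk * Wk

x_col = np.random.randn(N, C, P, Q)         # assumed output of a batched im2col
K_flat = np.random.randn(C_out, C, Q)       # flattened weights W
b = np.random.randn(1, C_out)

# Contract the channel axis and the Hk*Wk axis of both operands.
y = np.tensordot(x_col, K_flat, axes=[[1, -1], [1, -1]])  # (N, P, C_out)
y = (y + b).transpose(0, 2, 1)              # -> (N, C_out, H_out*W_out)
```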
Referring back to the gradient formulas from the single-input, single-kernel case:
$$
\begin{aligned}
\overset{(H_{out}W_{out} \times H_kW_k)}{\mathbf{dX}_{\mathrm{col}}} &= \overset{(H_{out}W_{out} \times 1)}{\mathbf{Grad}_{\mathrm{flat}}} \cdot \left( \overset{(H_kW_k \times 1)}{\mathbf{K}_{\mathrm{flat}}} \right)^T \\
\overset{(H_kW_k \times 1)}{\mathbf{dK}_{\mathrm{flat}}} &= \left( \left( \overset{(H_{out}W_{out} \times 1)}{\mathbf{Grad}_{\mathrm{flat}}} \right)^T \cdot \overset{(H_{out}W_{out} \times H_kW_k)}{\mathbf{X}_{\mathrm{col}}} \right)^T
\end{aligned}
$$
A convolution layer generates gradients much like a linear layer (see the companion article on deriving neural-network linear-layer gradient formulas); the tensor shapes simply have to match up, which readily yields:
$$
\begin{aligned}
\mathbf{dW}_{(C_{out} \times C \times H_k \times W_k)} &= \mathrm{np.tensordot}\left( \mathbf{GRAD}_{(N \times C_{out} \times H_{out}W_{out})},\ \overset{(N \times C \times H_{out}W_{out} \times H_kW_k)}{\mathbf{x}_{\mathrm{col}}},\ \mathrm{axes} = [[0,2],[0,2]] \right) \\
\mathbf{db}_{(1 \times C_{out})} &= \mathrm{sum}\left( \mathbf{GRAD}_{(N \times C_{out} \times H_{out}W_{out})},\ \mathrm{axes} = [0,2] \right) \\
\overset{(N \times C \times H_{out}W_{out} \times H_kW_k)}{\mathbf{GRAD}_{\mathrm{col}}} &= \mathrm{np.tensordot}\left( \mathbf{GRAD}_{(N \times C_{out} \times H_{out}W_{out})},\ \overset{(C_{out} \times C \times H_kW_k)}{\mathbf{K}_{\mathrm{flat}}},\ \mathrm{axes} = [[1],[0]] \right)
\end{aligned}
$$

(Strictly, the first tensordot returns shape $C_{out} \times C \times H_kW_k$, which is reshaped to $C_{out} \times C \times H_k \times W_k$, and the third returns $N \times H_{out}W_{out} \times C \times H_kW_k$, whose middle axes are swapped to obtain the shape shown.)
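A sketch of the three backward formulas, continuing the batched example above (the trailing `reshape`/`transpose` calls realize the axis bookkeeping just noted):

```python
GRAD = np.random.randn(N, C_out, P)         # upstream gradient dL/dy

# Weight gradient: contract batch and spatial axes, then restore kernel shape.
dW = np.tensordot(GRAD, x_col, axes=[[0, 2], [0, 2]]).reshape(C_out, C, Hk, Wk)

# Bias gradient: sum over batch and spatial positions.
db = GRAD.sum(axis=(0, 2)).reshape(1, C_out)

# Gradient of the col tensor: contract the output-channel axis,
# then reorder to (N, C, H_out*W_out, Hk*Wk) for col2im.
GRAD_col = np.tensordot(GRAD, K_flat, axes=[[1], [0]]).transpose(0, 2, 1, 3)
```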
These are the formulas that generate the convolution layer's input gradient, weight-parameter gradient, and bias-parameter gradient.