Data Structure
- Input data

$$
\mathbf{X}_{(N \times C)} = \begin{bmatrix}
x_1^{(1)} & x_1^{(2)} & \cdots & x_1^{(C)} \\
x_2^{(1)} & x_2^{(2)} & \cdots & x_2^{(C)} \\
\vdots & \vdots & \ddots & \vdots \\
x_N^{(1)} & x_N^{(2)} & \cdots & x_N^{(C)}
\end{bmatrix}
$$
- Mean and variance vectors computed along the batch dimension

$$
\begin{aligned}
\boldsymbol{\mu}_{(1 \times C)} &= \mathbf{X}_{(N \times C)}.\mathrm{mean(axis=0)} \\
\mathbf{var}_{(1 \times C)} &= \mathbf{X}_{(N \times C)}.\mathrm{var(axis=0)}
\end{aligned}
$$
- Learnable scale and shift parameters; the goal is to learn the mean and variance of the data distribution that minimize the loss

$$
\begin{aligned}
\boldsymbol{\gamma}_{(1 \times C)} &= \begin{bmatrix} \gamma^{(1)} & \gamma^{(2)} & \cdots & \gamma^{(C)} \end{bmatrix} \\
\boldsymbol{\beta}_{(1 \times C)} &= \begin{bmatrix} \beta^{(1)} & \beta^{(2)} & \cdots & \beta^{(C)} \end{bmatrix}
\end{aligned}
$$
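These shape conventions map directly onto NumPy broadcasting: subtracting a $(1 \times C)$ row vector from an $(N \times C)$ matrix broadcasts along the batch dimension. A minimal sketch (the sizes `N=4, C=3` and the random data are illustrative only):

```python
import numpy as np

N, C = 4, 3                          # illustrative sizes
rng = np.random.default_rng(0)
X = rng.normal(size=(N, C))          # input batch, one sample per row

mu = X.mean(axis=0, keepdims=True)   # (1, C): per-channel mean over the batch
var = X.var(axis=0, keepdims=True)   # (1, C): per-channel (biased) variance

gamma = np.ones((1, C))              # learnable scale, initialized to 1
beta = np.zeros((1, C))              # learnable shift, initialized to 0

print(X.shape, mu.shape, var.shape)  # (4, 3) (1, 3) (1, 3)
```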
Forward Propagation
- Training-mode forward pass

$$
\mathbf{Y}_{(N \times C)} = \boldsymbol{\gamma}_{(1 \times C)} \times \frac{\mathbf{X}_{(N \times C)} - \boldsymbol{\mu}_{(1 \times C)}}{\sqrt{\mathbf{var}_{(1 \times C)} + \varepsilon}} + \boldsymbol{\beta}_{(1 \times C)}
$$
- Evaluation-mode forward pass. The running statistics are exponential moving averages accumulated during training with the updates below; at evaluation time they replace the batch statistics:

$$
\begin{aligned}
\boldsymbol{\mu}_{\mathrm{running}} &\leftarrow (1 - \mathrm{momentum}) \times \boldsymbol{\mu}_{\mathrm{running}} + \mathrm{momentum} \times \boldsymbol{\mu}_{(1 \times C)} \\
\mathbf{var}_{\mathrm{running}} &\leftarrow (1 - \mathrm{momentum}) \times \mathbf{var}_{\mathrm{running}} + \mathrm{momentum} \times \mathbf{var}_{(1 \times C)} \\
\mathbf{Y}_{(N \times C)} &= \boldsymbol{\gamma}_{(1 \times C)} \times \frac{\mathbf{X}_{(N \times C)} - \boldsymbol{\mu}_{\mathrm{running}}}{\sqrt{\mathbf{var}_{\mathrm{running}} + \varepsilon}} + \boldsymbol{\beta}_{(1 \times C)}
\end{aligned}
$$
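The two modes can be wrapped in a single function. A minimal NumPy sketch (the function name and signature are my own, not from any particular framework):

```python
import numpy as np

def batchnorm_forward(X, gamma, beta, running_mu, running_var,
                      momentum=0.1, eps=1e-5, training=True):
    """Batch-norm forward pass following the formulas above.

    Training mode uses batch statistics and updates the running
    statistics; evaluation mode uses the running statistics instead.
    """
    if training:
        mu = X.mean(axis=0)
        var = X.var(axis=0)
        running_mu = (1 - momentum) * running_mu + momentum * mu
        running_var = (1 - momentum) * running_var + momentum * var
    else:
        mu, var = running_mu, running_var
    X_hat = (X - mu) / np.sqrt(var + eps)   # normalized data
    Y = gamma * X_hat + beta                # scale and shift
    return Y, X_hat, running_mu, running_var
```

With `gamma = 1` and `beta = 0`, the training-mode output has per-channel mean close to 0 and variance close to 1.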
Backward Propagation
- Layer gradient

$$
\mathbf{GRAD}_{(N \times C)} = \frac{\boldsymbol{\gamma}_{(1 \times C)}}{\sqrt{\mathbf{var}_{(1 \times C)} + \varepsilon}} \times \left( \mathbf{grad}_{(N \times C)} - \frac{1}{N} \cdot \mathbf{grad}.\mathrm{sum(axis=0)} - \frac{1}{N} \cdot \hat{\mathbf{X}}_{(N \times C)} \times \left( \mathbf{grad}_{(N \times C)} \times \hat{\mathbf{X}}_{(N \times C)} \right).\mathrm{sum(axis=0)} \right)
$$
- Gradients of the learnable parameters

$$
\begin{aligned}
\mathbf{d\gamma}_{(1 \times C)} &= \left( \mathbf{grad}_{(N \times C)} \times \hat{\mathbf{X}}_{(N \times C)} \right).\mathrm{sum(axis=0)} \\
\mathbf{d\beta}_{(1 \times C)} &= \mathbf{grad}_{(N \times C)}.\mathrm{sum(axis=0)}
\end{aligned}
$$
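The three backward formulas fit in a few lines of NumPy. A sketch (the function name is my own; `X_hat` is the normalized input saved from the forward pass):

```python
import numpy as np

def batchnorm_backward(grad, X_hat, gamma, var, eps=1e-5):
    """Backward pass implementing the layer and parameter gradients above.

    grad  : (N, C) upstream gradient dL/dY
    X_hat : (N, C) normalized input from the forward pass
    """
    N = grad.shape[0]
    dgamma = (grad * X_hat).sum(axis=0)
    dbeta = grad.sum(axis=0)
    dX = (gamma / np.sqrt(var + eps)) * (
        grad
        - grad.sum(axis=0) / N
        - X_hat * (grad * X_hat).sum(axis=0) / N
    )
    return dX, dgamma, dbeta
```

As a check, `dX` agrees with a finite-difference gradient of any scalar loss computed through the forward pass.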
Derivation
Parameter Conventions
- Input data

$$
\mathbf{X}_{(N \times C)} = \begin{bmatrix}
x_1^{(1)} & x_1^{(2)} & \cdots & x_1^{(C)} \\
x_2^{(1)} & x_2^{(2)} & \cdots & x_2^{(C)} \\
\vdots & \vdots & \ddots & \vdots \\
x_N^{(1)} & x_N^{(2)} & \cdots & x_N^{(C)}
\end{bmatrix}
$$
- Batch mean and variance

$$
\begin{aligned}
\boldsymbol{\mu}_{(1 \times C)} &= \begin{bmatrix} \mu^{(1)} & \mu^{(2)} & \cdots & \mu^{(C)} \end{bmatrix} = \frac{1}{N} \begin{bmatrix} \sum_{i=1}^{N} x_i^{(1)} & \sum_{i=1}^{N} x_i^{(2)} & \cdots & \sum_{i=1}^{N} x_i^{(C)} \end{bmatrix} \\
\mu^{(C)} &= \frac{1}{N} \sum_{i=1}^{N} x_i^{(C)}
\end{aligned}
$$

$$
\begin{aligned}
\mathbf{var}_{(1 \times C)} &= \begin{bmatrix} \mathrm{var}^{(1)} & \mathrm{var}^{(2)} & \cdots & \mathrm{var}^{(C)} \end{bmatrix} = \frac{1}{N} \begin{bmatrix} \sum_{j=1}^{N} \left( x_j^{(1)} - \mu^{(1)} \right)^2 & \cdots & \sum_{j=1}^{N} \left( x_j^{(C)} - \mu^{(C)} \right)^2 \end{bmatrix} \\
\mathrm{var}^{(C)} &= \frac{1}{N} \sum_{j=1}^{N} \left( x_j^{(C)} - \mu^{(C)} \right)^2
\end{aligned}
$$
- Learnable parameters

$$
\begin{aligned}
\boldsymbol{\gamma}_{(1 \times C)} &= \begin{bmatrix} \gamma^{(1)} & \gamma^{(2)} & \cdots & \gamma^{(C)} \end{bmatrix} \\
\boldsymbol{\beta}_{(1 \times C)} &= \begin{bmatrix} \beta^{(1)} & \beta^{(2)} & \cdots & \beta^{(C)} \end{bmatrix}
\end{aligned}
$$
- Normalized data

$$
\hat{\mathbf{X}}_{(N \times C)} = \frac{\mathbf{X}_{(N \times C)} - \boldsymbol{\mu}_{(1 \times C)}}{\sqrt{\mathbf{var}_{(1 \times C)} + \varepsilon}} = \begin{bmatrix}
\frac{x_1^{(1)} - \mu^{(1)}}{\sqrt{\mathrm{var}^{(1)} + \varepsilon}} & \frac{x_1^{(2)} - \mu^{(2)}}{\sqrt{\mathrm{var}^{(2)} + \varepsilon}} & \cdots & \frac{x_1^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} \\
\frac{x_2^{(1)} - \mu^{(1)}}{\sqrt{\mathrm{var}^{(1)} + \varepsilon}} & \frac{x_2^{(2)} - \mu^{(2)}}{\sqrt{\mathrm{var}^{(2)} + \varepsilon}} & \cdots & \frac{x_2^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{x_N^{(1)} - \mu^{(1)}}{\sqrt{\mathrm{var}^{(1)} + \varepsilon}} & \frac{x_N^{(2)} - \mu^{(2)}}{\sqrt{\mathrm{var}^{(2)} + \varepsilon}} & \cdots & \frac{x_N^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}}
\end{bmatrix} = \begin{bmatrix}
\hat{x}_1^{(1)} & \cdots & \hat{x}_1^{(C)} \\
\vdots & \ddots & \vdots \\
\hat{x}_N^{(1)} & \cdots & \hat{x}_N^{(C)}
\end{bmatrix}
$$

$$
\hat{x}_N^{(C)} = \frac{x_N^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}}
$$
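A quick numerical sanity check of this definition: every column of $\hat{\mathbf{X}}$ should come out with mean close to 0 and variance close to 1 (the batch below is arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(loc=3.0, scale=2.0, size=(64, 8))  # arbitrary random batch
eps = 1e-5

mu = X.mean(axis=0)
var = X.var(axis=0)                    # biased variance, as in the derivation
X_hat = (X - mu) / np.sqrt(var + eps)  # normalized data

print(np.allclose(X_hat.mean(axis=0), 0.0, atol=1e-10))  # True
print(np.allclose(X_hat.var(axis=0), 1.0, atol=1e-3))    # True
```

The variance is slightly below 1 because of the $\varepsilon$ in the denominator; the deviation is on the order of $\varepsilon / \mathrm{var}$.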
- Output data

$$
\mathbf{Y}_{(N \times C)} = \boldsymbol{\gamma}_{(1 \times C)} \times \hat{\mathbf{X}}_{(N \times C)} + \boldsymbol{\beta}_{(1 \times C)} = \begin{bmatrix}
\gamma^{(1)} \frac{x_1^{(1)} - \mu^{(1)}}{\sqrt{\mathrm{var}^{(1)} + \varepsilon}} + \beta^{(1)} & \cdots & \gamma^{(C)} \frac{x_1^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)} \\
\vdots & \ddots & \vdots \\
\gamma^{(1)} \frac{x_N^{(1)} - \mu^{(1)}}{\sqrt{\mathrm{var}^{(1)} + \varepsilon}} + \beta^{(1)} & \cdots & \gamma^{(C)} \frac{x_N^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)}
\end{bmatrix} = \begin{bmatrix}
y_1^{(1)} & \cdots & y_1^{(C)} \\
\vdots & \ddots & \vdots \\
y_N^{(1)} & \cdots & y_N^{(C)}
\end{bmatrix}
$$

$$
y_N^{(C)} = f_{\gamma^{(C)}, \beta^{(C)}}\left( x_N^{(C)}, \mu^{(C)}, \mathrm{var}^{(C)} \right) = \gamma^{(C)} \frac{x_N^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)}
$$
- Loss output

$$
loss = L\left( \mathbf{Y}_{(N \times C)} \right)
$$
- Loss gradient

$$
\mathbf{grad}_{(N \times C)} = \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial \mathbf{Y}_{(N \times C)}} = \begin{bmatrix}
\frac{\partial L}{\partial y_1^{(1)}} & \cdots & \frac{\partial L}{\partial y_1^{(C)}} \\
\vdots & \ddots & \vdots \\
\frac{\partial L}{\partial y_N^{(1)}} & \cdots & \frac{\partial L}{\partial y_N^{(C)}}
\end{bmatrix} = \begin{bmatrix}
g_1^{(1)} & \cdots & g_1^{(C)} \\
\vdots & \ddots & \vdots \\
g_N^{(1)} & \cdots & g_N^{(C)}
\end{bmatrix}
$$

$$
g_N^{(C)} = \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_N^{(C)}}
$$
Gradients of the Learnable Parameters
To obtain $\mathbf{d\gamma}_{(1 \times C)}$, start with the partial derivative of the loss with respect to $\gamma^{(C)}$, the last element of the vector $\boldsymbol{\gamma}_{(1 \times C)}$:
$$
\frac{\partial loss}{\partial \gamma^{(C)}} = \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial \gamma^{(C)}} = \sum_{k=1}^{N} \left( \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} \times \frac{\partial \left( \gamma^{(C)} \frac{x_k^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)} \right)}{\partial \gamma^{(C)}} \right) = \sum_{k=1}^{N} \left( \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} \times \frac{x_k^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} \right) = \sum_{k=1}^{N} g_k^{(C)} \hat{x}_k^{(C)}
$$
Inspecting $\mathbf{Y}_{(N \times C)}$, the only entries that contain $\gamma^{(C)}$ are those in the last column, $y_1^{(C)}, y_2^{(C)}, \cdots, y_N^{(C)}$; by the multivariate chain rule, the contributions of all these paths must be summed.
Applying the same expression to every element and collecting the results gives the gradient vector $\mathbf{d\gamma}_{(1 \times C)}$:
$$
\mathbf{d\gamma}_{(1 \times C)} = \frac{\partial loss}{\partial \boldsymbol{\gamma}_{(1 \times C)}} = \begin{bmatrix} \sum_{k=1}^{N} g_k^{(1)} \hat{x}_k^{(1)} & \sum_{k=1}^{N} g_k^{(2)} \hat{x}_k^{(2)} & \cdots & \sum_{k=1}^{N} g_k^{(C)} \hat{x}_k^{(C)} \end{bmatrix} = \left( \mathbf{grad}_{(N \times C)} \times \hat{\mathbf{X}}_{(N \times C)} \right).\mathrm{sum(axis=0)}
$$
The gradient vector $\mathbf{d\beta}_{(1 \times C)}$ of the parameter $\boldsymbol{\beta}_{(1 \times C)}$ follows in the same way:
$$
\begin{aligned}
\frac{\partial loss}{\partial \beta^{(C)}} &= \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial \beta^{(C)}} = \sum_{k=1}^{N} \left( \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} \times \frac{\partial \left( \gamma^{(C)} \frac{x_k^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)} \right)}{\partial \beta^{(C)}} \right) = \sum_{k=1}^{N} \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} = \sum_{k=1}^{N} g_k^{(C)} \\
\mathbf{d\beta}_{(1 \times C)} &= \frac{\partial loss}{\partial \boldsymbol{\beta}_{(1 \times C)}} = \begin{bmatrix} \sum_{k=1}^{N} g_k^{(1)} & \sum_{k=1}^{N} g_k^{(2)} & \cdots & \sum_{k=1}^{N} g_k^{(C)} \end{bmatrix} = \mathbf{grad}_{(N \times C)}.\mathrm{sum(axis=0)}
\end{aligned}
$$
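Both closed forms can be checked against a finite-difference approximation. In this sketch the loss $L = \frac{1}{2}\sum Y^2$ is an arbitrary test choice made so that $\mathbf{grad} = \mathbf{Y}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 6, 3
X = rng.normal(size=(N, C))
gamma = rng.normal(size=C)
beta = rng.normal(size=C)
eps = 1e-5

X_hat = (X - X.mean(axis=0)) / np.sqrt(X.var(axis=0) + eps)

def loss(g, b):
    """L = 0.5 * sum(Y**2), an arbitrary scalar test loss."""
    Y = g * X_hat + b
    return 0.5 * (Y ** 2).sum()

grad = gamma * X_hat + beta            # dL/dY = Y for this loss
dgamma = (grad * X_hat).sum(axis=0)    # closed form derived above
dbeta = grad.sum(axis=0)

# finite-difference check of the first component of each gradient
h = 1e-6
g_p = gamma.copy(); g_p[0] += h
b_p = beta.copy();  b_p[0] += h
num_dgamma = (loss(g_p, beta) - loss(gamma, beta)) / h
num_dbeta = (loss(gamma, b_p) - loss(gamma, beta)) / h
print(abs(num_dgamma - dgamma[0]) < 1e-4)  # True
print(abs(num_dbeta - dbeta[0]) < 1e-4)    # True
```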
Layer Gradient
Consider the output matrix together with the definitions of the batch mean and variance:
$$
\mathbf{Y}_{(N \times C)} = \begin{bmatrix}
\gamma^{(1)} \frac{x_1^{(1)} - \mu^{(1)}}{\sqrt{\mathrm{var}^{(1)} + \varepsilon}} + \beta^{(1)} & \cdots & \gamma^{(C)} \frac{x_1^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)} \\
\vdots & \ddots & \vdots \\
\gamma^{(1)} \frac{x_N^{(1)} - \mu^{(1)}}{\sqrt{\mathrm{var}^{(1)} + \varepsilon}} + \beta^{(1)} & \cdots & \gamma^{(C)} \frac{x_N^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)}
\end{bmatrix}
$$

$$
\begin{aligned}
y_N^{(C)} &= f_{\gamma^{(C)}, \beta^{(C)}}\left( x_N^{(C)}, \mu^{(C)}, \mathrm{var}^{(C)} \right) = \gamma^{(C)} \frac{x_N^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)} \\
\mu^{(C)} &= \frac{1}{N} \sum_{i=1}^{N} x_i^{(C)} \\
\mathrm{var}^{(C)} &= \frac{1}{N} \sum_{j=1}^{N} \left( x_j^{(C)} - \mu^{(C)} \right)^2
\end{aligned}
$$
We want the partial derivative of the loss with respect to $x_N^{(C)}$, the last element of the input $\mathbf{X}_{(N \times C)}$, so we must identify every path through which $x_N^{(C)}$ influences the loss:
- Direct path: only the last entry of $\mathbf{Y}_{(N \times C)}$ contains an occurrence of $x_N^{(C)}$ that is independent of $\mu$ and $\mathrm{var}$, giving the path

$$
dx_1 = \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_N^{(C)}} \cdot \frac{\partial f_{\gamma^{(C)}, \beta^{(C)}}\left( x_N^{(C)}, \mu^{(C)}, \mathrm{var}^{(C)} \right)}{\partial \left( x_N^{(C)} - \mu^{(C)} \right)} \cdot \frac{\partial \left( x_N^{(C)} - \mu^{(C)} \right)}{\partial x_N^{(C)}}
$$

- Mean path: $x_N^{(C)}$ appears in $\mu^{(C)}$, and $\mu^{(C)}$ appears only in the last column of $\mathbf{Y}_{(N \times C)}$, giving the path

$$
dx_2 = \sum_{k=1}^{N} \left( \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} \cdot \frac{\partial f_{\gamma^{(C)}, \beta^{(C)}}\left( x_k^{(C)}, \mu^{(C)}, \mathrm{var}^{(C)} \right)}{\partial \mu^{(C)}} \cdot \frac{\partial \mu^{(C)}}{\partial x_N^{(C)}} \right)
$$

- Variance path: $x_N^{(C)}$ appears in $\mathrm{var}^{(C)}$, and $\mathrm{var}^{(C)}$ appears only in the last column of $\mathbf{Y}_{(N \times C)}$; the dependence of $\mathrm{var}^{(C)}$ on $x_N^{(C)}$ through $\mu^{(C)}$ has already been accounted for by the mean path, so $\mu$ is held fixed here:

$$
dx_3 = \sum_{k=1}^{N} \left( \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} \cdot \frac{\partial f_{\gamma^{(C)}, \beta^{(C)}}\left( x_k^{(C)}, \mu^{(C)}, \mathrm{var}^{(C)} \right)}{\partial \mathrm{var}^{(C)}} \cdot \frac{\partial \mathrm{var}^{(C)}}{\partial \left( x_N^{(C)} - \mu^{(C)} \right)} \cdot \frac{\partial \left( x_N^{(C)} - \mu^{(C)} \right)}{\partial x_N^{(C)}} \right)
$$
The total partial derivative is the sum of the three paths:

$$
\frac{\partial loss}{\partial x_N^{(C)}} = dx_1 + dx_2 + dx_3
$$
Direct path $dx_1$

$$
dx_1 = \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_N^{(C)}} \cdot \frac{\partial f_{\gamma^{(C)}, \beta^{(C)}}\left( x_N^{(C)}, \mu^{(C)}, \mathrm{var}^{(C)} \right)}{\partial \left( x_N^{(C)} - \mu^{(C)} \right)} \cdot \frac{\partial \left( x_N^{(C)} - \mu^{(C)} \right)}{\partial x_N^{(C)}} = \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_N^{(C)}} \cdot \frac{\partial \left( \gamma^{(C)} \frac{x_N^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)} \right)}{\partial \left( x_N^{(C)} - \mu^{(C)} \right)} \cdot 1 = g_N^{(C)} \cdot \frac{\gamma^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}}
$$
Mean path $dx_2$

Applying the quotient rule to $\frac{x_k^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}}$ with respect to $\mu^{(C)}$, and using $\frac{\partial \sqrt{\mathrm{var}^{(C)} + \varepsilon}}{\partial \mu^{(C)}} = -\frac{1}{N \sqrt{\mathrm{var}^{(C)} + \varepsilon}} \sum_{j=1}^{N} \left( x_j^{(C)} - \mu^{(C)} \right)$:

$$
\begin{aligned}
dx_2 &= \sum_{k=1}^{N} \left( \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} \cdot \frac{\partial f_{\gamma^{(C)}, \beta^{(C)}}\left( x_k^{(C)}, \mu^{(C)}, \mathrm{var}^{(C)} \right)}{\partial \mu^{(C)}} \cdot \frac{\partial \mu^{(C)}}{\partial x_N^{(C)}} \right) \\
&= \sum_{k=1}^{N} \left( \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} \times \frac{\partial \left( \gamma^{(C)} \frac{x_k^{(C)} - \mu^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} + \beta^{(C)} \right)}{\partial \mu^{(C)}} \right) \cdot \frac{\partial \frac{1}{N} \sum_{i=1}^{N} x_i^{(C)}}{\partial x_N^{(C)}} \\
&= \frac{1}{N} \sum_{k=1}^{N} \left( \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} \times \gamma^{(C)} \frac{ -\sqrt{\mathrm{var}^{(C)} + \varepsilon} + \left( x_k^{(C)} - \mu^{(C)} \right) \frac{1}{N \sqrt{\mathrm{var}^{(C)} + \varepsilon}} \sum_{j=1}^{N} \left( x_j^{(C)} - \mu^{(C)} \right) }{ \left( \sqrt{\mathrm{var}^{(C)} + \varepsilon} \right)^2 } \right)
\end{aligned}
$$
Note that:
$$
\sum_{j=1}^{N} \left( x_j^{(C)} - \mu^{(C)} \right) = \sum_{j=1}^{N} \left( x_j^{(C)} - \frac{1}{N} \sum_{i=1}^{N} x_i^{(C)} \right) = \sum_{j=1}^{N} x_j^{(C)} - N \cdot \frac{1}{N} \sum_{i=1}^{N} x_i^{(C)} = 0
$$
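This zero-sum identity is easy to confirm numerically (a sketch with arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=100)           # one channel of a batch
dev_sum = (x - x.mean()).sum()     # sum of deviations from the batch mean

print(abs(dev_sum) < 1e-10)        # True: the deviations always cancel
```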
so the expression simplifies to:
$$
dx_2 = -\frac{1}{N} \sum_{k=1}^{N} \left( \frac{\partial L\left( \mathbf{Y}_{(N \times C)} \right)}{\partial y_k^{(C)}} \times \frac{\gamma^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} \right) = -\frac{1}{N} \sum_{k=1}^{N} \left( g_k^{(C)} \times \frac{\gamma^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} \right) = -\frac{1}{N} \frac{\gamma^{(C)}}{\sqrt{\mathrm{var}^{(C)} + \varepsilon}} \sum_{k=1}^{N} g_k^{(C)}
$$
CSDN's character limit strikes again; the article is cut off with only the ending remaining.