Category: index of the "PyTorch Functions in Plain Language" series
Related articles:
· PyTorch Functions in Plain Language — torch.nn.init.calculate_gain
· PyTorch Functions in Plain Language — torch.nn.init.uniform_
· PyTorch Functions in Plain Language — torch.nn.init.normal_
· PyTorch Functions in Plain Language — torch.nn.init.constant_
· PyTorch Functions in Plain Language — torch.nn.init.ones_
· PyTorch Functions in Plain Language — torch.nn.init.zeros_
· PyTorch Functions in Plain Language — torch.nn.init.eye_
· PyTorch Functions in Plain Language — torch.nn.init.dirac_
· PyTorch Functions in Plain Language — torch.nn.init.xavier_uniform_
· PyTorch Functions in Plain Language — torch.nn.init.xavier_normal_
· PyTorch Functions in Plain Language — torch.nn.init.kaiming_uniform_
· PyTorch Functions in Plain Language — torch.nn.init.kaiming_normal_
· PyTorch Functions in Plain Language — torch.nn.init.trunc_normal_
· PyTorch Functions in Plain Language — torch.nn.init.orthogonal_
· PyTorch Functions in Plain Language — torch.nn.init.sparse_
All functions in the torch.nn.init module are used to initialize neural network parameters, so they all run in torch.no_grad() mode and are not tracked by autograd.
Following the method described by Glorot, X. & Bengio, Y. in "Understanding the difficulty of training deep feedforward neural networks", this function fills the input tensor with values drawn from a normal distribution. The values of the resulting tensor are sampled from the normal distribution $\mathcal{N}(0, \text{std}^2)$, where the standard deviation is:

$$\text{std}=\text{gain}\times\sqrt{\frac{2}{\text{fan\_in}+\text{fan\_out}}}$$

This method is also known as Glorot initialization.
Syntax
torch.nn.init.xavier_normal_(tensor, gain=1.0)
Parameters
· tensor: [Tensor] an n-dimensional torch.Tensor
· gain: [float] an optional scaling factor
Return value
A torch.Tensor; the input tensor is also updated in place.
Example
import torch
import torch.nn as nn

w = torch.empty(3, 5)
nn.init.xavier_normal_(w)
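To see the formula above in action, we can compare the empirical standard deviation of a large initialized tensor against the theoretical value gain × sqrt(2 / (fan_in + fan_out)). For a 2-D tensor, fan_in is the number of columns and fan_out the number of rows. A minimal sketch (tensor sizes here are arbitrary choices for illustration):

```python
import math
import torch
import torch.nn as nn

# A large 2-D weight so the empirical std is close to the theoretical one.
w = torch.empty(500, 800)
nn.init.xavier_normal_(w, gain=1.0)

# For a 2-D tensor: fan_in = input features (columns),
# fan_out = output features (rows).
fan_in, fan_out = w.size(1), w.size(0)
expected_std = 1.0 * math.sqrt(2.0 / (fan_in + fan_out))

print(f"expected std: {expected_std:.4f}")
print(f"observed std: {w.std().item():.4f}")
```

With 400,000 samples, the observed standard deviation should match the expected one to about three decimal places.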
函数实现
def xavier_normal_(tensor: Tensor, gain: float = 1.) -> Tensor:
r"""Fills the input `Tensor` with values according to the method
described in `Understanding the difficulty of training deep feedforward
neural networks` - Glorot, X. & Bengio, Y. (2010), using a normal
distribution. The resulting tensor will have values sampled from
:math:`\mathcal{N}(0, \text{std}^2)` where
.. math::
\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}
Also known as Glorot initialization.
Args:
tensor: an n-dimensional `torch.Tensor`
gain: an optional scaling factor
Examples:
>>> w = torch.empty(3, 5)
>>> nn.init.xavier_normal_(w)
"""
fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
std = gain * math.sqrt(2.0 / float(fan_in + fan_out))
return _no_grad_normal_(tensor, 0., std)