PyTorch Functions Explained: torch.nn.Linear

Category: 《深入浅出Pytorch函数》 (PyTorch Functions Explained) series index


Applies a linear transformation to the input data: $y = xA^T + b$

Syntax

torch.nn.Linear(in_features, out_features, bias=True, device=None, dtype=None)

Parameters

  • in_features: [int] the size of each input sample
  • out_features: [int] the size of each output sample
  • bias: [bool] if set to False, the layer will not learn an additive bias. Default: True
  • device / dtype: optional factory keywords specifying the device and data type on which the weight and bias are created (see the usage sketch after this list)
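
A minimal usage sketch of the constructor arguments above (my own example, not from the original post), assuming CPU and float64 are acceptable choices:

>>> import torch
>>> from torch import nn
>>> # bias=False: the layer computes y = xA^T only, and m.bias is None
>>> m = nn.Linear(20, 30, bias=False)
>>> m.bias is None
True
>>> # device/dtype control where and in which precision the parameters are created
>>> m64 = nn.Linear(20, 30, device='cpu', dtype=torch.float64)
>>> m64.weight.dtype
torch.float64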

Shape

  • Input: (N, in_features), where N is the batch size. As the docstring below notes, any number of leading dimensions is supported; only the last dimension must equal in_features (see the sketch after this list)
  • Output: (N, out_features), with all leading dimensions unchanged
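
A short shape sketch (my own), showing that the layer acts only on the last dimension:

>>> m = nn.Linear(20, 30)
>>> m(torch.randn(128, 20)).shape        # (N, in_features) -> (N, out_features)
torch.Size([128, 30])
>>> m(torch.randn(4, 16, 20)).shape      # extra leading dimensions pass through
torch.Size([4, 16, 30])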

Variables

  • weight: the learnable weights of the module, of shape (out_features, in_features) (see the inspection sketch after this list)
  • bias: the learnable bias of the module, of shape (out_features)
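
A quick inspection sketch (mine, not from the original post) confirming those shapes:

>>> m = nn.Linear(20, 30)
>>> m.weight.shape        # (out_features, in_features)
torch.Size([30, 20])
>>> m.bias.shape          # (out_features,)
torch.Size([30])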

Example

>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
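
Continuing this example, a short check (my own sketch) that the output matches x A^T + b up to floating-point rounding:

>>> manual = input @ m.weight.T + m.bias   # x A^T + b computed by hand
>>> torch.allclose(output, manual, atol=1e-6)
True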

Implementation

# Imports added so this excerpt (from torch/nn/modules/linear.py) runs standalone
import math

import torch
from torch import Tensor
from torch.nn import Module, Parameter, functional as F, init


class Linear(Module):
    r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`

    This module supports :ref:`TensorFloat32<tf32_on_ampere>`.

    On certain ROCm devices, when using float16 inputs this module will use :ref:`different precision<fp16_on_mi200>` for backward.

    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input: :math:`(*, H_{in})` where :math:`*` means any number of
          dimensions including none and :math:`H_{in} = \text{in\_features}`.
        - Output: :math:`(*, H_{out})` where all but the last dimension
          are the same shape as the input and :math:`H_{out} = \text{out\_features}`.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias:   the learnable bias of the module of shape :math:`(\text{out\_features})`.
                If :attr:`bias` is ``True``, the values are initialized from
                :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
                :math:`k = \frac{1}{\text{in\_features}}`

    Examples::

        >>> m = nn.Linear(20, 30)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 30])
    """
    __constants__ = ['in_features', 'out_features']
    in_features: int
    out_features: int
    weight: Tensor

    def __init__(self, in_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        # Setting a=sqrt(5) in kaiming_uniform is the same as initializing with
        # uniform(-1/sqrt(in_features), 1/sqrt(in_features)). For details, see
        # https://github.com/pytorch/pytorch/issues/57109
        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input: Tensor) -> Tensor:
        return F.linear(input, self.weight, self.bias)

    def extra_repr(self) -> str:
        return 'in_features={}, out_features={}, bias={}'.format(
            self.in_features, self.out_features, self.bias is not None
        )
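
The comment in reset_parameters says that kaiming_uniform_ with a=math.sqrt(5) is equivalent to sampling from U(-1/sqrt(in_features), 1/sqrt(in_features)). A quick empirical check of that bound (my own sketch, not part of the PyTorch source):

>>> m = nn.Linear(20, 30)
>>> bound = 1 / math.sqrt(20)              # 1 / sqrt(in_features)
>>> bool(m.weight.abs().max() <= bound)    # weights drawn within the bound
True
>>> bool(m.bias.abs().max() <= bound)      # bias uses the same fan_in, hence the same bound
True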