DAY01: [PyTorch] Tensors

1. Introduction to Tensors

1.1 Variable

Variable is a data type in torch.autograd. It mainly wraps a Tensor so that automatic differentiation can be performed on it.

  • data: the wrapped Tensor
  • grad: the gradient of data
  • grad_fn: the Function that created the Tensor; this is the key to automatic differentiation
  • requires_grad: whether a gradient is required
  • is_leaf: whether it is a leaf node (tensor)

1.2 Tensor

Since PyTorch 0.4.0, Variable has been merged into Tensor, so a tensor carries both sets of attributes (see the example after this list).

  • dtype: data type of the tensor, e.g. torch.FloatTensor, torch.cuda.FloatTensor
  • shape: shape of the tensor, e.g. (64, 3, 224, 224)
  • device: the device the tensor lives on (CPU/GPU); the key to acceleration
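
As a quick illustration of these attributes, here is a minimal sketch (not part of the original notes); it also sets up the imports that the later snippets assume:

python
import torch
import numpy as np   # used by the numpy-based examples below

x = torch.tensor([1.0, 2.0], requires_grad=True)   # a leaf tensor
y = (x * 2).sum()                                   # created by an operation

print(x.dtype, x.shape, x.device)   # torch.float32 torch.Size([2]) cpu
print(x.is_leaf, y.is_leaf)         # True False
print(x.grad_fn, y.grad_fn)         # None <SumBackward0 object at ...>

y.backward()
print(x.grad)                       # tensor([2., 2.])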

2. Creating Tensors

2.1 Direct creation

2.1.1 torch.tensor()

python
# torch.tensor(
#     data,
#     dtype=None,
#     device=None,
#     requires_grad=False,
#     pin_memory=False
# )

Purpose: create a tensor from data

  • data: the data; can be a list or a numpy array
  • dtype: data type; defaults to the same type as data
  • device: the device, cuda/cpu
  • requires_grad: whether a gradient is required
  • pin_memory: whether to store the tensor in pinned (page-locked) memory
python
flag = True
if flag:
    arr = np.ones((3, 3))
    print("数据类型:", arr.dtype)

    t = torch.tensor(arr)
    print(t)

"""
Output:
dtype: float64
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]], dtype=torch.float64)
"""

2.1.2 torch.from_numpy(ndarray)

Purpose: create a tensor from a numpy ndarray

Note:

  1. A tensor created with torch.from_numpy shares memory with the original ndarray: modifying one also modifies the other
python
flag = True
if flag:
    arr1 = np.array([[1, 2, 3], [4, 5, 6]])
    t = torch.from_numpy(arr1)
    print("array:", arr1)
    print("tensor:", t)

    print('-'*100)

    arr1[0, 0] = 0
    print("array:", arr1)
    print("tensor:", t)

    print('-'*100)

    t[0, 0] = -1
    print("array:", arr1)
    print("tensor:", t)

"""
Output:
array: [[1 2 3]
 [4 5 6]]
tensor: tensor([[1, 2, 3],
        [4, 5, 6]], dtype=torch.int32)
----------------------------------------------------------------------------------------------------
array: [[0 2 3]
 [4 5 6]]
tensor: tensor([[0, 2, 3],
        [4, 5, 6]], dtype=torch.int32)
----------------------------------------------------------------------------------------------------
array: [[-1  2  3]
 [ 4  5  6]]
tensor: tensor([[-1,  2,  3],
        [ 4,  5,  6]], dtype=torch.int32)
"""

2.2 Creation from numeric values

2.2.1 torch.zeros()

python
# torch.zeros(
#     *size,
#     out=None,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

Purpose: create an all-zero tensor with shape size

  • size: shape of the tensor, e.g. (3, 3), (3, 224, 224)
  • out: the output tensor
  • layout: memory layout, strided or sparse_coo
  • device: the device, gpu/cpu
  • requires_grad: whether a gradient is required
python
flag = True
if flag:
    t2 = torch.tensor([1])
    t = torch.zeros((3, 3), out=t2)

    print("t:", t)
    print("t2:", t2)
    print(id(t), id(t2), id(t) == id(t2))

"""
Output:
t: tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]])
t2: tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]])
1266643551632 1266643551632 True
"""

2.2.2 torch.zeros_like()

python
# torch.zeros_like(
#     input,
#     dtype=None,
#     layout=None,
#     device=None,
#     requires_grad=False
# )

Purpose: create an all-zero tensor with the same shape as input

  • input: create an all-zero tensor with the same shape as this tensor
  • dtype: data type
  • layout: memory layout
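
torch.zeros_like() is not demonstrated in the original notes; a minimal sketch (the input tensor here is just an assumed placeholder):

python
if True:
    t_input = torch.empty(2, 3)
    t = torch.zeros_like(t_input)
    print(t)

"""
Output:
tensor([[0., 0., 0.],
        [0., 0., 0.]])
"""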

2.2.3 torch.ones()

python
# torch.ones(
#     *size,
#     out=None,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

2.2.4 torch.ones_like()

python
# torch.ones_like(
#     input,
#     dtype=None,
#     layout=None,
#     device=None,
#     requires_grad=False
# )

Purpose: create an all-one tensor, with shape size (torch.ones) or with the same shape as input (torch.ones_like)

  • size: shape of the tensor
  • dtype: data type
  • layout: memory layout
  • device: the device, cpu/gpu
  • requires_grad: whether a gradient is required
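
A minimal sketch for torch.ones() and torch.ones_like() (not in the original notes):

python
if True:
    t = torch.ones(2, 3)
    t_like = torch.ones_like(torch.empty(3, 2))
    print(t)
    print(t_like)

"""
Output:
tensor([[1., 1., 1.],
        [1., 1., 1.]])
tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
"""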

2.2.5 torch.full()

python
# torch.full(
#     size,
#     fill_value,
#     out=None,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )
python
flag = True
if flag:
    t = torch.full((3, 3), 1)
    print(t)

"""
Output:
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]])
"""

2.2.6 torch.full_like()

python
# torch.full_like(
#     input,
#     fill_value,
#     dtype=None,
#     layout=None,
#     device=None,
#     requires_grad=False
# )

Purpose: create a tensor with the same shape as input, filled with fill_value

  • input: the tensor whose shape is used
  • fill_value: the fill value
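
A minimal sketch for torch.full_like() (not in the original notes):

python
if True:
    t = torch.zeros((2, 3))
    t_full = torch.full_like(t, 5.)
    print(t_full)

"""
Output:
tensor([[5., 5., 5.],
        [5., 5., 5.]])
"""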

2.2.7 torch.arange()

python
# torch.arange(
#     start=0,
#     end,
#     step=1,
#     out=None,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

Purpose: create a 1-D tensor with evenly spaced (arithmetic-progression) values

Note:

  1. The value range is [start, end)
  • start: start of the sequence
  • end: end of the sequence (exclusive)
  • step: common difference, default 1
python
flag = True
if flag:
    t = torch.arange(2, 10, 2)
    print(t)

"""
Output:
tensor([2, 4, 6, 8])
"""

2.2.8 torch.linspace()

python
# torch.linspace(
#     start,
#     end,
#     steps,
#     out=None,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

Purpose: create a 1-D tensor with evenly spaced values

Note:

  1. The value range is [start, end]
  • start: start of the sequence
  • end: end of the sequence
  • steps: number of elements
python
flag = True
if flag:
    t = torch.linspace(2, 10, 6)
    print(t)

"""
Output:
tensor([ 2.0000,  3.6000,  5.2000,  6.8000,  8.4000, 10.0000])
"""

2.2.9 torch.logspace()

python
# torch.logspace(
#     start,
#     end,
#     steps,
#     base=10.0,
#     out=None,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

Purpose: create a 1-D tensor with values evenly spaced on a logarithmic scale

Note:

  1. The length is steps and the base is base
  • start: start of the sequence (as an exponent of base)
  • end: end of the sequence (as an exponent of base)
  • steps: number of elements
  • base: base of the logarithm, default 10
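
A minimal sketch for torch.logspace() (not in the original notes); the values run from base**start to base**end:

python
if True:
    t = torch.logspace(0, 2, 3)
    print(t)

"""
Output:
tensor([  1.,  10., 100.])
"""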

2.2.10 torch.eye()

python
# torch.eye(
#     n,
#     m=None,
#     out=None,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

Purpose: create an identity matrix (2-D tensor) with ones on the diagonal

Note:

  1. Square by default
  • n: number of rows
  • m: number of columns
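
A minimal sketch for torch.eye() (not in the original notes):

python
if True:
    t = torch.eye(3)
    t1 = torch.eye(2, 4)
    print(t)
    print(t1)

"""
Output:
tensor([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]])
tensor([[1., 0., 0., 0.],
        [0., 1., 0., 0.]])
"""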

2.3 Creation from probability distributions

2.3.1 torch.normal()

python
# torch.normal(mean, std, *, generator=None, out=None)
# torch.normal(mean, std, size, *, out=None)  # size is required when both mean and std are scalars

Purpose: draw samples from a normal (Gaussian) distribution

  • mean: the mean
  • std: the standard deviation
python
flag = True
if flag:
    # (1) mean: tensor, std: tensor
    mean = torch.arange(1, 5, dtype=torch.float)
    std = torch.arange(1, 5, dtype=torch.float)
    t_normal = torch.normal(mean, std)
    print("mean:", mean)
    print("std:", std)
    print("t_normal:", t_normal)
    print('-'*100)

    # (2) mean: scalar, std: scalar
    t_normal1 = torch.normal(0., 1., size=(4,))
    print("t_normal1:", t_normal1)
    print('-'*100)

    # (3) mean: scalar, std: tensor
    mean = 0
    std = torch.arange(1, 5, dtype=torch.float)
    t_normal3 = torch.normal(mean, std)
    print("mean:", mean)
    print("std:", std)
    print("t_normal3:", t_normal3)
    print('-'*100)
    

    # (4) mean: tensor, std: scalar
    mean = torch.arange(1, 5, dtype=torch.float)
    std = 1
    t_normal2 = torch.normal(mean, std)
    print("mean:", mean)
    print("std:", std)
    print("t_normal2:", t_normal2)

"""
Output:
mean: tensor([1., 2., 3., 4.])
std: tensor([1., 2., 3., 4.])
t_normal: tensor([0.4311, 4.0635, 3.1900, 1.2404])
----------------------------------------------------------------------------------------------------
t_normal1: tensor([-0.1281, -1.9552,  1.5685,  0.5102])
----------------------------------------------------------------------------------------------------
mean: 0
std: tensor([1., 2., 3., 4.])
t_normal3: tensor([ 1.1218,  3.5642, -4.7367,  2.7311])
----------------------------------------------------------------------------------------------------
mean: tensor([1., 2., 3., 4.])
std: 1
t_normal2: tensor([1.1351, 2.5704, 3.0849, 6.0902])
"""

2.3.2 torch.randn()

python
# torch.randn(
#     *size,
#     out=None,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

2.3.3 torch.randn_like()

python
# torch.randn_like(
#     input,
#     dtype=None,
#     layout=None,
#     device=None,
#     requires_grad=False
# )

Purpose: draw samples from the standard normal distribution

  • size: shape of the tensor
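
A minimal sketch for torch.randn() and torch.randn_like() (not in the original notes; the sampled values differ from run to run):

python
if True:
    t = torch.randn((2, 3))         # samples from N(0, 1)
    t_like = torch.randn_like(t)    # same shape and dtype as t
    print(t, t.shape)
    print(t_like, t_like.shape)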

2.3.4 torch.rand()

python
# torch.rand(
#     *size,
#     out=None,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

2.3.5 torch.rand_like()

python
# torch.rand_like(
#     input,
#     dtype=None,
#     layout=None,
#     device=None,
#     requires_grad=False   
# )

Purpose: draw samples from a uniform distribution on the interval [0, 1)
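
A minimal sketch for torch.rand() and torch.rand_like() (not in the original notes; the sampled values differ from run to run):

python
if True:
    t = torch.rand((2, 3))          # uniform samples in [0, 1)
    t_like = torch.rand_like(t)
    print(t)
    print(t_like)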

2.3.6 torch.randint()

python
# torch.randint(
#     low=0,
#     high=10,
#     size=(2, 3),
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

2.3.7 torch.randint_like()

python
# torch.randint_like(
#     input,
#     low=0,
#     high=10,
#     dtype=None,
#     layout=None,
#     device=None,
#     requires_grad=False
# )

Purpose: draw integers uniformly from the interval [low, high)

  • size: shape of the tensor
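
A minimal sketch for torch.randint() (not in the original notes; the sampled values differ from run to run):

python
if True:
    t = torch.randint(0, 10, size=(2, 3))   # integers in [0, 10)
    print(t)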

2.3.8 torch.randperm()

python
# torch.randperm(
#     n,
#     dtype=None,
#     layout=torch.strided,
#     device=None,
#     requires_grad=False
# )

Purpose: generate a random permutation of the integers from 0 to n-1

  • n: length of the tensor
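
A minimal sketch for torch.randperm() (not in the original notes); it is commonly used to shuffle indices:

python
if True:
    t = torch.randperm(5)   # a random ordering of 0..4, e.g. tensor([3, 0, 4, 1, 2])
    print(t)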

2.3.9 torch.bernoulli()

python
# torch.bernoulli(
#     input,
#     *,
#     generator=None,
#     out=None
# )

Purpose: draw samples from a Bernoulli distribution (0-1 / two-point distribution), using input as the probabilities

  • input: the probability values
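
A minimal sketch for torch.bernoulli() (not in the original notes; each element is drawn independently, so the result differs from run to run):

python
if True:
    p = torch.tensor([0.1, 0.5, 0.9])
    t = torch.bernoulli(p)   # each entry is 1. with its given probability, otherwise 0.
    print(t)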

3. Tensor Operations

3.1 Concatenation and splitting

3.1.1 torch.cat()

python
# torch.cat(
#     tensors,
#     dim=0,
#     out=None
# )

Purpose: concatenate tensors along an existing dimension dim

  • tensors: sequence of tensors
  • dim: dimension along which to concatenate
python
if True:
    t = torch.ones((2, 3))
    t0 = torch.cat([t, t], dim=0)
    t1 = torch.cat([t, t], dim=1)
    t2 = torch.cat([t, t, t], dim=1)
    print("t:", t, t.shape)
    print('-'*100)
    print("t0:", t0, t0.shape)
    print('-'*100)
    print("t1:", t1, t1.shape)
    print('-'*100)
    print("t2:", t2, t2.shape)

"""
Output:
t: tensor([[1., 1., 1.],
        [1., 1., 1.]]) torch.Size([2, 3])
----------------------------------------------------------------------------------------------------
t0: tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]]) torch.Size([4, 3])
----------------------------------------------------------------------------------------------------
t1: tensor([[1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1.]]) torch.Size([2, 6])
----------------------------------------------------------------------------------------------------
t2: tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1.],
        [1., 1., 1., 1., 1., 1., 1., 1., 1.]]) torch.Size([2, 9])
"""

3.1.2 torch.stack()

python
# torch.stack(
#     tensors,
#     dim=0,
#     out=None
# )

Purpose: stack tensors along a newly created dimension dim

  • tensors: sequence of tensors
  • dim: the new dimension along which to stack
python
if True:
    t = torch.ones((2, 3))
    t0 = torch.stack([t, t], dim=0)
    t1 = torch.stack([t, t], dim=1)
    t2 = torch.stack([t, t, t], dim=0)
    print("t:", t, t.shape)
    print('-'*100)
    print("t0:", t0, t0.shape)
    print('-'*100)
    print("t1:", t1, t1.shape)
    print('-'*100)
    print("t2:", t2, t2.shape)

"""
Output:
t: tensor([[1., 1., 1.],
        [1., 1., 1.]]) torch.Size([2, 3])
----------------------------------------------------------------------------------------------------
t0: tensor([[[1., 1., 1.],
         [1., 1., 1.]],

        [[1., 1., 1.],
         [1., 1., 1.]]]) torch.Size([2, 2, 3])
----------------------------------------------------------------------------------------------------
t1: tensor([[[1., 1., 1.],
         [1., 1., 1.]],

        [[1., 1., 1.],
         [1., 1., 1.]]]) torch.Size([2, 2, 3])
----------------------------------------------------------------------------------------------------
t2: tensor([[[1., 1., 1.],
         [1., 1., 1.]],

        [[1., 1., 1.],
         [1., 1., 1.]],

        [[1., 1., 1.],
         [1., 1., 1.]]]) torch.Size([3, 2, 3])
"""

3.1.3 torch.chunk()

python
# torch.chunk(
#     input,
#     chunks,
#     dim=0
# )

Purpose: split a tensor into equal chunks along dimension dim

Returns: a list of tensors

Note:

  1. If the size is not evenly divisible, the last chunk is smaller than the others
  • input: the tensor to split
  • chunks: the number of chunks
  • dim: the dimension along which to split
python
if True:
    a = torch.ones((2, 5))
    list_of_tensors = torch.chunk(a, dim=1, chunks=2)

    for idx, t in enumerate(list_of_tensors):
        print("t{}:".format(idx), t, t.shape)

"""
Output:
t0: tensor([[1., 1., 1.],
        [1., 1., 1.]]) torch.Size([2, 3])
t1: tensor([[1., 1.],
        [1., 1.]]) torch.Size([2, 2])
"""

3.1.4 torch.split()

python
# torch.split(
#     tensor,
#     split_size_or_sections,
#     dim=0
# )

Purpose: split a tensor along dimension dim

Returns: a list of tensors

  • tensor: the tensor to split
  • split_size_or_sections: an int gives the length of each chunk; a list splits into chunks whose lengths are the list elements
  • dim: the dimension along which to split
python
if True:
    t = torch.ones((2, 5))
    list_of_tensors = torch.split(t, 2, dim=1)
    for idx, t in enumerate(list_of_tensors):
        print("t{}:".format(idx), t, t.shape)

    print('-'*100)

    t = torch.ones((2, 5))
    list_of_tensors1 = torch.split(t, [2, 1, 2], dim=1)
    for idx, t in enumerate(list_of_tensors1):
        print("t{}:".format(idx), t, t.shape)

"""
Output:
t0: tensor([[1., 1.],
        [1., 1.]]) torch.Size([2, 2])
t1: tensor([[1., 1.],
        [1., 1.]]) torch.Size([2, 2])
t2: tensor([[1.],
        [1.]]) torch.Size([2, 1])
----------------------------------------------------------------------------------------------------
t0: tensor([[1., 1.],
        [1., 1.]]) torch.Size([2, 2])
t1: tensor([[1.],
        [1.]]) torch.Size([2, 1])
t2: tensor([[1., 1.],
        [1., 1.]]) torch.Size([2, 2])
"""

3.2 Indexing

3.2.1 torch.index_select()

python
# torch.index_select(
#     input,
#     dim,
#     index,
#     out=None
# )

Purpose: select data along dimension dim according to index

Returns: a tensor assembled from the data selected by index

  • input: the tensor to index
  • dim: the dimension to index along
  • index: the indices of the data to select
python
if True:
    t = torch.randint(0, 9, size=(3, 3))
    idx = torch.tensor([0, 2], dtype=torch.long) # float indices are not allowed here
    t_select = torch.index_select(t, dim=0, index=idx)
    print("t:", t, t.shape)
    print('-'*100)
    print("t_select:", t_select, t_select.shape)

"""
Output:
t: tensor([[6, 5, 3],
        [7, 1, 6],
        [0, 2, 1]]) torch.Size([3, 3])
----------------------------------------------------------------------------------------------------
t_select: tensor([[6, 5, 3],
        [0, 2, 1]]) torch.Size([2, 3])
"""

3.2.2 torch.masked_select()

python
# torch.masked_select(
#     input,
#     mask,
#     out=None
# )

Purpose: select the elements where mask is True

Returns: a 1-D tensor

  • input: the tensor to index
  • mask: a boolean tensor with the same shape as input
python
if True:
    t = torch.randint(0, 9, size=(3, 3))
    mask = t.ge(5) # elements greater than or equal to 5 are True
    t_select = torch.masked_select(t, mask)
    print("t:", t, t.shape)
    print('-'*100)
    print("mask:", mask, mask.shape)
    print('-'*100)
    print("t_select:", t_select, t_select.shape)

"""
Output:
t: tensor([[2, 8, 1],
        [2, 5, 4],
        [7, 2, 1]]) torch.Size([3, 3])
----------------------------------------------------------------------------------------------------
mask: tensor([[False,  True, False],
        [False,  True, False],
        [ True, False, False]]) torch.Size([3, 3])
----------------------------------------------------------------------------------------------------
t_select: tensor([8, 5, 7]) torch.Size([3])
"""

3.3 Transformations

3.3.1 torch.reshape()

python
# torch.reshape(
#     input,
#     shape
# )

Purpose: change the shape of a tensor

Note:

  1. When the tensor is contiguous in memory, the new tensor shares the underlying data with input
  • input: the tensor to reshape
  • shape: the shape of the new tensor
python
if True:
    t = torch.randperm(8)
    t_reshape = torch.reshape(t, (2, 4))
    print("t:", t, t.shape)
    print('-'*100)
    print("t_reshape:", t_reshape, t_reshape.shape)
    print('-'*100)

    t[0] = 100
    print("t:", t, t.shape, id(t.data))
    print('-'*100)
    print("t_reshape:", t_reshape, t_reshape.shape, id(t_reshape.data))

"""
Output:
t: tensor([2, 1, 7, 6, 5, 4, 3, 0]) torch.Size([8])
----------------------------------------------------------------------------------------------------
t_reshape: tensor([[2, 1, 7, 6],
        [5, 4, 3, 0]]) torch.Size([2, 4])
----------------------------------------------------------------------------------------------------
t: tensor([100,   1,   7,   6,   5,   4,   3,   0]) torch.Size([8]) 2475942463360
----------------------------------------------------------------------------------------------------
t_reshape: tensor([[100,   1,   7,   6],
        [  5,   4,   3,   0]]) torch.Size([2, 4]) 2475942598192
"""

3.3.2 torch.transpose()

python
# torch.transpose(
#     input,
#     dim0,
#     dim1
# )

Purpose: swap two dimensions of a tensor

  • input: the tensor to transform
  • dim0: the first dimension to swap
  • dim1: the second dimension to swap
python
if True:
    t = torch.rand((2, 3, 4))
    t_transpose = torch.transpose(t, dim0=1, dim1=2)
    print("t:", t, t.shape)
    print('-'*100)
    print("t_transpose:", t_transpose, t_transpose.shape)

"""
Output:
t: tensor([[[0.1896, 0.7278, 0.9688, 0.8187],
         [0.9657, 0.1515, 0.0736, 0.0501],
         [0.2745, 0.4241, 0.6331, 0.8326]],

        [[0.3693, 0.2227, 0.2960, 0.7170],
         [0.7384, 0.3133, 0.3174, 0.1396],
         [0.3428, 0.8153, 0.6683, 0.9056]]]) torch.Size([2, 3, 4])
----------------------------------------------------------------------------------------------------
t_transpose: tensor([[[0.1896, 0.9657, 0.2745],
         [0.7278, 0.1515, 0.4241],
         [0.9688, 0.0736, 0.6331],
         [0.8187, 0.0501, 0.8326]],

        [[0.3693, 0.7384, 0.3428],
         [0.2227, 0.3133, 0.8153],
         [0.2960, 0.3174, 0.6683],
         [0.7170, 0.1396, 0.9056]]]) torch.Size([2, 4, 3])
"""

3.3.3 torch.t()

python
# torch.t(
#     input
# )

Purpose: transpose a 2-D tensor; for a matrix this is equivalent to torch.transpose(input, 0, 1)
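
A minimal sketch for torch.t() (not in the original notes):

python
if True:
    t = torch.arange(6).reshape(2, 3)
    print(t)
    print(torch.t(t))

"""
Output:
tensor([[0, 1, 2],
        [3, 4, 5]])
tensor([[0, 3],
        [1, 4],
        [2, 5]])
"""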

3.3.4 torch.squeeze()

python
# torch.squeeze(
#     input,
#     dim=None,
#     out=None
# )

Purpose: remove (squeeze) dimensions (axes) of length 1

  • dim: if None, all axes of length 1 are removed; if a dimension is specified, it is removed only if its length is 1
python
if True:
    t = torch.rand((2, 1, 3, 1, 4))
    t_squeeze = torch.squeeze(t)
    print("t:", t, t.shape)
    print('-'*100)
    print("t_squeeze:", t_squeeze, t_squeeze.shape)
    print('-'*100)

    t_squeeze1 = torch.squeeze(t, dim=0)
    print("t_squeeze1:", t_squeeze1, t_squeeze1.shape)
    print('-'*100)

    t_squeeze2 = torch.squeeze(t, dim=1)
    print("t_squeeze2:", t_squeeze2, t_squeeze2.shape)

"""
Output:
t: tensor([[[[[0.8422, 0.1384, 0.6145, 0.5869]],

          [[0.4118, 0.7120, 0.0601, 0.1063]],

          [[0.6059, 0.9717, 0.7325, 0.2440]]]],



        [[[[0.3857, 0.4698, 0.9613, 0.1157]],

          [[0.7650, 0.8241, 0.7231, 0.4109]],

          [[0.1844, 0.3643, 0.4845, 0.7820]]]]]) torch.Size([2, 1, 3, 1, 4])
----------------------------------------------------------------------------------------------------
t_squeeze: tensor([[[0.8422, 0.1384, 0.6145, 0.5869],
         [0.4118, 0.7120, 0.0601, 0.1063],
         [0.6059, 0.9717, 0.7325, 0.2440]],

        [[0.3857, 0.4698, 0.9613, 0.1157],
         [0.7650, 0.8241, 0.7231, 0.4109],
         [0.1844, 0.3643, 0.4845, 0.7820]]]) torch.Size([2, 3, 4])
----------------------------------------------------------------------------------------------------
t_squeeze1: tensor([[[[[0.8422, 0.1384, 0.6145, 0.5869]],

          [[0.4118, 0.7120, 0.0601, 0.1063]],

          [[0.6059, 0.9717, 0.7325, 0.2440]]]],



        [[[[0.3857, 0.4698, 0.9613, 0.1157]],

          [[0.7650, 0.8241, 0.7231, 0.4109]],

          [[0.1844, 0.3643, 0.4845, 0.7820]]]]]) torch.Size([2, 1, 3, 1, 4])
----------------------------------------------------------------------------------------------------
t_squeeze2: tensor([[[[0.8422, 0.1384, 0.6145, 0.5869]],

         [[0.4118, 0.7120, 0.0601, 0.1063]],

         [[0.6059, 0.9717, 0.7325, 0.2440]]],


        [[[0.3857, 0.4698, 0.9613, 0.1157]],

         [[0.7650, 0.8241, 0.7231, 0.4109]],

         [[0.1844, 0.3643, 0.4845, 0.7820]]]]) torch.Size([2, 3, 1, 4])
"""

3.3.5 torch.unsqueeze()

python
# torch.unsqueeze(
#     input,
#     dim,
#     out=None
# )

Purpose: insert a new dimension of length 1 at position dim

  • dim: the position at which to insert the new dimension
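
A minimal sketch for torch.unsqueeze() (not in the original notes):

python
if True:
    t = torch.rand((2, 3))
    t0 = torch.unsqueeze(t, dim=0)
    t1 = torch.unsqueeze(t, dim=1)
    print(t.shape)    # torch.Size([2, 3])
    print(t0.shape)   # torch.Size([1, 2, 3])
    print(t1.shape)   # torch.Size([2, 1, 3])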

4. Mathematical Operations

4.1 Addition, subtraction, multiplication, and division

4.1.1 torch.add()

Purpose: element-wise addition of two tensors, returning a new tensor

python
if True:
    # (1) Basic usage
    a = torch.tensor([1, 2, 3])
    b = torch.tensor([4, 5, 6])
    result = torch.add(a, b)
    print(result)
    print('-'*100)

    # (2) Weighted addition
    result1 = torch.add(a, b, alpha=2)
    print(result1)
    print('-'*100)

    # (3) Broadcasting
    a = torch.tensor([[1, 2], [3, 4]])
    b = torch.tensor([10, 20])
    result2 = torch.add(a, b)
    print(result2)

"""
Output:
tensor([5, 7, 9])
----------------------------------------------------------------------------------------------------
tensor([ 9, 12, 15])
----------------------------------------------------------------------------------------------------
tensor([[11, 22],
        [13, 24]])
"""

4.1.2 torch.addcdiv()

Purpose: add to a tensor the element-wise quotient of two other tensors, optionally scaling the quotient by a scalar

Formula: output = input + value * (tensor1 / tensor2)

python
if True:
    input = torch.tensor([1.0, 2.0, 3.0])
    tensor1 = torch.tensor([4.0, 5.0, 6.0])
    tensor2 = torch.tensor([2.0, 2.0, 2.0])

    result = torch.addcdiv(input, tensor1, tensor2, value=0.5)
    print(result)

"""
Output:
tensor([2.0000, 3.2500, 4.5000])
"""

4.1.3 torch.addcmul()

Purpose: add to a tensor the element-wise product of two other tensors, optionally scaling the product by a scalar

Formula: output = input + value * (tensor1 * tensor2)

python
if True:
    input = torch.tensor([1.0, 2.0, 3.0])
    tensor1 = torch.tensor([4.0, 5.0, 6.0])
    tensor2 = torch.tensor([2.0, 2.0, 2.0])

    result = torch.addcmul(input, tensor1, tensor2, value=0.5)
    print(result)

"""
Output:
tensor([5., 7., 9.])
"""

4.1.4 torch.sub()

Purpose: element-wise subtraction of two tensors, returning a new tensor

python
if True:
    # (1) Basic usage
    a = torch.tensor([5, 7, 9])
    b = torch.tensor([1, 2, 3])
    result = torch.sub(a, b)
    print(result)
    print('-'*100)

    # (2) Weighted subtraction
    result1 = torch.sub(a, b, alpha=2)
    print(result1)
    print('-'*100)

    # (3) Broadcasting
    c = torch.tensor([[5, 7, 9], [10, 12, 14]])
    d = torch.tensor([1, 2, 3])
    result3 = torch.sub(c, d)
    print(result3)

"""
Output:
tensor([4, 5, 6])
----------------------------------------------------------------------------------------------------
tensor([3, 3, 3])
----------------------------------------------------------------------------------------------------
tensor([[ 4,  5,  6],
        [ 9, 10, 11]])
"""

4.1.5 torch.div()

Purpose: element-wise division of two tensors, returning a new tensor

python
if True:
    # (1) Basic usage
    a = torch.tensor([10.0, 20.0, 30.0])
    b = torch.tensor([2.0, 4.0, 5.0])
    # tensor divided by tensor
    result = torch.div(a, b)
    print(result)
    print('-'*100)

    # tensor divided by a scalar
    result1 = torch.div(a, 2.0)
    print(result1)
    print('-'*100)

    # (2) Broadcasting
    c = torch.tensor([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
    d = torch.tensor([2.0, 4.0, 5.0])
    result2 = torch.div(c, d)
    print(result2)

"""
Output:
tensor([5., 5., 6.])
----------------------------------------------------------------------------------------------------
tensor([ 5., 10., 15.])
----------------------------------------------------------------------------------------------------
tensor([[ 5.0000,  5.0000,  6.0000],
        [20.0000, 12.5000, 12.0000]])
"""

4.1.6 torch.mul()

Purpose: element-wise multiplication of two tensors, returning a new tensor

python
if True:
    # (1) Basic usage
    a = torch.tensor([1, 2, 3])
    b = torch.tensor([4, 5, 6])
    # tensor multiplied by tensor
    result = torch.mul(a, b)
    print(result)
    print('-'*100)

    # tensor multiplied by a scalar
    result1 = torch.mul(a, 2)
    print(result1)
    print('-'*100)

    # (2) Broadcasting
    c = torch.tensor([[1, 2, 3], [4, 5, 6]])
    d = torch.tensor([2, 3, 4])
    result2 = torch.mul(c, d)
    print(result2)

"""
Output:
tensor([ 4, 10, 18])
----------------------------------------------------------------------------------------------------
tensor([2, 4, 6])
----------------------------------------------------------------------------------------------------
tensor([[ 2,  6, 12],
        [ 8, 15, 24]])
"""

4.2 Logarithmic, exponential, and power functions

4.2.1 torch.log(input, out=None)

Purpose: compute the natural logarithm (base e) of each element of a tensor

python
if True:
    # (1) Basic usage
    a = torch.tensor([1.0, 2.7183, 10.0])

    result = torch.log(a)
    print(result)
    print('-'*100)

    # (2) Using the out parameter
    output = torch.empty(3)
    torch.log(a, out=output)
    print(output)

"""
Output:
tensor([0.0000, 1.0000, 2.3026])
----------------------------------------------------------------------------------------------------
tensor([0.0000, 1.0000, 2.3026])
"""

4.2.2 torch.log10(input, out=None)

Purpose: compute the base-10 logarithm of each element of a tensor

python
if True:
    # (1) Basic usage
    a = torch.tensor([1.0, 10.0, 100.0])
    result = torch.log10(a)
    print(result)
    print('-'*100)

    # (2) Using the out parameter
    output = torch.empty(3)
    torch.log10(a, out=output)
    print(output)

"""
Output:
tensor([0., 1., 2.])
----------------------------------------------------------------------------------------------------
tensor([0., 1., 2.])
"""

4.2.3 torch.log2(input, out=None)

Purpose: compute the base-2 logarithm of each element of a tensor

python
if True:
    # (1) Basic usage
    a = torch.tensor([1.0, 2.0, 8.0])
    result = torch.log2(a)
    print(result)
    print('-'*100)

    # (2) Using the out parameter
    output = torch.empty(3)
    torch.log2(a, out=output)
    print(output)

"""
Output:
tensor([0., 1., 3.])
----------------------------------------------------------------------------------------------------
tensor([0., 1., 3.])
"""

4.2.4 torch.exp(input, out=None)

Purpose: compute the exponential (e^x) of each element of a tensor

python
if True:
    # (1) Basic usage
    a = torch.tensor([0.0, 1.0, 2.0])
    result = torch.exp(a)
    print(result)
    print('-'*100)

    # (2) Using the out parameter
    output = torch.empty(3)
    torch.exp(a, out=output)
    print(output)

"""
Output:
tensor([1.0000, 2.7183, 7.3891])
----------------------------------------------------------------------------------------------------
tensor([1.0000, 2.7183, 7.3891])
"""

4.2.5 torch.pow()

Purpose: raise each element of a tensor to a power

python
if True:
    # (1) Basic usage
    # tensor with a scalar exponent
    a = torch.tensor([2, 3, 4])
    result = torch.pow(a, 2)
    print(result)
    print('-'*100)

    # tensor with a tensor exponent
    base = torch.tensor([2, 3, 4])
    exponent = torch.tensor([3, 2, 1])
    result1 = torch.pow(base, exponent)
    print(result1)
    print('-'*100)

    # (2) Broadcasting
    base2 = torch.tensor([[2, 3, 4], [5, 6, 7]])
    exponent2 = torch.tensor([2, 3, 1])
    result3 = torch.pow(base2, exponent2)
    print(result3)

"""
Output:
tensor([ 4,  9, 16])
----------------------------------------------------------------------------------------------------
tensor([8, 9, 4])
----------------------------------------------------------------------------------------------------
tensor([[  4,  27,   4],
        [ 25, 216,   7]])
"""

4.3 Absolute value and trigonometric functions

4.3.1 torch.abs(input, out=None)

Purpose: compute the absolute value of each element of a tensor

python
if True:
    tensor = torch.tensor([-3.0, -1.5, 0.0, 2.5, 4.0])

    abs_tensor = torch.abs(tensor)
    print(abs_tensor)

"""
Output:
tensor([3.0000, 1.5000, 0.0000, 2.5000, 4.0000])
"""

4.3.2 torch.acos(input, out=None)

Purpose: compute the arccosine of each element of the input tensor

python
if True:
    input_tensor = torch.tensor([1.0, 0.0, -1.0])
    output_tensor = torch.acos(input_tensor)

    print(output_tensor)

"""
Output:
tensor([0.0000, 1.5708, 3.1416])
"""

4.3.3 torch.cosh(input, out=None)

Purpose: compute the hyperbolic cosine of each element of the input tensor

Formula: cosh(x) = (e^x + e^(-x)) / 2

python
if True:
    input_tensor = torch.tensor([0.0, 1.0, -1.0])
    output_tensor = torch.cosh(input_tensor)

    print(output_tensor)

"""
Output:
tensor([1.0000, 1.5431, 1.5431])
"""

4.3.4 torch.cos(input, out=None)

Purpose: compute the cosine of each element of the input tensor

python
if True:
    input_tensor = torch.tensor([0.0, 3.1416 / 2, 3.1416])  # 0, π/2, and π respectively
    output_tensor = torch.cos(input_tensor)

    print(output_tensor)

"""
Output:
tensor([ 1.0000e+00, -3.6200e-06, -1.0000e+00])
"""

4.3.5 torch.asin(input, out=None)

Purpose: compute the arcsine of each element of the input tensor

python
if True:
    input_tensor = torch.tensor([0.0, 0.5, -1.0])
    output_tensor = torch.asin(input_tensor)

    print(output_tensor)

"""
Output:
tensor([ 0.0000,  0.5236, -1.5708])
"""

4.3.6 torch.atan(input, out=None)

Purpose: compute the arctangent of each element of the input tensor

python
if True:
    input_tensor = torch.tensor([0.0, 1.0, -1.0])
    output_tensor = torch.atan(input_tensor)

    print(output_tensor)

"""
Output:
tensor([ 0.0000,  0.7854, -0.7854])
"""

4.3.7 torch.atan2(input, other, out=None)

Purpose: compute the element-wise two-argument arctangent of each pair of elements, taking the quadrant into account

Formula: angle = atan2(y, x)

python
if True:
    y = torch.tensor([1.0, -1.0, 0.0])
    x = torch.tensor([1.0, 1.0, -1.0])
    angles = torch.atan2(y, x)

    print(angles)

"""
Output:
tensor([ 0.7854, -0.7854,  3.1416])
"""

Micro-quote: We smile as we say goodbye, knowing full well that meeting again is a long way off.
