A common in-place operation: `nn.ReLU(inplace=True)`
```python
import torch
import torch.nn as nn

a = torch.randn(2)  # tensor([-0.3690, 0.0626])
b = a.clone()       # tensor([-0.3690, 0.0626]), an independent copy with its own storage
c = a               # tensor([-0.3690, 0.0626]), just another reference to the same tensor as a
relu = nn.ReLU(inplace=True)
```
Case 1
```python
out = relu(a)  # tensor([0.0000, 0.0626]); a is overwritten in place
a              # tensor([0.0000, 0.0626])
b              # tensor([-0.3690, 0.0626])
c              # tensor([0.0000, 0.0626])
```
Because `c = a` only binds another name to the same tensor, `c` changes together with `a`. Only `b`, created with `clone()`, keeps the original values; without the `clone()`, no copy of the pre-ReLU data would survive.
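The aliasing above can be verified directly: the in-place ReLU returns its input object, `c` shares storage with `a`, and only the clone `b` lives in separate memory. A small sketch (reusing the example's values as a fixed tensor instead of `torch.randn`):

```python
import torch
import torch.nn as nn

a = torch.tensor([-0.3690, 0.0626])
b = a.clone()
c = a
relu = nn.ReLU(inplace=True)

out = relu(a)

print(out is a)                      # True: the in-place ReLU returns its input
print(c.data_ptr() == a.data_ptr())  # True: c aliases a's storage
print(b.data_ptr() == a.data_ptr())  # False: clone() allocated new storage
print(torch.equal(b, torch.tensor([-0.3690, 0.0626])))  # True: b kept the old values
```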
Case 2
```python
# a reset to tensor([-0.3690, 0.0626]) before each line:
out1 = a + relu(a)  # tensor([0.0000, 0.1252])
out2 = relu(a) + a  # tensor([0.0000, 0.1252])
```
One might expect `out1` to be the original `a` plus `relu(a)`, i.e. `tensor([-0.3690, 0.1252])`. It is not: both operands are references to the same tensor object, and the in-place ReLU mutates it before the addition runs, so both expressions evaluate to `2 * relu(a)`. The original values of `a` are silently destroyed either way, which is exactly why in-place operations must be used with care; to compute the intended sum you must snapshot `a` first (e.g. `a.clone() + relu(a)`) or drop `inplace=True`.
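A sketch of the three variants, again with the example's values as a fixed tensor. Only when the left operand is snapshotted with `clone()` before the in-place call (operands are evaluated left to right) do you recover the sum over the original values; dropping `inplace=True` achieves the same without any aliasing hazard:

```python
import torch
import torch.nn as nn

relu = nn.ReLU(inplace=True)

# In place: both operands alias the already-mutated tensor.
a = torch.tensor([-0.3690, 0.0626])
out_inplace = a + relu(a)           # tensor([0.0000, 0.1252])

# Snapshot first: the clone is taken before relu mutates a.
a = torch.tensor([-0.3690, 0.0626])
out_snapshot = a.clone() + relu(a)  # tensor([-0.3690, 0.1252])

# No in-place op: torch.relu returns a fresh tensor, a is untouched.
a = torch.tensor([-0.3690, 0.0626])
out_safe = a + torch.relu(a)        # tensor([-0.3690, 0.1252])
```

In real models the same hazard appears whenever a skip connection reads an activation that an `inplace=True` layer has already overwritten.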