Notes on "Dive into Deep Learning (PyTorch Edition)", Section 3.3

Note: The book does not explain the code in much detail, so this post adds thorough comments on many of the details. Also, the book's source code is written for Jupyter Notebook and is rather scattered; this post collects the code in one place and polishes it. Everything was tested with VS Code under Python 3.9.18.

Chapter 3 Linear Neural Networks

3.3 Concise Implementation of Linear Regression

import numpy as np
import torch
from torch.utils import data
from d2l import torch as d2l

true_w=torch.tensor([2,-3.4])
true_b=4.2
features,labels=d2l.synthetic_data(true_w,true_b,1000)
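#For reference: d2l.synthetic_data is the helper defined earlier in the book (section 3.2).
#A rough sketch of what it does (an assumption here, not shown in this section):
#    X = torch.normal(0, 1, (num_examples, len(w)))   #features drawn from a standard normal
#    y = torch.matmul(X, w) + b                        #noiseless labels from the true model
#    y += torch.normal(0, 0.01, y.shape)               #add Gaussian noise with std 0.01
#    return X, y.reshape((-1, 1))
#So features has shape (1000, 2) and labels has shape (1000, 1).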

#Construct a PyTorch data iterator
def load_array(data_arrays,batch_size,is_train=True): #@save
    dataset=data.TensorDataset(*data_arrays)
    #"TensorDataset" is a class provided by the torch.utils.data module which is a dataset wrapper that allows you to create a dataset from a sequence of tensors. 
    #"*data_arrays" is used to unpack the tuple into individual tensors.
    #The '*' operator is used for iterable unpacking.
    #Here, data_arrays is expected to be a tuple containing the input features and corresponding labels. The "*data_arrays" syntax is used to unpack the elements of the tuple and pass them as separate arguments.
    return data.DataLoader(dataset,batch_size,shuffle=is_train)
    #DataLoader is an iterable that yields mini-batches of data during training or testing;
    #shuffle=is_train shuffles the samples every epoch when training.
batch_size=10
data_iter=load_array([features,labels],batch_size)
print(next(iter(data_iter)))#Calling next() returns the iterator's next item and advances its internal state for the following call.
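#Each batch is a list [X, y]: X has shape (batch_size, 2) and y has shape (batch_size, 1).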

#Define the model; nn is the abbreviation for neural network.
from torch import nn
net=nn.Sequential(nn.Linear(2,1))
#Creates a sequential neural network with one linear layer.
#Input size (in_features) is 2, indicating the network expects input with 2 features.
#Output size (out_features) is 1, indicating the network produces 1 output.
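#Accordingly, net[0].weight has shape (1, 2) and net[0].bias has shape (1,); both are learnable parameters.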

#Initialize the model parameters
net[0].weight.data.normal_(0,0.01)#The underscore at the end (normal_) indicates that this operation is performed in-place, modifying the existing tensor in memory.
net[0].bias.data.fill_(0)
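#An equivalent initialization (an alternative sketch, not the book's code) uses torch.nn.init:
#    nn.init.normal_(net[0].weight, mean=0.0, std=0.01)
#    nn.init.zeros_(net[0].bias)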

#Define the mean squared error loss, also known as the squared L2 norm; it returns the average of the losses over all samples.
loss=nn.MSELoss()#MSE:mean squared error 
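#With the default reduction='mean', loss(net(X), y) is equivalent to ((net(X)-y)**2).mean().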

#Define the optimization algorithm (still minibatch stochastic gradient descent).
#It updates the parameters of the neural network (net.parameters()) using gradients computed during backpropagation.
trainer=torch.optim.SGD(net.parameters(),lr=0.03)#SGD: stochastic gradient descent
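#Each call to trainer.step() performs param = param - lr * param.grad for every parameter
#(plain SGD without momentum), using the gradients from the most recent backward() call.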

#Training
num_epochs=3
for epoch in range(num_epochs):
    for X,y in data_iter:
        l=loss(net(X),y)#Forward pass: compute the loss on the minibatch.
        trainer.zero_grad()#Clear the gradients from the previous step; PyTorch accumulates gradients by default.
        l.backward()#Backpropagation: compute the gradients of the loss with respect to the parameters.
        trainer.step()#Update the model parameters using the computed gradients and the optimization algorithm.
    l=loss(net(features),labels)#Evaluate the loss on the full dataset after each epoch.
    print(f'epoch {epoch+1}, loss {l:.6f}')#{l:.6f} formats the variable l as a float with 6 digits after the decimal point.
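#If training converges, the printed loss should level off near the variance of the injected label noise
#(roughly 1e-4 under the synthetic_data sketch above), since that noise cannot be predicted from the features.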
    
w=net[0].weight.data
print('estimation error of w:',true_w-w.reshape(true_w.shape))
b=net[0].bias.data
print('estimation error of b:',true_b-b)
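#A quick sanity check (a minimal sketch, not from the book): feed a new input to the trained net.
#For x=[1,2] the true model gives 2*1+(-3.4)*2+4.2=-0.6, so the prediction should be close to -0.6.
X_new=torch.tensor([[1.0,2.0]])
with torch.no_grad():#gradients are not needed for inference
    print(net(X_new))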