Notes on "Dive into Deep Learning (PyTorch Edition)", Section 3.3

Note: the book does not explain the code in much detail, so these notes add thorough comments on many of the finer points. Also, the book's source code is written for Jupyter Notebook and is fairly scattered; here the code is gathered in one place and completed, and all of it was tested with VS Code under Python 3.9.18.

Chapter 3 Linear Neural Networks

3.3 Concise Implementation of Linear Regression

import numpy as np
import torch
from torch.utils import data
from d2l import torch as d2l

true_w = torch.tensor([2, -3.4])
true_b = 4.2
features, labels = d2l.synthetic_data(true_w, true_b, 1000)

# Construct a PyTorch data iterator
def load_array(data_arrays, batch_size, is_train=True):  #@save
    dataset = data.TensorDataset(*data_arrays)
    # "TensorDataset" is a class provided by the torch.utils.data module; it is a dataset wrapper that builds a dataset from a sequence of tensors.
    # "*data_arrays" is used to unpack the tuple into individual tensors.
    # The '*' operator performs iterable unpacking.
    # Here, data_arrays is expected to contain the input features and the corresponding labels; "*data_arrays" unpacks its elements and passes them as separate arguments.
    return data.DataLoader(dataset, batch_size, shuffle=is_train)
    # Constructs a PyTorch DataLoader object, an iterator that yields batches of data during training or testing.

batch_size = 10
data_iter = load_array([features, labels], batch_size)
print(next(iter(data_iter)))  # next() returns the iterator's next item and advances its internal state for the following call
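
# A small usage sketch (my addition, not from the book): the same helper can also build a
# non-shuffling loader for evaluation; "test_iter" is a hypothetical name and is not used below.
test_iter = load_array((features, labels), batch_size, is_train=False)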

# Define the model; nn is the abbreviation for neural networks
from torch import nn
net=nn.Sequential(nn.Linear(2,1))
#Creates a sequential neural network with one linear layer.
#Input size (in_features) is 2, indicating the network expects input with 2 features.
#Output size (out_features) is 1, indicating the network produces 1 output.
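
# For comparison only (a sketch, not the book's code): the same model could be written as a
# custom nn.Module subclass; "LinearRegression" is a hypothetical name and the class is not
# used in the rest of this script.
class LinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 1)  # same in_features/out_features as net above

    def forward(self, X):
        return self.linear(X)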

# Initialize the model parameters
net[0].weight.data.normal_(0, 0.01)  # the trailing underscore (normal_) marks an in-place operation that modifies the existing tensor in memory
net[0].bias.data.fill_(0)
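
# An equivalent initialization via torch.nn.init (a sketch; the book uses the in-place
# .data methods above, and both draw the weights from the same distribution):
nn.init.normal_(net[0].weight, mean=0, std=0.01)
nn.init.zeros_(net[0].bias)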

# Define the mean squared error loss (also called the squared L2 norm); it returns the average loss over all samples
loss = nn.MSELoss()  # MSE: mean squared error
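
# Sanity-check sketch (my addition, not in the book): with the default reduction='mean',
# nn.MSELoss matches a hand-written average of squared errors, so the two values below agree.
with torch.no_grad():
    print(loss(net(features), labels), ((net(features) - labels) ** 2).mean())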

# Define the optimization algorithm (still minibatch stochastic gradient descent),
# which updates the parameters of the neural network (net.parameters()) using gradients computed during backpropagation.
trainer = torch.optim.SGD(net.parameters(), lr=0.03)  # SGD: stochastic gradient descent
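
# For intuition only (a sketch, not the book's code): conceptually, each call to
# trainer.step() applies an update like the hypothetical helper below; it is defined
# purely for illustration and never called.
def sgd_step_sketch(params, lr):
    with torch.no_grad():
        for p in params:
            if p.grad is not None:
                p -= lr * p.grad  # in-place gradient descent update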

# Training
num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)
        trainer.zero_grad()
        l.backward()
        trainer.step()  # updates the model parameters using the computed gradients and the optimization algorithm
    l = loss(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l:.6f}')  # {l:.6f} formats l as a float with 6 digits after the decimal point
    
w = net[0].weight.data
print('estimation error of w:', true_w - w.reshape(true_w.shape))
b = net[0].bias.data
print('estimation error of b:', true_b - b)
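
# A brief usage sketch (my addition, not from the book): the trained net can predict on
# new inputs; "X_new" is a hypothetical example point.
X_new = torch.tensor([[1.0, 2.0]])
with torch.no_grad():
    print('prediction for X_new:', net(X_new))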