Notes on 《动手学深度学习(PyTorch版)》, Section 3.3

Note: the book does not explain the code in much detail, so this post annotates many of the finer points. Also, the book's source code is written for Jupyter Notebook and is fairly scattered; this post consolidates and completes the code, all of which was tested in VS Code under Python 3.9.18.

Chapter 3 Linear Neural Networks

3.3 Concise Implementation of Linear Regression

import numpy as np
import torch
from torch.utils import data
from d2l import torch as d2l

true_w=torch.tensor([2,-3.4])
true_b=4.2
features,labels=d2l.synthetic_data(true_w,true_b,1000)
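#Background (my addition, not in the book's listing): d2l.synthetic_data, defined in
#section 3.2, samples X from a standard normal and returns y = X @ true_w + true_b
#plus Gaussian noise with std 0.01, reshaped to a column vector.
print(features.shape,labels.shape)#torch.Size([1000, 2]) torch.Size([1000, 1])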

#Construct a PyTorch data iterator
def load_array(data_arrays,batch_size,is_train=True): #@save
    dataset=data.TensorDataset(*data_arrays)
    #TensorDataset, provided by torch.utils.data, wraps a sequence of tensors as a dataset,
    #indexing every tensor along its first dimension.
    #Here data_arrays is expected to hold the input features and the corresponding labels;
    #the '*' operator unpacks the sequence so its elements are passed as separate arguments.
    return data.DataLoader(dataset,batch_size,shuffle=is_train)
    #DataLoader is an iterable that yields minibatches of data during training or testing;
    #shuffling is enabled only for training.
batch_size=10
data_iter=load_array([features,labels],batch_size)
print(next(iter(data_iter)))#next() returns the iterator's next item and advances its internal state for the following call
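#Sanity check (my addition, not from the book): with 1000 samples and batch_size=10,
#the DataLoader yields 100 minibatches per epoch.
X,y=next(iter(data_iter))
print(X.shape,y.shape,len(data_iter))#torch.Size([10, 2]) torch.Size([10, 1]) 100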

#Define the model; nn is short for neural network
from torch import nn
net=nn.Sequential(nn.Linear(2,1))
#Creates a sequential neural network with one linear layer.
#Input size (in_features) is 2, indicating the network expects input with 2 features.
#Output size (out_features) is 1, indicating the network produces 1 output.
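#Illustrative (my addition): net[0] indexes the first (and only) layer in the Sequential
#container; it holds a weight of shape (1, 2) and a bias of shape (1,).
print(net)#Prints: Sequential((0): Linear(in_features=2, out_features=1, bias=True))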

#Initialize the model parameters
net[0].weight.data.normal_(0,0.01)#Samples weights from a normal distribution with mean 0 and std 0.01; the trailing underscore (normal_) marks an in-place operation that modifies the existing tensor in memory.
net[0].bias.data.fill_(0)#Sets the bias to zero, also in place.
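#Verify the initialization (my addition): weights near 0, bias exactly 0.
print(net[0].weight.data,net[0].bias.data)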

#Define the mean squared error loss, also called the squared L2 norm;
#by default it returns the mean loss over all examples
loss=nn.MSELoss()#MSE: mean squared error
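#Cross-check (my addition): with the default reduction='mean', nn.MSELoss matches
#the manual formula mean((y_hat - y)**2).
X,y=next(iter(data_iter))
assert torch.isclose(loss(net(X),y),((net(X)-y)**2).mean())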

#Define the optimization algorithm (still minibatch stochastic gradient descent),
#which updates the network parameters (net.parameters()) using gradients computed during backpropagation
trainer=torch.optim.SGD(net.parameters(),lr=0.03)#SGD: stochastic gradient descent
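#Note (my addition): net.parameters() hands both the weight and the bias to the optimizer,
#so a single trainer.step() updates all 3 scalar parameters (2 weights + 1 bias) with lr=0.03.
print(sum(p.numel() for p in net.parameters()))#3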

#Training
num_epochs=3
for epoch in range(num_epochs):
    for X,y in data_iter:
        l=loss(net(X),y)
        trainer.zero_grad()#Clears gradients left over from the previous step.
        l.backward()#Computes gradients of the loss with respect to the parameters.
        trainer.step()#Updates the model parameters using the computed gradients and the optimization algorithm.
    l=loss(net(features),labels)#Epoch-end loss over the full dataset.
    print(f'epoch {epoch+1},loss {l:.6f}')#{l:.6f} formats l as a float with 6 digits after the decimal point.
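#Optional (my addition): the epoch-end evaluation needs no gradients; wrapping it in
#torch.no_grad() avoids building the computation graph:
with torch.no_grad():
    print(f'final loss {loss(net(features),labels):.6f}')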
    
w=net[0].weight.data
print('error in estimating w:',true_w-w.reshape(true_w.shape))
b=net[0].bias.data
print('error in estimating b:',true_b-b)
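#Illustrative (my addition): gradients from the last backward pass are still accessible.
print(net[0].weight.grad)#Same shape as the weight, i.e. (1, 2)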