"Dive into Deep Learning (PyTorch Edition)" Notes, Section 3.3

Note: the book's explanation of the code is not very detailed, so this article adds thorough comments on many details. Also, the book's source code is written for Jupyter Notebook and is fairly scattered; this article collects the code in one place, completes it, and verifies that all of it runs in VS Code under Python 3.9.18.

Chapter 3 Linear Neural Networks

3.3 Concise Implementation of Linear Regression

import numpy as np
import torch
from torch.utils import data
from d2l import torch as d2l

true_w = torch.tensor([2, -3.4])
true_b = 4.2
features, labels = d2l.synthetic_data(true_w, true_b, 1000)
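# For reference, d2l.synthetic_data (the function saved in section 3.2 of the
# book) generates y = Xw + b plus Gaussian noise; roughly:
#   def synthetic_data(w, b, num_examples):
#       X = torch.normal(0, 1, (num_examples, len(w)))
#       y = torch.matmul(X, w) + b
#       y += torch.normal(0, 0.01, y.shape)  # noise with standard deviation 0.01
#       return X, y.reshape((-1, 1))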

# Construct a PyTorch data iterator
def load_array(data_arrays, batch_size, is_train=True):  #@save
    # TensorDataset (from torch.utils.data) wraps a sequence of tensors as a
    # dataset; *data_arrays unpacks the (features, labels) tuple so each
    # tensor is passed as a separate argument.
    dataset = data.TensorDataset(*data_arrays)
    # DataLoader is an iterable that yields batches of data during training
    # or testing; shuffling is only needed during training.
    return data.DataLoader(dataset, batch_size, shuffle=is_train)
batch_size = 10
data_iter = load_array([features, labels], batch_size)
# next() returns the iterator's next item and advances its internal state so
# the following call yields the next batch.
print(next(iter(data_iter)))
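# Sanity-check the batch shapes: each item the loader yields is a list
# [X, y]; with batch_size=10, X should have shape (10, 2) and y (10, 1).
X, y = next(iter(data_iter))
print(X.shape, y.shape)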

# Define the model; nn is short for neural network
from torch import nn
# A sequential container holding a single linear (fully connected) layer:
# in_features=2, so the network expects inputs with 2 features;
# out_features=1, so it produces a single output.
net = nn.Sequential(nn.Linear(2, 1))
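# nn.Linear(2, 1) on its own would suffice here; Sequential is a container
# that chains layers in order, which pays off once there is more than one
# layer, e.g. (a hypothetical deeper model, not used in this section):
#   net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))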

# Initialize the model parameters: weights from N(0, 0.01), bias set to 0.
# The trailing underscore (normal_, fill_) marks an in-place operation that
# modifies the existing tensor in memory.
net[0].weight.data.normal_(0, 0.01)
net[0].bias.data.fill_(0)
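# An equivalent initialization using the torch.nn.init module, a common
# alternative to mutating .data directly:
#   nn.init.normal_(net[0].weight, mean=0.0, std=0.01)
#   nn.init.zeros_(net[0].bias)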

# Define the mean squared error loss, also known as the squared L2 norm; by
# default it returns the average loss over all examples.
loss = nn.MSELoss()  # MSE: mean squared error
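# With the default reduction='mean', loss(y_hat, y) is equivalent to
# ((y_hat - y) ** 2).mean(); passing reduction='sum' would return the sum
# over the batch instead of the mean.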

# Define the optimization algorithm (still minibatch stochastic gradient
# descent); it updates the parameters of the network (net.parameters())
# using the gradients computed during backpropagation.
trainer = torch.optim.SGD(net.parameters(), lr=0.03)  # SGD: stochastic gradient descent
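# Conceptually, each trainer.step() applies the plain SGD update to every
# parameter p in net.parameters() (a sketch, ignoring momentum, weight
# decay, etc.):
#   with torch.no_grad():
#       p -= lr * p.grad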

# Training
num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)
        trainer.zero_grad()  # clear old gradients so .backward() does not accumulate them
        l.backward()         # compute gradients of the loss w.r.t. the parameters
        trainer.step()       # update the parameters using the computed gradients
    l = loss(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l:.6f}')  # {l:.6f} formats l as a float with 6 decimal places
    
# Compare the learned parameters against the true ones
w = net[0].weight.data
print('estimation error of w:', true_w - w.reshape(true_w.shape))
b = net[0].bias.data
print('estimation error of b:', true_b - b)
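# If training converged, both errors should be close to zero. As a final
# sanity check (a minimal sketch), predict on a hand-picked input; for
# x = (1, 1) the true model gives 2*1 - 3.4*1 + 4.2 = 2.8:
with torch.no_grad():
    print(net(torch.tensor([[1.0, 1.0]])))  # expect a value close to 2.8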