Dive into Deep Learning (PyTorch Edition), 4.3 Concise Implementation of Multilayer Perceptrons

```python
import torch
from torch import nn
from d2l import torch as d2l
```

Model

```python
net = nn.Sequential(nn.Flatten(),
                    nn.Linear(784, 256),
                    nn.ReLU(),  # one more layer than in Section 3.7
                    nn.Linear(256, 10))

def init_weights(m):
    if type(m) == nn.Linear:  # initialize weights with random values from a normal distribution
        nn.init.normal_(m.weight, std=0.01)

net.apply(init_weights)
```
```
Sequential(
  (0): Flatten(start_dim=1, end_dim=-1)
  (1): Linear(in_features=784, out_features=256, bias=True)
  (2): ReLU()
  (3): Linear(in_features=256, out_features=10, bias=True)
)
```
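Before training, it's worth a quick sanity check that the layers chain together. A minimal sketch, pushing a fake batch through the untrained network (the shape below assumes Fashion-MNIST's 1×28×28 grayscale images):

```python
# Sanity check: a fake batch of two 1x28x28 grayscale images
X = torch.rand(2, 1, 28, 28)
print(net(X).shape)  # expected: torch.Size([2, 10]), one logit per class
```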
```python
batch_size, lr, num_epochs = 256, 0.1, 10
loss = nn.CrossEntropyLoss(reduction='none')
trainer = torch.optim.SGD(net.parameters(), lr=lr)

train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
```
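Once training finishes, d2l's `predict_ch3` (the same helper used in chapter 3) can visualize a few test predictions; a quick sketch:

```python
# Plot a handful of test images with predicted vs. true labels
d2l.predict_ch3(net, test_iter, n=6)
```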


Exercises

(1) Try adding different numbers of hidden layers (you can also modify the learning rate). Which configuration works best?

```python
net2 = nn.Sequential(nn.Flatten(),
                     nn.Linear(784, 256),
                     nn.ReLU(),
                     nn.Linear(256, 128),
                     nn.ReLU(),
                     nn.Linear(128, 10))

def init_weights(m):
    if type(m) == nn.Linear:  # initialize weights with random values from a normal distribution
        nn.init.normal_(m.weight, std=0.01)

net2.apply(init_weights)

batch_size2, lr2, num_epochs2 = 256, 0.3, 10
loss2 = nn.CrossEntropyLoss(reduction='none')
trainer2 = torch.optim.SGD(net2.parameters(), lr=lr2)

train_iter2, test_iter2 = d2l.load_data_fashion_mnist(batch_size2)
d2l.train_ch3(net2, train_iter2, test_iter2, loss2, num_epochs2, trainer2)
```
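Instead of editing one architecture at a time, a small loop can sweep depth and learning rate together. A rough sketch under the same setup; `make_mlp` is a hypothetical helper, and the widths and learning rates are arbitrary starting points, not tuned values (note that `train_ch3`'s internal assertions may abort a badly performing configuration):

```python
def make_mlp(hidden_sizes):
    """Build a ReLU MLP with the given hidden-layer widths (hypothetical helper)."""
    layers, in_dim = [nn.Flatten()], 784
    for h in hidden_sizes:
        layers += [nn.Linear(in_dim, h), nn.ReLU()]
        in_dim = h
    layers.append(nn.Linear(in_dim, 10))
    return nn.Sequential(*layers)

for hidden_sizes in ([256], [256, 128], [256, 128, 64]):
    for lr_i in (0.1, 0.3, 0.5):
        net_i = make_mlp(hidden_sizes)
        net_i.apply(init_weights)  # same normal-distribution init as above
        trainer_i = torch.optim.SGD(net_i.parameters(), lr=lr_i)
        print(f'hidden={hidden_sizes}, lr={lr_i}')
        d2l.train_ch3(net_i, train_iter2, test_iter2, loss2, num_epochs2, trainer_i)
```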



(2) Try different activation functions. Which one works best?

```python
net3 = nn.Sequential(nn.Flatten(),
                     nn.Linear(784, 256),
                     nn.Sigmoid(),
                     nn.Linear(256, 10))

net4 = nn.Sequential(nn.Flatten(),
                     nn.Linear(784, 256),
                     nn.Tanh(),
                     nn.Linear(256, 10))

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, std=0.01)

net3.apply(init_weights)
net4.apply(init_weights)

train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
```
```python
batch_size, lr, num_epochs = 256, 0.1, 10
loss = nn.CrossEntropyLoss(reduction='none')
trainer = torch.optim.SGD(net3.parameters(), lr=lr)
d2l.train_ch3(net3, train_iter, test_iter, loss, num_epochs, trainer)
```

```
---------------------------------------------------------------------------

AssertionError                            Traceback (most recent call last)

Cell In[5], line 4
      2 loss = nn.CrossEntropyLoss(reduction='none')
      3 trainer = torch.optim.SGD(net3.parameters(), lr=lr)
----> 4 d2l.train_ch3(net3, train_iter, test_iter, loss, num_epochs, trainer)


File c:\Software\Miniconda3\envs\d2l\lib\site-packages\d2l\torch.py:340, in train_ch3(net, train_iter, test_iter, loss, num_epochs, updater)
    338     animator.add(epoch + 1, train_metrics + (test_acc,))
    339 train_loss, train_acc = train_metrics
--> 340 assert train_loss < 0.5, train_loss
    341 assert train_acc <= 1 and train_acc > 0.7, train_acc
    342 assert test_acc <= 1 and test_acc > 0.7, test_acc


AssertionError: 0.5017133202234904
```

The sigmoid network trips d2l's built-in sanity check: `train_ch3` asserts that the final training loss is below 0.5, and this run finished at 0.5017, so training worked but fell just short of the threshold. Next, the same setup with tanh:

```python
batch_size, lr, num_epochs = 256, 0.1, 10
loss = nn.CrossEntropyLoss(reduction='none')
trainer = torch.optim.SGD(net4.parameters(), lr=lr)
d2l.train_ch3(net4, train_iter, test_iter, loss, num_epochs, trainer)
```


In the end, ReLU is still the nicest.
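For a more systematic comparison, one could loop over the three activations under identical settings, catching the `AssertionError` that `train_ch3` raised for sigmoid above. A sketch, reusing the iterators and hyperparameters already defined:

```python
for act in (nn.ReLU(), nn.Sigmoid(), nn.Tanh()):
    net_a = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), act,
                          nn.Linear(256, 10))
    net_a.apply(init_weights)
    trainer_a = torch.optim.SGD(net_a.parameters(), lr=lr)
    print(f'activation: {act.__class__.__name__}')
    try:
        d2l.train_ch3(net_a, train_iter, test_iter, loss, num_epochs, trainer_a)
    except AssertionError as e:
        print(f'  failed train_ch3 sanity checks: {e}')  # e.g. sigmoid's 0.5017 loss
```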


(3) Try different schemes for initializing the weights. Which one works best?

I'm tired and don't feel like trying any more. Skipped...
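For anyone who wants to pick this one up: `torch.nn.init` ships several standard schemes (Xavier/Glorot, Kaiming/He, ...), so only the init function needs to change. A hedged sketch, not actually run here; `init_xavier`, `init_kaiming`, and the loop variables are hypothetical names:

```python
def init_xavier(m):
    if type(m) == nn.Linear:
        nn.init.xavier_uniform_(m.weight)  # Xavier/Glorot uniform

def init_kaiming(m):
    if type(m) == nn.Linear:
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')  # He init, suited to ReLU

for init_fn in (init_weights, init_xavier, init_kaiming):
    net_w = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                          nn.Linear(256, 10))
    net_w.apply(init_fn)
    trainer_w = torch.optim.SGD(net_w.parameters(), lr=lr)
    print(f'init scheme: {init_fn.__name__}')
    d2l.train_ch3(net_w, train_iter, test_iter, loss, num_epochs, trainer_w)
```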
