PyTorch Learning -- Neural Networks -- A Small Hands-On Build (Writing the CIFAR-10 Model Structure from Scratch) and Using Sequential

1. How to Use Sequential

`torch.nn.Sequential` chains an ordered list of submodules into a single module: calling it runs the input through each submodule in registration order. This removes the boilerplate of assigning every layer to its own attribute and calling each one by hand in `forward`. Its use is demonstrated in the hand-written model in the next section.
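Before the full model, a minimal sketch of the idea (the layer sizes here are arbitrary toy values, not the CIFAR-10 model):

```python
import torch
from torch import nn

# A Sequential runs its submodules in the order they were registered.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

x = torch.randn(2, 8)   # batch of 2 samples, 8 features each
print(model(x).shape)   # torch.Size([2, 4])
```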

2. Writing the CIFAR-10 Model Structure from Scratch

The hand-written version, with each layer as its own attribute. The structure is three Conv2d(5×5, padding=2) + MaxPool2d(2) stages, then Flatten and two Linear layers, mapping a (N, 3, 32, 32) batch of CIFAR-10-sized images to (N, 10) class scores:

```python
import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.tensorboard import SummaryWriter


class Mary(nn.Module):
    def __init__(self):
        super(Mary, self).__init__()
        self.conv1 = Conv2d(3, 32, 5, padding=2)   # (N, 3, 32, 32) -> (N, 32, 32, 32)
        self.maxpool1 = MaxPool2d(2)               # -> (N, 32, 16, 16)
        self.conv2 = Conv2d(32, 32, 5, padding=2)  # -> (N, 32, 16, 16)
        self.maxpool2 = MaxPool2d(2)               # -> (N, 32, 8, 8)
        self.conv3 = Conv2d(32, 64, 5, padding=2)  # -> (N, 64, 8, 8)
        self.maxpool3 = MaxPool2d(2)               # -> (N, 64, 4, 4)
        self.flatten = Flatten()                   # -> (N, 1024)
        self.linear1 = Linear(1024, 64)            # -> (N, 64)
        self.linear2 = Linear(64, 10)              # -> (N, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        return x


Yorelee = Mary()
print(Yorelee)

# Sanity check with a dummy batch of 64 CIFAR-10-sized images
input = torch.ones((64, 3, 32, 32))
output = Yorelee(input)
print(output.shape)  # correct if this prints torch.Size([64, 10])

# Inspect the model graph with TensorBoard
writer = SummaryWriter("logs")
writer.add_graph(Yorelee, input)
writer.close()
```
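A note on why the first Linear layer takes 1024 inputs: each Conv2d with kernel_size=5 and padding=2 preserves the spatial size, while each MaxPool2d(2) halves it, so the 32×32 input shrinks to 16×16, then 8×8, then 4×4. After the last pool the tensor is (N, 64, 4, 4), and Flatten yields 64 × 4 × 4 = 1024 features per sample. A small sketch verifying this stage by stage (freshly constructed layers, for illustration only):

```python
import torch
from torch.nn import Conv2d, MaxPool2d

x = torch.ones(1, 3, 32, 32)
x = MaxPool2d(2)(Conv2d(3, 32, 5, padding=2)(x))   # torch.Size([1, 32, 16, 16])
x = MaxPool2d(2)(Conv2d(32, 32, 5, padding=2)(x))  # torch.Size([1, 32, 8, 8])
x = MaxPool2d(2)(Conv2d(32, 64, 5, padding=2)(x))  # torch.Size([1, 64, 4, 4])
print(x.flatten(1).shape)                          # torch.Size([1, 1024])
```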

TensorBoard output: the Graphs tab renders the `Mary` module as a graph that can be expanded layer by layer (screenshot omitted here).
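To reproduce the view, run `tensorboard --logdir=logs` from the project directory and open the Graphs tab in the page it serves.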

The same model rewritten with nn.Sequential:

```python
import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.tensorboard import SummaryWriter


class Mary(nn.Module):
    def __init__(self):
        super(Mary, self).__init__()
        # The nine individual layer attributes from the previous version
        # collapse into a single Sequential:
        self.model1 = nn.Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        # One call replaces the nine-line chain in the previous version
        x = self.model1(x)
        return x


Yorelee = Mary()
print(Yorelee)

# Sanity check with the same dummy batch as before
input = torch.ones((64, 3, 32, 32))
output = Yorelee(input)
print(output.shape)  # correct if this prints torch.Size([64, 10])

# Inspect the model graph with TensorBoard
writer = SummaryWriter("logs")
writer.add_graph(Yorelee, input)
writer.close()
```
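A side benefit of nn.Sequential is that its submodules can be accessed by index, which is convenient for inspecting or swapping individual layers. A short sketch reusing the `Mary` class defined above:

```python
net = Mary()
print(net.model1[0])    # first layer: Conv2d(3, 32, kernel_size=(5, 5), ...)
print(net.model1[-1])   # last layer: Linear(in_features=64, out_features=10, ...)
print(len(net.model1))  # 9 submodules in total
```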