PyTorch Learning -- Neural Networks -- A Small Hands-On Exercise (Hand-Writing the CIFAR-10 Model Structure) and Using Sequential

1. How to use Sequential

torch.nn.Sequential packs an ordered sequence of submodules into a single container and runs them one after another; its use is demonstrated further in the hand-written code below.
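
As a quick standalone illustration (a minimal sketch whose layer sizes are arbitrary and chosen only for demonstration), Sequential simply calls its submodules in the order they were passed in:

import torch
from torch import nn

# model(x) runs the three layers below in order: Linear -> ReLU -> Linear
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

x = torch.randn(1, 4)
print(model(x).shape)  # torch.Size([1, 2])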

2. Hand-writing the CIFAR-10 model structure

The hand-written code:

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.tensorboard import SummaryWriter


class Mary(nn.Module):
    def __init__(self):
        super(Mary, self).__init__()
        self.conv1 = Conv2d(3, 32, 5, padding=2)    # (3, 32, 32) -> (32, 32, 32)
        self.maxpool1 = MaxPool2d(2)                # (32, 32, 32) -> (32, 16, 16)
        self.conv2 = Conv2d(32, 32, 5, padding=2)   # (32, 16, 16) -> (32, 16, 16)
        self.maxpool2 = MaxPool2d(2)                # (32, 16, 16) -> (32, 8, 8)
        self.conv3 = Conv2d(32, 64, 5, padding=2)   # (32, 8, 8) -> (64, 8, 8)
        self.maxpool3 = MaxPool2d(2)                # (64, 8, 8) -> (64, 4, 4)
        self.flatten = Flatten()                    # (64, 4, 4) -> 1024
        self.linear1 = Linear(1024, 64)
        self.linear2 = Linear(64, 10)               # 10 CIFAR-10 classes
    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        return x
Yorelee = Mary()
print(Yorelee)
# Sanity check: push a dummy batch through the network
input = torch.ones((64, 3, 32, 32))
output = Yorelee(input)
print(output.shape)  # should be torch.Size([64, 10])

# Visualize the model graph with TensorBoard
writer = SummaryWriter("logs")
writer.add_graph(Yorelee, input)
writer.close()
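
The 1024 that feeds the first Linear layer comes from the shapes traced in the comments above: each of the three MaxPool2d(2) layers halves the 32x32 spatial size down to 4x4, and with 64 channels Flatten yields 64 * 4 * 4 = 1024 features per sample. A quick check of this (a sketch that reuses the Mary class defined above):

# Trace the shape up to the Flatten layer
m = Mary()
x = torch.ones((1, 3, 32, 32))
x = m.maxpool1(m.conv1(x))
x = m.maxpool2(m.conv2(x))
x = m.maxpool3(m.conv3(x))
print(x.shape)             # torch.Size([1, 64, 4, 4])
print(m.flatten(x).shape)  # torch.Size([1, 1024])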

TensorBoard output (the computation graph produced by add_graph):
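
To view the graph, start TensorBoard from the project directory with tensorboard --logdir=logs and open the Graphs tab in the browser; double-clicking the top-level node expands it into the individual layers.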

The same model rewritten with nn.Sequential:

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.tensorboard import SummaryWriter


class Mary(nn.Module):
    def __init__(self):
        super(Mary, self).__init__()
        # self.conv1 = Conv2d(3,32,5,padding=2)
        # self.maxpool1 = MaxPool2d(2)
        # self.conv2 = Conv2d(32,32,5,padding=2)
        # self.maxpool2 = MaxPool2d(2)
        # self.conv3 = Conv2d(32,64,5,padding=2)
        # self.maxpool3 = MaxPool2d(2)
        # self.flatten = Flatten()
        # self.linear1 = Linear(1024,64)
        # self.linear2 = Linear(64,10)
        # The same layers as above, wrapped in one Sequential container
        self.model1 = nn.Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )
    def forward(self, x):
        # x = self.conv1(x)
        # x = self.maxpool1(x)
        # x = self.conv2(x)
        # x = self.maxpool2(x)
        # x = self.conv3(x)
        # x = self.maxpool3(x)
        # x = self.flatten(x)
        # x = self.linear1(x)
        # x = self.linear2(x)
        x = self.model1(x)
        return x
Yorelee = Mary()
print(Yorelee)
# Sanity check: push a dummy batch through the network
input = torch.ones((64, 3, 32, 32))
output = Yorelee(input)
print(output.shape)  # should be torch.Size([64, 10])

# Visualize the model graph with TensorBoard
writer = SummaryWriter("logs")
writer.add_graph(Yorelee, input)
writer.close()
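
One more note on using Sequential (a small sketch using the Yorelee instance above): printing the model now shows the layers grouped under model1, and the container also supports indexing and len(), so individual layers can still be reached by position.

# Layers inside a Sequential can be accessed by index
print(Yorelee.model1[0])    # the first Conv2d(3, 32, ...) layer
print(len(Yorelee.model1))  # 9 submodules in total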