PyTorch: torch.nn.Module.modules()

Preface

When building a neural network in PyTorch, the model you define must inherit from the parent class torch.nn.Module. The Module class provides a method that returns an iterator over all of the modules in the network: torch.nn.Module.modules().


1. torch.nn.Module.modules()

The docstring in the source code explains it as follows:

```python
    def modules(self) -> Iterator['Module']:
        r"""Returns an iterator over all modules in the network.

        Yields:
            Module: a module in the network

        Note:
            Duplicate modules are returned only once. In the following
            example, ``l`` will be returned only once.

        Example::

            >>> l = nn.Linear(2, 2)
            >>> net = nn.Sequential(l, l)
            >>> for idx, m in enumerate(net.modules()):
                    print(idx, '->', m)

            0 -> Sequential(
              (0): Linear(in_features=2, out_features=2, bias=True)
              (1): Linear(in_features=2, out_features=2, bias=True)
            )
            1 -> Linear(in_features=2, out_features=2, bias=True)

        """
        for _, module in self.named_modules():
            yield module
```
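
Note the implementation: modules() simply delegates to named_modules() and discards the names. A quick sketch to illustrate the relationship (any small model works; the Sequential here is just an example):

```python
import torch.nn as nn

# named_modules() yields (name, module) pairs; the root module comes
# first with an empty name, and modules() returns the same sequence
# of modules without the names.
net = nn.Sequential(nn.Linear(2, 2), nn.ReLU())
for name, m in net.named_modules():
    print(repr(name), '->', type(m).__name__)
# '' -> Sequential
# '0' -> Linear
# '1' -> ReLU
```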

modules() does not return just a single layer of the network; it returns every module at every level of nesting. For example, in the code below I first define the VGG16 network structure and then print out the result of calling modules() on it.
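
The definition code itself did not survive here, so the following is a minimal sketch of a VGG16 class that reproduces the printed output below (the cfg list and the num_classes parameter are my own scaffolding, not the original code):

```python
import torch.nn as nn

class VGG16(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        # Output channels per conv layer; 'M' marks a 2x2 max-pooling layer.
        cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
               512, 512, 512, 'M', 512, 512, 512, 'M']
        layers, in_channels = [], 3
        for v in cfg:
            if v == 'M':
                layers.append(nn.MaxPool2d(kernel_size=(2, 2), stride=2))
            else:
                layers.append(nn.Conv2d(in_channels, v, kernel_size=(3, 3),
                                        stride=(1, 1), padding=(1, 1)))
                layers.append(nn.ReLU())
                in_channels = v
        self.features = nn.Sequential(*layers)
        # 512 channels x 7 x 7 spatial positions = 25088 input features
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(512 * 7 * 7, 4096),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        return self.classifier(x)

if __name__ == '__main__':
    for idx, m in enumerate(VGG16().modules()):
        print(idx, '->', m)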

The output is as follows:

```text
E:\study\python\anaconda\envs\graduate_project\python.exe E:/study/project/compute_vision/VGG/model.py
0 -> VGG16(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU()
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU()
    (9): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU()
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU()
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU()
    (16): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU()
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU()
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU()
    (23): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU()
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU()
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU()
    (30): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (classifier): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=25088, out_features=4096, bias=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
1 -> Sequential(
  (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (1): ReLU()
  (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (3): ReLU()
  (4): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
  (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (6): ReLU()
  (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (8): ReLU()
  (9): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
  (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (11): ReLU()
  (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (13): ReLU()
  (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (15): ReLU()
  (16): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
  (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (18): ReLU()
  (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (20): ReLU()
  (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (22): ReLU()
  (23): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
  (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (25): ReLU()
  (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (27): ReLU()
  (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (29): ReLU()
  (30): MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
)
2 -> Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
3 -> ReLU()
4 -> Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
5 -> ReLU()
6 -> MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
7 -> Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
8 -> ReLU()
9 -> Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
10 -> ReLU()
11 -> MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
12 -> Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
13 -> ReLU()
14 -> Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
15 -> ReLU()
16 -> Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
17 -> ReLU()
18 -> MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
19 -> Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
20 -> ReLU()
21 -> Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
22 -> ReLU()
23 -> Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
24 -> ReLU()
25 -> MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
26 -> Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
27 -> ReLU()
28 -> Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
29 -> ReLU()
30 -> Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
31 -> ReLU()
32 -> MaxPool2d(kernel_size=(2, 2), stride=2, padding=0, dilation=1, ceil_mode=False)
33 -> Sequential(
  (0): Dropout(p=0.5, inplace=False)
  (1): Linear(in_features=25088, out_features=4096, bias=True)
  (2): Dropout(p=0.5, inplace=False)
  (3): Linear(in_features=4096, out_features=4096, bias=True)
  (4): Linear(in_features=4096, out_features=1000, bias=True)
)
34 -> Dropout(p=0.5, inplace=False)
35 -> Linear(in_features=25088, out_features=4096, bias=True)
36 -> Dropout(p=0.5, inplace=False)
37 -> Linear(in_features=4096, out_features=4096, bias=True)
38 -> Linear(in_features=4096, out_features=1000, bias=True)

Process finished with exit code 0
```

Index 0: the entire VGG16 model, containing both the feature-extraction module and the classification module

Index 1: the feature-extraction module (features)

Indices 2-32: each individual layer of the feature-extraction module

Index 33: the classification module (classifier)

Indices 34-38: each individual layer of the classification module
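
This recursive traversal is what makes modules() convenient for operations that must touch every layer, such as weight initialization. A minimal sketch (the initialization scheme below is illustrative, not from the original post):

```python
import torch.nn as nn

def init_weights(model: nn.Module) -> None:
    # modules() recurses into nested containers, so every Conv2d and
    # Linear is visited no matter how deeply it is wrapped.
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            if m.bias is not None:
                nn.init.zeros_(m.bias)
        elif isinstance(m, nn.Linear):
            nn.init.normal_(m.weight, std=0.01)
            nn.init.zeros_(m.bias)

init_weights(VGG16())  # works on the VGG16 sketch defined above
```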
