Introduction to Deep Learning (3) - CNN

CNN

Convolutional Layer

We slide a filter over the image spatially (computing dot products at each location).

Conv layers are interspersed with activation functions as well.
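A minimal PyTorch sketch of this idea (layer sizes here are illustrative, not from the notes): a bank of filters slides over the image, and the result goes through an activation.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)                      # a batch of one 3-channel 32x32 image
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
out = torch.relu(conv(x))                          # conv interspersed with an activation
print(out.shape)                                   # torch.Size([1, 16, 30, 30])
```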

What does it learn?

First-layer conv filters: local image templates (they often learn oriented edges and opposing colors)

Problems:
  1. For large images, we need many layers for information to propagate across the whole image

     Solution: downsample inside the network

  2. The feature map shrinks with each layer

     Solution: padding: adding zeros around the input
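The standard output-size formula makes both problems and their fixes concrete; a quick check with illustrative numbers:

```python
# Spatial output size of a conv layer: (W - K + 2*P) // S + 1
# (W = input size, K = kernel size, P = padding, S = stride)
def conv_out_size(W, K, S=1, P=0):
    return (W - K + 2 * P) // S + 1

print(conv_out_size(32, 3, S=1, P=0))   # 30 -> the feature map shrinks
print(conv_out_size(32, 3, S=1, P=1))   # 32 -> padding keeps the size
print(conv_out_size(32, 3, S=2, P=1))   # 16 -> stride 2 downsamples inside the network
```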

Pooling layer

-> downsampling

It has no parameters that need to be learned.

ex:

max pooling

average pooling

...
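A minimal sketch (PyTorch assumed, sizes illustrative): pooling downsamples the feature map without any learnable parameters.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)
pool = nn.MaxPool2d(kernel_size=2, stride=2)   # no learnable parameters
print(pool(x).shape)                           # torch.Size([1, 16, 16, 16])
```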

FC layer (Fully Connected)

The last layer should always be an FC layer.

Batch normalization

We force the inputs at each layer to be nicely scaled (roughly zero mean and unit variance per channel) so that optimization becomes easier.

Usually inserted after an FC layer / convolutional layer and before the non-linearity.
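A minimal sketch of that placement (PyTorch assumed, channel counts illustrative):

```python
import torch.nn as nn

# Conv -> BatchNorm -> non-linearity, matching the placement described above.
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),  # bias is redundant right before BN
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)
# block.train() uses per-batch statistics, block.eval() uses running averages,
# which is the train/test difference listed under the cons below.
```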

Pros:

make the network easier to train

robust to initialization

Cons:

behaves differently during training and testing (batch statistics vs. running averages)

Architectures (history of the ImageNet challenge)

AlexNet

Input: 3 × 227 × 227

First conv layer: 64 filters, kernel 11, stride 4, pad 2

We need to pay attention to the memory usage, parameter count, and FLOPs of each layer.
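A rough worked example of that bookkeeping for the first conv layer, using the numbers above:

```python
# AlexNet's first conv layer: 3x227x227 input, 64 filters, kernel 11, stride 4, pad 2.
C_in, H = 3, 227
C_out, K, S, P = 64, 11, 4, 2

H_out = (H - K + 2 * P) // S + 1                  # 56
activations = C_out * H_out * H_out               # output memory: 200,704 elements (~784 KB as float32)
params = C_out * (C_in * K * K) + C_out           # weights + biases: 23,296
flops = activations * (C_in * K * K)              # multiply-adds: ~72.9M
print(H_out, activations, params, flops)
```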

ZFNet

larger AlexNet

VGG

Rules:

  1. All conv layers are 3×3, stride 1, pad 1
  2. All max pools are 2×2, stride 2
  3. After each pool, double the number of channels (a code sketch of one stage follows the stage list below)

Stages:

conv-conv-pool

conv-conv-pool

conv-conv-pool

conv-conv-[conv]-pool

conv-conv-[conv]-pool
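A sketch of one VGG-style stage that follows the rules above (channel counts illustrative):

```python
import torch.nn as nn

# One stage: 3x3 convs with stride 1 / pad 1, then a 2x2 max pool with stride 2.
def vgg_stage(in_ch, out_ch, num_convs=2):
    layers = []
    for i in range(num_convs):
        layers += [
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # halves the spatial size
    return nn.Sequential(*layers)

stage1 = vgg_stage(3, 64)     # conv-conv-pool
stage2 = vgg_stage(64, 128)   # channels double after each pool
```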

GoogLeNet

Stem network: aggressively downsamples the input

Inception module:

A local unit, repeated throughout the network, with parallel branches of different kernel sizes

Use 1×1 bottleneck convolutions to reduce the channel dimension

At the end, rather than flattening (which destroys the spatial structure and requires a giant FC weight matrix),

GoogLeNet uses global average pooling: 7 × 7 × 1024 -> 1024

There is only one FC layer at the end.

Find the bottlenecks, and reduce the number of learnable parameters and the memory footprint as much as possible.
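A minimal sketch of the two tricks just mentioned (PyTorch assumed, shapes illustrative): a 1×1 bottleneck shrinks the channel dimension cheaply, and global average pooling replaces the giant flatten-into-FC step.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1024, 7, 7)

bottleneck = nn.Conv2d(1024, 256, kernel_size=1)   # 1x1 conv: reduces channels 1024 -> 256
gap = nn.AdaptiveAvgPool2d(1)                      # global average pooling: 7x7x1024 -> 1024

print(bottleneck(x).shape)                         # torch.Size([1, 256, 7, 7])
print(gap(x).flatten(1).shape)                     # torch.Size([1, 1024]), fed to the single FC layer
```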

Auxiliary Classifiers:

To help the deep network converge (batch normalization had not been invented yet): auxiliary classification outputs inject additional gradient into the lower layers.

Residual Networks

We find that, sometimes, making the network deeper actually causes it to underfit.

A deeper network strictly has the capacity to do whatever a shallower one can (the extra layers could just learn identity mappings), but in practice such parameters are hard to learn by optimization.

So we need the residual network!

Residual connections make the identity easy to learn: if all of a block's weights are set to 0, the block simply outputs its input.

ResNets still imitate VGG's stage design: between stages, the spatial resolution is halved and the channels are doubled.
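A minimal residual-block sketch (a plain two-conv version; details vary across ResNet variants): the skip connection means that if both convs output zero, the block computes the identity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """Two 3x3 convs plus a skip connection; with ~zero weights the block is the identity."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)              # residual (skip) connection

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)      # torch.Size([1, 64, 56, 56])
```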

ResNeXt

Adding groups (grouped convolution) improves performance at the same computational complexity.
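Grouped convolution is what the `groups` argument of a conv layer does; a quick parameter-count comparison (channel counts illustrative):

```python
import torch.nn as nn

dense   = nn.Conv2d(128, 128, kernel_size=3, padding=1)             # one big filter bank
grouped = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32)  # 32 parallel groups

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(grouped))   # the grouped conv uses ~1/32 of the conv weights
```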

MobileNets

Reduce the computational cost so the network is affordable on mobile devices.
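The notes do not spell out the mechanism; in MobileNets the saving comes from replacing standard convolutions with depthwise separable convolutions. A minimal sketch (channel counts illustrative):

```python
import torch.nn as nn

# Depthwise separable conv: a per-channel 3x3 conv followed by a 1x1 pointwise conv.
def depthwise_separable(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
    )

standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)
cheap = depthwise_separable(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(cheap))   # far fewer parameters (and FLOPs) for the separable version
```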

Transfer learning

We can pretrain the model on a large dataset.

When applying it to a new dataset, freeze the main body of the network, then either train a linear classifier on top of the features or fine-tune the top layers.
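A common recipe matching this description, sketched with a torchvision ResNet purely as an illustration (the 10-class output is an assumption):

```python
import torch.nn as nn
from torchvision import models

# Load a model pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the main body of the network.
for p in model.parameters():
    p.requires_grad = False

# Replace the last FC layer with a new linear classifier for the new dataset.
model.fc = nn.Linear(model.fc.in_features, 10)
# Fine-tuning then trains only model.fc (or, optionally, a few of the top layers).
```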

This is somewhat controversial: training from scratch without pretraining can reach similar results in roughly 2-3x the training time.
