Intro to Deep Learning (3) - CNN

CNN

Convolutional Layer

We use a filter to slide over the image spatially (computing dot products)

Interspersed with activation function as well

What does it learn?

First-layer conv filters: local image templates (Often learns oriented edges, opposing colors)
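The sliding-dot-product operation itself can be sketched in plain Python. This is a minimal single-channel sketch (no framework assumed; `conv2d` is a hypothetical helper name), just to make "slide a filter over the image, computing dot products" concrete:

```python
def conv2d(image, kernel, stride=1, pad=0):
    """Slide `kernel` over `image` spatially, computing dot products.
    image, kernel: 2-D lists of numbers (single channel, for clarity)."""
    k = len(kernel)
    # Zero-pad the input on all four sides.
    w = len(image[0]) + 2 * pad
    padded = [[0.0] * w for _ in range(pad)]
    for row in image:
        padded.append([0.0] * pad + list(row) + [0.0] * pad)
    padded += [[0.0] * w for _ in range(pad)]

    h_out = (len(padded) - k) // stride + 1
    w_out = (len(padded[0]) - k) // stride + 1
    out = []
    for i in range(h_out):
        out_row = []
        for j in range(w_out):
            # Dot product of the kernel with the current window.
            s = sum(padded[i * stride + di][j * stride + dj] * kernel[di][dj]
                    for di in range(k) for dj in range(k))
            out_row.append(s)
        out.append(out_row)
    return out

# A horizontal-edge template: responds where intensity changes vertically.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [1, 1, 1, 1]]
edge = [[-1, -1, -1],
        [ 0,  0,  0],
        [ 1,  1,  1]]
print(conv2d(img, edge))  # [[0, 0], [3, 3]] -- fires only near the edge
```

Note how the oriented-edge filter only produces a strong response at windows that straddle the intensity change, which is exactly the "local template matching" behavior described above.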

Problems:
  1. For large images, we need many layers to get information about the whole image

     Solution: Downsample inside the network

  2. The feature map shrinks with each layer

     Solution: Padding: add zeros around the input
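The spatial output size follows the standard formula out = (W - K + 2P) / S + 1, which shows both effects: without padding the map shrinks, and with pad = (K-1)/2 a stride-1 conv keeps the size. A small sketch (helper name `conv_out_size` is mine):

```python
def conv_out_size(w, k, stride=1, pad=0):
    """Spatial output size of a conv/pool layer: (W - K + 2P) // S + 1."""
    return (w - k + 2 * pad) // stride + 1

print(conv_out_size(32, 3))                    # 30 -- no padding: map shrinks
print(conv_out_size(32, 3, stride=1, pad=1))   # 32 -- "same" padding keeps size
print(conv_out_size(227, 11, stride=4, pad=2)) # 56 -- AlexNet conv1
```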

Pooling layer

-> downsampling

Has no parameters that need to be learned.

ex:

max pooling

Average pooling

...
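Both pooling variants are just a fixed reduction over each window, with nothing to learn. A plain-Python sketch (the `pool2d`/`avg` helper names are mine), using the common 2*2 window with stride 2:

```python
def pool2d(fmap, k=2, stride=2, op=max):
    """Downsample a 2-D feature map by applying `op` to each k x k window.
    No learned parameters -- `op` is a fixed function like max or average."""
    h_out = (len(fmap) - k) // stride + 1
    w_out = (len(fmap[0]) - k) // stride + 1
    return [[op(fmap[i * stride + di][j * stride + dj]
                for di in range(k) for dj in range(k))
             for j in range(w_out)]
            for i in range(h_out)]

def avg(window):
    vals = list(window)
    return sum(vals) / len(vals)

fmap = [[1, 2, 5, 6],
        [3, 4, 7, 8],
        [9, 8, 3, 2],
        [7, 6, 1, 0]]
print(pool2d(fmap, op=max))  # [[4, 8], [9, 3]]
print(pool2d(fmap, op=avg))  # [[2.5, 6.5], [7.5, 1.5]]
```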

FC layer(Fully Connected)

The last layer should always be an FC layer.

Batch normalization

We need to force the inputs at each layer to be nicely scaled so that optimization becomes easier.

Usually inserted after an FC / convolutional layer, before the non-linearity

Pros:

make the network easier to train

robust to initialization

Cons:

behaves differently during training and testing
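The train/test difference is the key con: training normalizes with the current batch's statistics (while accumulating running averages), whereas test time uses the frozen running averages. A minimal per-feature sketch in plain Python (function names are mine; scale/shift parameters gamma and beta omitted for brevity):

```python
def batchnorm_train(batch, running_mean, running_var, momentum=0.1, eps=1e-5):
    """Training mode: normalize with *batch* statistics and update the
    running averages that will be used later at test time."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    running_mean = (1 - momentum) * running_mean + momentum * mean
    running_var = (1 - momentum) * running_var + momentum * var
    out = [(x - mean) / (var + eps) ** 0.5 for x in batch]
    return out, running_mean, running_var

def batchnorm_eval(batch, running_mean, running_var, eps=1e-5):
    """Test mode: use the frozen running statistics, so one sample's output
    no longer depends on the rest of the batch."""
    return [(x - running_mean) / (running_var + eps) ** 0.5 for x in batch]

batch = [10.0, 12.0, 14.0, 16.0]
out, rm, rv = batchnorm_train(batch, running_mean=0.0, running_var=1.0)
print(out)  # roughly zero-mean, unit-variance
print(rm, rv)
```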

Architectures (History of the ImageNet Challenge)

AlexNet

Input: 3 * 227 * 227

First conv layer: 64 filters, kernel 11, stride 4, pad 2

We need to pay attention to the memory usage, parameter count, and FLOPs of each layer.
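These three costs can be tallied per layer. A sketch for one conv layer (helper name is mine; FLOPs here counts one multiply-add per kernel element per output value, a common convention in architecture tables), checked against AlexNet's conv1:

```python
def conv_layer_cost(c_in, c_out, k, h_out, w_out):
    """Params / activation memory / FLOPs bookkeeping for one conv layer."""
    params = c_out * (c_in * k * k + 1)   # weights + biases
    out_elems = c_out * h_out * w_out
    memory_kb = out_elems * 4 / 1024      # float32 output activations
    flops = out_elems * c_in * k * k      # multiply-adds per output element
    return params, memory_kb, flops

# AlexNet conv1: 3 x 227 x 227 input, 64 filters, kernel 11, stride 4, pad 2
# -> output is 64 x 56 x 56.
print(conv_layer_cost(3, 64, 11, 56, 56))  # (23296, 784.0, 72855552)
```

Note how cheap conv1 is in parameters (~23K) but heavy in activation memory and FLOPs; for FC layers the balance flips, which is exactly why these tables are worth tracking.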

ZFNet

A larger AlexNet.

VGG

Rules:

  1. All convs are 3*3, stride 1, pad 1
  2. All max pools are 2*2, stride 2
  3. After each pool, double the number of channels

Stages:

conv-conv-pool

conv-conv-pool

conv-conv-pool

conv-conv-[conv]-pool

conv-conv-[conv]-pool
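Applying the three rules, the shapes through the five stages can be tracked mechanically (a sketch; `vgg_shapes` is my helper name, and the channel list corresponds to VGG-16 with a 224 * 224 input):

```python
def vgg_shapes(size=224, stage_channels=(64, 128, 256, 512, 512)):
    """Track (channels, height, width) at the end of each VGG-style stage:
    3x3 stride-1 pad-1 convs keep the spatial size, the stage-ending 2x2
    stride-2 max pool halves it, and channels double stage to stage
    (capped at 512)."""
    shapes = []
    for c in stage_channels:
        size //= 2               # the 2x2 stride-2 max pool ends the stage
        shapes.append((c, size, size))
    return shapes

print(vgg_shapes())
# [(64, 112, 112), (128, 56, 56), (256, 28, 28), (512, 14, 14), (512, 7, 7)]
```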

GoogLeNet

Stem network: aggressively downsamples input

Inception module:

Use parallel local branches with different kernel sizes

Use 1*1 "bottleneck" convolutions to reduce the channel dimension

At the end, rather than flattening (which destroys the spatial information and requires a giant weight matrix),

GoogLeNet uses global average pooling: 7 * 7 * 1024 -> 1024

There is only one FC layer at the end.
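Global average pooling just averages each channel's spatial grid, so C x H x W collapses to C with zero learned parameters. A tiny plain-Python sketch (helper name is mine; a miniature 2-channel map stands in for 7 * 7 * 1024):

```python
def global_avg_pool(fmap):
    """fmap: list of channels, each a 2-D spatial grid.
    Returns one number per channel (the spatial mean): C x H x W -> C,
    with no parameters, unlike a flatten + giant FC matrix."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in fmap]

fmap = [[[1, 3], [5, 7]],     # channel 0, spatial mean 4.0
        [[2, 2], [2, 2]]]     # channel 1, spatial mean 2.0
print(global_avg_pool(fmap))  # [4.0, 2.0]
```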

Find the bottlenecks and reduce the number of learnable parameters / memory footprint as much as possible.

Auxiliary Classifiers:

To help the deep network converge (batch normalization had not been invented yet): auxiliary classification outputs inject additional gradient into the lower layers

Residual Networks

We find that, surprisingly, making the network deeper can sometimes make it underfit.

A deeper network strictly has the capacity to do whatever a shallower one can, but its parameters are hard to learn.

So we need the residual network!

This makes the identity easy to learn: with all parameters set to 0, a residual block computes y = x.
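A minimal scalar sketch of the idea, with plain weights standing in for the conv layers (names and numbers are mine, purely illustrative):

```python
def residual_block(x, w1, w2):
    """y = x + F(x), where F stands in for conv-ReLU-conv (here two scalar
    weights with a ReLU in between). If w1 = w2 = 0, then F(x) = 0 and the
    block is exactly the identity, so 'doing nothing' is easy to learn."""
    h = max(0.0, w1 * x)   # first layer + ReLU
    f = w2 * h             # second layer: the residual F(x)
    return x + f           # skip connection adds the input back

print(residual_block(3.0, w1=0.0, w2=0.0))  # 3.0 -- identity at zero weights
print(residual_block(3.0, w1=1.0, w2=0.5))  # 4.5 -- a learned tweak of x
```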

ResNets still imitate VGG's stage design: at each stage, halve the spatial size and double the channels.

ResNeXt

Adding groups improves performance at the same computational complexity.
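Why the complexity stays the same: a grouped convolution splits the channels into g independent groups, so each output channel only sees C_in/g input channels and the weight count divides by g. A sketch of the bookkeeping (helper name is mine; 256 channels with 32 groups is a ResNeXt-flavored example):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a (possibly grouped) k x k convolution: each output
    channel connects to only c_in // groups input channels."""
    assert c_in % groups == 0 and c_out % groups == 0
    return c_out * (c_in // groups) * k * k

dense   = conv_params(256, 256, 3)             # ordinary convolution
grouped = conv_params(256, 256, 3, groups=32)  # 32 parallel groups
print(dense, grouped, dense // grouped)        # 589824 18432 32
```

The g-fold savings can be spent on more channels or more branches at equal cost, which is where the performance gain comes from.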

MobileNets

reduce cost to make it affordable on mobile devices
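The main cost reduction comes from replacing a standard convolution with a depthwise separable one: a per-channel k x k depthwise conv followed by a 1*1 pointwise conv that mixes channels. A multiply-add comparison (helper names and the example layer sizes are mine):

```python
def standard_conv_macs(c_in, c_out, k, h, w):
    """Multiply-adds for a standard conv producing a c_out x h x w output."""
    return h * w * c_out * c_in * k * k

def depthwise_separable_macs(c_in, c_out, k, h, w):
    """MobileNet-style factorization: k x k depthwise (one filter per input
    channel) + 1x1 pointwise (channel mixing)."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

std = standard_conv_macs(64, 128, 3, 56, 56)
sep = depthwise_separable_macs(64, 128, 3, 56, 56)
print(std, sep, round(std / sep, 1))  # ~8.4x cheaper, close to the k^2 = 9 ideal
```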

Transfer learning

We can pretrain the model on a dataset.

When applying it to a new dataset, just fine-tune, or train a linear classifier on top of the last layers.

Freeze the main body of the net.

This is somewhat controversial: even without pretraining, training 2-3x longer can reach similar results.
