Introduction to Deep Learning (3) - CNN

CNN

Convolutional Layer

We slide a filter over the image spatially, computing dot products at each location.

Conv layers are interspersed with activation functions as well.
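A minimal sketch of a conv layer followed by an activation (PyTorch assumed; the notes don't name a framework):

```python
import torch
import torch.nn as nn

# One conv layer: 3 input channels, 6 filters of size 5x5, followed by ReLU.
conv = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5)
relu = nn.ReLU()

x = torch.randn(1, 3, 32, 32)   # a batch with one 32x32 RGB image
out = relu(conv(x))
print(out.shape)                # torch.Size([1, 6, 28, 28]): (32 - 5) + 1 = 28
```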

What does it learn?

First-layer conv filters act as local image templates (they often learn oriented edges and opposing colors).

Problems:
  1. For large images, we need many layers to get information about the whole image.

     Solution: downsample inside the network.

  2. The feature map shrinks with each layer.

     Solution: padding: add zeros around the input (see the output-size sketch below).
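Both problems come down to the output-size arithmetic; a quick sketch (the helper function is our own, not from the notes):

```python
def conv_output_size(w, k, s, p):
    """Spatial output size for input width w, kernel k, stride s, padding p."""
    return (w - k + 2 * p) // s + 1

print(conv_output_size(32, 3, 1, 1))   # 32: "same" padding preserves the size
print(conv_output_size(32, 5, 1, 0))   # 28: without padding the map shrinks
```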

Pooling layer

-> downsampling

It has no parameters that need to be learned.

ex:

max pooling

average pooling

...
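For example, 2x2 max pooling with stride 2 halves the spatial dimensions (a minimal PyTorch sketch):

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2)   # no learnable parameters
x = torch.randn(1, 64, 56, 56)
print(pool(x).shape)                           # torch.Size([1, 64, 28, 28])
```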

FC layer (Fully Connected)

The last layer should always be an FC layer.

Batch normalization

We need to force the inputs at each layer to be nicely scaled (zero mean, unit variance) so that optimization becomes easier.

Usually inserted after an FC layer / convolutional layer, before the non-linearity.
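A minimal sketch of what batch normalization computes at training time (a toy function of ours, assuming a 2D (N, D) input):

```python
import torch

def batchnorm_train(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch, then scale and shift.
    mu = x.mean(dim=0)                        # per-feature batch mean
    var = x.var(dim=0, unbiased=False)        # per-feature batch variance
    x_hat = (x - mu) / torch.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta               # learnable scale/shift

x = torch.randn(128, 64)
y = batchnorm_train(x, gamma=torch.ones(64), beta=torch.zeros(64))
```

At test time the per-batch statistics are replaced by running averages collected during training, which is exactly the train/test difference listed under Cons below.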

Pros:

makes the network easier to train

more robust to initialization

Cons:

behaves differently during training and testing (test time uses running averages instead of per-batch statistics)

Architectures (History of the ImageNet Challenge)

AlexNet

Input: 3 * 227 * 227

First conv layer: 64 filters, 11 * 11 kernel, stride 4, pad 2

We need to pay attention to the memory usage, parameter count, and FLOPs of each layer.
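A worked example for the first conv layer above (assuming the 227 * 227 input):

```python
# Output size: (227 - 11 + 2*2) / 4 + 1 = 56, so the output is 64 x 56 x 56.
H_out = (227 - 11 + 2 * 2) // 4 + 1        # 56
outputs = 64 * H_out * H_out               # 200,704 output elements
memory_kb = outputs * 4 / 1024             # ~784 KB of float32 activations
params = 64 * (3 * 11 * 11) + 64           # 23,296 weights + biases
flops = outputs * (3 * 11 * 11)            # ~72.9M multiply-adds
print(H_out, memory_kb, params, flops)
```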

ZFNet

A larger AlexNet.

VGG

Rules:

  1. All conv are 3*3, stride 1, pad 1
  2. All max pool are 2*2, stride 2
  3. After each pool, double the number of channels (one stage is sketched in code after the stage list below)

Stages:

conv-conv-pool

conv-conv-pool

conv-conv-pool

conv-conv-[conv]-pool

conv-conv-[conv]-pool
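One VGG stage under these rules might look like this sketch (PyTorch assumed):

```python
import torch.nn as nn

# One VGG stage: two 3x3 convs (stride 1, pad 1), then a 2x2 max pool.
def vgg_stage(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

stage1 = vgg_stage(3, 64)     # 3 x 224 x 224 -> 64 x 112 x 112
stage2 = vgg_stage(64, 128)   # channels double after each pool
```

The design rationale: two stacked 3*3 convs cover the same receptive field as one 5*5 conv, but with fewer parameters and an extra non-linearity.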

GoogLeNet

Stem network: aggressively downsamples input

Inception module:

A local unit with parallel branches of different kernel sizes.

Uses 1*1 bottleneck convolutions to reduce channel dimensions.
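A simplified Inception-style module as a sketch (the real module also has a pooling branch and activations, omitted here; the channel counts are made up):

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel branches with different kernel sizes, concatenated on channels."""
    def __init__(self, c_in):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, 16, kernel_size=1)
        self.b3 = nn.Sequential(
            nn.Conv2d(c_in, 16, kernel_size=1),            # 1x1 bottleneck
            nn.Conv2d(16, 24, kernel_size=3, padding=1),
        )
        self.b5 = nn.Sequential(
            nn.Conv2d(c_in, 16, kernel_size=1),            # 1x1 bottleneck
            nn.Conv2d(16, 24, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # All branches preserve spatial size, so outputs concatenate cleanly.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
```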

At the end, rather than flattening (which destroys the spatial information and requires a giant FC layer), GoogLeNet uses global average pooling: 7 * 7 * 1024 -> 1024.

There is only one FC layer at the end.
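A sketch of that final stage:

```python
import torch
import torch.nn as nn

# Global average pooling instead of flattening: 1024 x 7 x 7 -> 1024, with no
# parameters, followed by the single FC layer that produces class scores.
x = torch.randn(1, 1024, 7, 7)
feat = nn.AvgPool2d(kernel_size=7)(x).flatten(1)   # shape (1, 1024)
logits = nn.Linear(1024, 1000)(feat)               # e.g. 1000 ImageNet classes
```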

Find the bottlenecks and reduce the number of learnable parameters / the memory footprint as much as possible.

Auxiliary Classifiers:

To help the deep network converge (batch normalization had not been invented yet): auxiliary classification outputs inject additional gradient at the lower layers.

Residual Networks

We find that sometimes making the network deeper actually leads to underfitting.

A deeper network should strictly be able to do whatever a shallower one can (e.g., by making the extra layers compute the identity), but in practice those parameters are hard to learn.

So we need the residual network!

A residual block computes F(x) + x, so it can learn the identity simply by driving all of F's parameters to 0.
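A basic residual block as a sketch (PyTorch assumed; this is the identity-shortcut case with matching channel counts):

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual block: output = F(x) + x. Zero weights in F give the identity."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # the shortcut connection
```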

ResNet still imitates VGG in its stage design: each stage halves the spatial resolution and doubles the number of channels.

ResNeXt

Adding groups (parallel pathways inside a block) improves performance at the same computational complexity.
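Grouped convolution is the mechanism (a minimal sketch):

```python
import torch.nn as nn

# groups=32 splits the 256 channels into 32 parallel pathways, cutting the
# weight count ~32x versus a standard conv of the same shape.
grouped = nn.Conv2d(256, 256, kernel_size=3, padding=1, groups=32)
standard = nn.Conv2d(256, 256, kernel_size=3, padding=1)
print(sum(p.numel() for p in grouped.parameters()))    # 18,688
print(sum(p.numel() for p in standard.parameters()))   # 590,080
```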

MobileNets

Reduce computational cost to make the network affordable on mobile devices.
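The core trick (not spelled out in the notes above) is the depthwise separable convolution: a per-channel depthwise conv plus a 1*1 pointwise conv in place of one standard conv. A sketch:

```python
import torch.nn as nn

def depthwise_separable(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),  # depthwise
        nn.Conv2d(c_in, c_out, kernel_size=1),                         # pointwise
    )

# Roughly 8-9x fewer multiply-adds than nn.Conv2d(64, 128, 3, padding=1).
block = depthwise_separable(64, 128)
```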

Transfer learning

We can pretrain the model on a dataset.

When applying it to a new dataset, just fine-tune it, or train a linear classifier on top of the frozen features.

Freeze the main body of the net (as sketched below).
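A minimal transfer-learning sketch (torchvision's ResNet-18 is our assumed example; the notes don't name a model):

```python
import torch.nn as nn
import torchvision.models as models

# Load ImageNet-pretrained weights, freeze the backbone, train only a new head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                        # freeze the main body
model.fc = nn.Linear(model.fc.in_features, 10)     # new head, e.g. 10 classes
```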

This is somewhat controversial: even without pretraining, training from scratch for about 2-3x as long can reach similar results.
