Introduction to Deep Learning (3) - CNN

CNN

Convolutional Layer

We use a filter to slide over the image spatially (computing dot products)

Interspersed with activation functions as well
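A minimal sketch of a conv layer followed by an activation, assuming PyTorch; the channel count and kernel size here are illustrative, not taken from any particular network:

```python
# A minimal sketch (PyTorch assumed): one conv layer followed by ReLU.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)   # batch of one 3-channel 32x32 image
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, stride=1, padding=0)
relu = nn.ReLU()

out = relu(conv(x))             # filters slide over the image spatially, computing dot products
print(out.shape)                # torch.Size([1, 16, 28, 28]) -- 32 - 5 + 1 = 28
```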

What does it learn?

First-layer conv filters: local image templates (Often learns oriented edges, opposing colors)

Problems:
  1. For large images, we need many layers before the network sees information about the whole image

     Solution: downsample inside the network

  2. The feature map shrinks with each layer

     Solution: padding: add zeros around the input border (see the output-size sketch after this list)
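A quick check of the conv output-size formula, out = (W - K + 2P) / S + 1, showing how "same" padding keeps the feature map from shrinking; the numbers are illustrative only:

```python
# Illustrative only: conv output size, out = (W - K + 2P) / S + 1.
def conv_out_size(w, k, s, p):
    return (w - k + 2 * p) // s + 1

print(conv_out_size(32, k=3, s=1, p=0))  # 30 -- the feature map shrinks without padding
print(conv_out_size(32, k=3, s=1, p=1))  # 32 -- padding by 1 preserves the spatial size
```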

Pooling layer

-> downsampling

It has no parameters that need to be learned.

ex:

max pooling

Average pooling

...
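A minimal sketch of max pooling, assuming PyTorch; the tensor shape is illustrative:

```python
# A minimal sketch (PyTorch assumed): 2x2 max pooling halves the spatial size and has no learnable parameters.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)
pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(x).shape)                               # torch.Size([1, 64, 16, 16])
print(sum(p.numel() for p in pool.parameters()))   # 0 -- nothing to learn
```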

FC layer (Fully Connected)

The last layer should always be an FC layer.
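A minimal sketch of a classification head, assuming PyTorch; the feature-map shape and class count are hypothetical:

```python
# A minimal sketch (PyTorch assumed): flatten the final feature map and classify with an FC layer.
import torch
import torch.nn as nn

features = torch.randn(1, 64, 4, 4)    # hypothetical final conv feature map
head = nn.Sequential(
    nn.Flatten(),                      # 64 * 4 * 4 = 1024 values
    nn.Linear(64 * 4 * 4, 10),         # 10 class scores
)
print(head(features).shape)            # torch.Size([1, 10])
```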

Batch normalization

We force the inputs at each layer to be nicely scaled so that optimization is easier.

Usually inserted after an FC or convolutional layer, before the non-linearity

Pros:

Makes the network easier to train

More robust to initialization

Cons:

Behaves differently during training and testing
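A minimal sketch of the usual ordering, assuming PyTorch; note the explicit train/eval switch, which is exactly the train-vs-test difference mentioned above:

```python
# A minimal sketch (PyTorch assumed): conv -> batch norm -> non-linearity.
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),  # bias is redundant before BN
    nn.BatchNorm2d(16),
    nn.ReLU(),
)

x = torch.randn(8, 3, 32, 32)
block.train()        # training mode: uses per-batch statistics
y_train = block(x)
block.eval()         # test mode: uses running statistics accumulated during training
y_test = block(x)
```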

Architectures (History of the ImageNet Challenge)

AlexNet

Input: 3 × 227 × 227

First conv layer: 64 filters, kernel 11, stride 4, pad 2

We need to pay attention to the memory usage, parameter count, and FLOPs of each layer.
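Illustrative arithmetic for that first conv layer (float32 activations assumed), just to show how the three quantities are computed:

```python
# AlexNet's first conv layer: 3x227x227 input, 64 filters of size 11x11, stride 4, pad 2.
C_in, H = 3, 227
C_out, K, S, P = 64, 11, 4, 2

H_out = (H - K + 2 * P) // S + 1                  # 56
memory_kb = C_out * H_out * H_out * 4 / 1024      # output activations in float32: 784 KB
params = C_out * C_in * K * K + C_out             # weights + biases: 23,296
flops = (C_out * H_out * H_out) * (C_in * K * K)  # multiply-adds per forward pass: ~72.9M

print(H_out, memory_kb, params, flops)
```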

ZFNet

A larger AlexNet

VGG

Rules:

  1. All conv are 3×3, stride 1, pad 1
  2. All max pool are 2×2, stride 2
  3. After each pool, double the number of channels

Stages:

conv-conv-pool

conv-conv-pool

conv-conv-pool

conv-conv-[conv]-pool

conv-conv-[conv]-pool
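A minimal sketch of one such stage, assuming PyTorch; the channel counts are the usual VGG values but are shown here only for illustration:

```python
# A minimal sketch (PyTorch assumed) of a VGG stage: two 3x3 convs, then a 2x2 max pool;
# the channel count doubles from one stage to the next.
import torch.nn as nn

def vgg_stage(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

stage1 = vgg_stage(3, 64)     # e.g. 3 x 224 x 224 -> 64 x 112 x 112
stage2 = vgg_stage(64, 128)   # 64 x 112 x 112 -> 128 x 56 x 56
```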

GoogLeNet

Stem network: aggressively downsamples input

Inception module:

Uses parallel branches of this local unit with different kernel sizes

Uses 1×1 bottleneck convolutions to reduce channel dimensions

At the end, rather than flattening (which destroys the spatial information and requires a giant number of parameters),

GoogLeNet uses global average pooling: 7 × 7 × 1024 -> 1024

There is only one FC layer at the end.

Find the bottlenecks and reduce the number of learnable parameters / the memory footprint as much as possible.
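A minimal sketch of that classification head, assuming PyTorch; the 1000-way output matches ImageNet but is otherwise just an example:

```python
# A minimal sketch (PyTorch assumed): global average pooling collapses 7x7x1024 to 1024,
# then a single FC layer produces the class scores -- no giant flatten.
import torch
import torch.nn as nn

features = torch.randn(1, 1024, 7, 7)
head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),   # 1024 x 7 x 7 -> 1024 x 1 x 1
    nn.Flatten(),              # -> 1024
    nn.Linear(1024, 1000),     # the only FC layer
)
print(head(features).shape)    # torch.Size([1, 1000])
```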

Auxiliary Classifiers:

To help the deep network converge (batch normalization had not been invented yet), auxiliary classification outputs inject additional gradient at the lower layers.

Residual Networks

We find that sometimes making the net deeper causes it to underfit.

A deeper network should strictly have the capacity to do whatever a shallower one can (e.g., by making the extra layers compute the identity), but in practice those parameters are hard to learn.

So we need the residual network!

A residual block can easily learn the identity function: just set all its weights to 0.

ResNets still imitate VGG's stage-based design.
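A minimal sketch of a basic residual block, assuming PyTorch; the channel count and spatial size are illustrative:

```python
# A minimal sketch (PyTorch assumed) of a residual block: out = F(x) + x.
# If the conv weights go to zero, the block computes the identity,
# so a deeper net can at least match a shallower one.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)        # shortcut: add the input back

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)        # torch.Size([1, 64, 56, 56])
```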

ResNeXt

Adding groups improves performance at the same computational complexity.
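A minimal sketch of grouped convolution, assuming PyTorch; the channel counts and group number are illustrative:

```python
# A minimal sketch (PyTorch assumed): grouped convolution as used in ResNeXt.
# With groups=32, each filter only sees 128/32 = 4 input channels,
# so parameters and FLOPs drop roughly by the number of groups.
import torch.nn as nn

dense   = nn.Conv2d(128, 128, kernel_size=3, padding=1)
grouped = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(grouped))   # 147,584 vs 4,736
```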

MobileNets

Reduce the computational cost so the network is affordable on mobile devices
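A minimal sketch of the depthwise separable convolution that MobileNets are built on, assuming PyTorch; channel counts are illustrative:

```python
# A minimal sketch (PyTorch assumed): depthwise separable convolution.
# A per-channel 3x3 depthwise conv followed by a 1x1 pointwise conv,
# much cheaper than a full 3x3 conv with the same in/out channels.
import torch.nn as nn

standard  = nn.Conv2d(64, 128, kernel_size=3, padding=1)         # 73,856 params
separable = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64),      # depthwise: 640 params
    nn.Conv2d(64, 128, kernel_size=1),                            # pointwise: 8,320 params
)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))                          # 73,856 vs 8,960
```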

Transfer learning

We can pretrain the model on a dataset.

When applying it to a new dataset, just fine-tune or train a linear classifier on the top layers.

Freeze the main body of the net.
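A minimal sketch of this recipe, assuming PyTorch and a recent torchvision API; the backbone choice, class count, and learning rate are hypothetical:

```python
# A minimal sketch (PyTorch/torchvision assumed): freeze a pretrained backbone
# and train only a new linear classifier on top for the new dataset.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained on ImageNet
for p in model.parameters():
    p.requires_grad = False                      # freeze the main body of the net

model.fc = nn.Linear(model.fc.in_features, 10)   # new head for, e.g., a 10-class dataset
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3)  # only the head is updated
```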

This is somewhat controversial: training from scratch for 2-3x as long can reach similar results without any pretraining.
