Computer Vision-CNN

CNN(Convolutional Neural Network)

A motivating question: classification

Given a feature representation for images, how do we learn a model that distinguishes features from different classes?

The machine learning framework

1: A prediction function that maps inputs to the desired output:

f(🍎)=apple

f(🍅)=tomato

f(🐮)=cow

2:The framework

Here, there are two stages:

  • Training: given a training set {(x1,y1), ..., (xn,yn)}, estimate the prediction function f
  • Testing: apply the learned f to a new input x and output the prediction y = f(x)
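
A minimal sketch of these two stages, assuming a toy nearest-centroid classifier as the prediction function f and random placeholder data; it only illustrates the split of responsibilities, not the actual model used later.

```python
import numpy as np

def train(X_train, y_train):
    """Training: estimate the prediction function f from {(x1,y1), ..., (xn,yn)}."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    return classes, centroids          # this pair plays the role of f

def predict(f, x):
    """Testing: apply the learned f to a new sample x and output y = f(x)."""
    classes, centroids = f
    distances = np.linalg.norm(centroids - x, axis=1)
    return classes[np.argmin(distances)]

# Toy usage with random 784-dimensional "images" and 3 classes (placeholder data).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(30, 784))
y_train = rng.integers(0, 3, size=30)
f = train(X_train, y_train)
print(predict(f, rng.normal(size=784)))
```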

Neural Networks(Linear)

  • Perceptron(感知机)
  • Linear classifier: a vector of weights w and a bias b

    This is convolution!

An example: binary classification of an image

  • Each pixel of the image is an input, so for a 28x28 image we vectorize (flatten) it: x = 1x784

Here, "vectorize" simply means flattening the 2D grid of pixel values into a single row vector (for example, reading the 28x28 image row by row into 784 numbers), so that it can be multiplied by a weight vector. It is unrelated to converting images into vector graphics.

  • w is a vector of weights for each pixel: 784x1
  • b is a scalar bias per perceptron
  • result = xw + b -> (1x784)(784x1) + b -> (1x1) + b

    Notice: the product **xw** is a scalar (a dot product)
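
A NumPy sketch of this single-perceptron computation, with a random image and random weights standing in for real data:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))     # a hypothetical 28x28 grayscale image
x = image.reshape(1, 784)        # vectorize: 1x784 row vector
w = rng.normal(size=(784, 1))    # one weight per pixel: 784x1
b = 0.5                          # scalar bias

result = x @ w + b               # (1x784)(784x1) + b -> (1x1) + b
print(result.shape)              # (1, 1): the dot product xw is a scalar
```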

Multiclass (add more perceptrons)

  • x is the same as in the example above -> x = 1x784
  • W is a matrix of weights, one column per pixel/per perceptron:
    W = 784x10 (assuming 10-class classification)
  • b is a vector of biases, one per perceptron -> b = 1x10
  • result = xW + b = (1x784)(784x10) + (1x10) = 1x10 output vector
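
The same sketch extended to the 10-class case; the random W and b are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 784))          # vectorized image
W = rng.normal(size=(784, 10))    # one weight column per perceptron (class)
b = rng.normal(size=(1, 10))      # one bias per perceptron

scores = x @ W + b                # (1x784)(784x10) + (1x10) -> 1x10 output vector
print(scores.shape, scores.argmax())  # predicted class = index of the highest score
```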

Bias convenience

  • create a 'fake' feature with value 1 to represent the bias
  • Add an extra weight that can vary
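
A small NumPy check of this bias trick: appending a constant-1 feature to x and folding b into an extra row of W gives the same result (the matrices here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 784))
W = rng.normal(size=(784, 10))
b = rng.normal(size=(1, 10))

# Append a 'fake' feature with value 1, and fold the bias into an extra weight row.
x_aug = np.concatenate([x, np.ones((1, 1))], axis=1)   # 1x785
W_aug = np.concatenate([W, b], axis=0)                 # 785x10

print(np.allclose(x @ W + b, x_aug @ W_aug))           # True: same result, no separate b
```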

Then: composition:

The outputs of one perceptron are fed as inputs to the next perceptron

It's all just matrix multiplication!

Two problems

1: With only linear functions, a composition of functions is still just a single linear function (no added complexity), as the sketch below shows.

2: With linear classifiers, a small change in the input can cause a large change in the binary output, which is a problem for composing functions.
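
A quick NumPy illustration of problem 1: composing two linear layers is equivalent to a single linear layer whose weight matrix is the product of the two (shapes chosen arbitrarily for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 784))
W1 = rng.normal(size=(784, 100))   # first linear layer
W2 = rng.normal(size=(100, 10))    # second linear layer

two_layers = (x @ W1) @ W2         # composition of two linear layers
one_layer = x @ (W1 @ W2)          # a single 784x10 linear layer

print(np.allclose(two_layers, one_layer))   # True: still just one linear function
```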

What we want:

Neural Network(Non-Linearities)

MLP(Multi-layer perceptron)

  • with enough parameters, it can approximate any function
  • images as input to fully-connected neural networks are problematic: spatial correlation is local, so connecting every pixel to every unit wastes parameters, and we rarely have enough training samples to learn them all
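
A minimal two-layer MLP sketch in NumPy, assuming a ReLU non-linearity (other activations such as a sigmoid work too); the point is only that the non-linearity between layers prevents the collapse into a single linear function shown above:

```python
import numpy as np

def relu(z):
    # Element-wise non-linearity applied between the linear layers.
    return np.maximum(0, z)

rng = np.random.default_rng(0)
x = rng.random((1, 784))
W1, b1 = rng.normal(size=(784, 100)), np.zeros((1, 100))
W2, b2 = rng.normal(size=(100, 10)), np.zeros((1, 10))

hidden = relu(x @ W1 + b1)     # non-linearity keeps the composition from collapsing
output = hidden @ W2 + b2      # 1x10 class scores
print(output.shape)
```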

So we introduce sparse interactions:

  • composing layers expands the local receptive field toward a global one


    Note: after this operation, the parameterization works well when the input images are registered (aligned)

Convolution Layer
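
A naive NumPy sketch of the core convolution operation (technically cross-correlation, valid padding, single channel, no stride), just to show the sparse, local, weight-sharing interactions; real frameworks use optimized implementations:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: each output value depends only on a local patch."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((28, 28))
kernel = rng.normal(size=(3, 3))    # only 9 shared weights, reused at every location
feature_map = conv2d(image, kernel)
print(feature_map.shape)            # (26, 26)
```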


Pooling Layer: Receptive Field Size

Pooling is similar to downsampling

  • In a convolutional neural network, a pooling layer usually follows a convolution layer (max pooling is used more often than average pooling)
  • There are several kinds of pooling layers (max, average)
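
A minimal sketch of non-overlapping 2x2 max pooling in NumPy, showing how it downsamples a feature map; the window size is an illustrative assumption:

```python
import numpy as np

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the maximum in each size x size window."""
    H, W = feature_map.shape
    H, W = H - H % size, W - W % size                 # crop so the map divides evenly
    blocks = feature_map[:H, :W].reshape(H // size, size, W // size, size)
    return blocks.max(axis=(1, 3))

rng = np.random.default_rng(0)
fmap = rng.random((26, 26))
print(max_pool(fmap).shape)    # (13, 13): roughly a 2x downsampling
```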

Local Contrast Normalization
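
A rough sketch of one common form of local contrast normalization, subtracting the local mean and dividing by the local standard deviation in a small neighborhood; the 3x3 window and epsilon are illustrative assumptions, not necessarily the exact formulation used in the course:

```python
import numpy as np

def local_contrast_normalize(feature_map, size=3, eps=1e-5):
    """Normalize each value by the mean and std of its size x size neighborhood."""
    H, W = feature_map.shape
    pad = size // 2
    padded = np.pad(feature_map, pad, mode='reflect')
    out = np.zeros_like(feature_map)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + size, j:j + size]
            out[i, j] = (feature_map[i, j] - patch.mean()) / (patch.std() + eps)
    return out

rng = np.random.default_rng(0)
print(local_contrast_normalize(rng.random((8, 8))).shape)   # (8, 8)
```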

