A Quick Read of the YOLO Paper (Object Detection)

Preface

YOLOv1 is the opening work of the YOLO series. Most later versions improve on the previous one, so reading them in order is a good way to see how the authors think.

YOLO

Paper link

Abstract

First, let's look at what makes YOLO distinctive.

A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation.

  1. A single network model handles both bounding-box prediction and classification.

Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors.

  1. Extremely fast and real-time: the base model runs at 45 frames per second, and the smaller Fast YOLO even reaches 155 frames per second.

Introduction

Using our system, you only look once (YOLO) at an image to predict what objects are present and where they are.

In the YOLO system you only look at the image once to predict both locations and classes; detection happens in a single step.

Three Advantages

First, YOLO is extremely fast. Since we frame detection as a regression problem we don't need a complex pipeline.

  1. Extremely fast

Second, YOLO reasons globally about the image when making predictions.

  1. It reasons over the entire image

Third, YOLO learns generalizable representations of objects.

  1. Strong generalization

Unified Detection

This means our network reasons globally about the full image and all the objects in the image. The YOLO design enables end-to-end training and realtime speeds while maintaining high average precision.

In other words, the network reasons globally over the whole image and every object in it, trains end to end, and still runs in real time while keeping average precision high.

Next comes the prediction process.

Our system divides the input image into an S × S grid. If the center of an object falls into a grid cell, that grid cell is responsible for detecting that object.

Before prediction, the input image is divided into an S × S grid, and the grid cell containing an object's center is responsible for detecting that object.

Each grid cell predicts B bounding boxes and confidence scores for those boxes. These confidence scores reflect how confident the model is that the box contains an object and also how accurate it thinks the box is that it predicts. Formally we define confidence as Pr(Object) ∗ IOUtruth pred . If no object exists in that cell, the confidence scores should be zero. Otherwise we want the confidence score to equal the intersection over union (IOU) between the predicted box and the ground truth.

Each grid cell predicts B bounding boxes (bbox), each with a confidence score defined as Pr(Object) × IOU between the predicted box and the ground truth; if no object falls in the cell, the confidence should be zero.
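To make the confidence term concrete, here is a minimal sketch of the IOU computation, assuming boxes in (x1, y1, x2, y2) corner format (the paper's boxes are center-based, so a conversion step is implied):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes in (x1, y1, x2, y2) corner format."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two partially overlapping boxes
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```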

Each bounding box consists of 5 predictions: x, y, w, h, and confidence. The (x, y) coordinates represent the center of the box relative to the bounds of the grid cell.

Each bbox consists of five predictions: x, y, w, h and a confidence value. (x, y) is the box center relative to its grid cell, w and h are relative to the whole image, and the confidence target is the IOU described above.
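As a minimal sketch of what these conventions mean for a training target (S = 7 assumed; the helper and its exact rounding are mine for illustration, not from the paper):

```python
def encode_box(cx, cy, w, h, img_w, img_h, S=7):
    """Encode a ground-truth box (center cx, cy and size w, h, in pixels)
    into YOLO-style targets on an S x S grid."""
    # Grid cell whose area contains the object's center
    col = min(int(cx / img_w * S), S - 1)
    row = min(int(cy / img_h * S), S - 1)
    # (x, y): center offset inside that cell, each in [0, 1]
    x = cx / img_w * S - col
    y = cy / img_h * S - row
    # (w, h): box size relative to the whole image, each in [0, 1]
    return row, col, (x, y, w / img_w, h / img_h)

# A 100x80 box centered at (224, 224) in a 448x448 image falls in cell (3, 3)
print(encode_box(224, 224, 100, 80, 448, 448))
```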

Each grid cell also predicts C conditional class probabilities, Pr(Classi|Object). These probabilities are conditioned on the grid cell containing an object. We only predict one set of class probabilities per grid cell, regardless of the number of boxes B.

Each grid cell also predicts one set of C conditional class probabilities Pr(Class_i | Object), no matter how many boxes B it predicts.

At test time we multiply the conditional class probabilities and the individual box confidence predictions,

which gives us class-specific confidence scores for each box. These scores encode both the probability of that class appearing in the box and how well the predicted box fits the object.

So the class-specific confidence of a box multiplies the conditional class probability by the box confidence: Pr(Class_i | Object) × Pr(Object) × IOU.
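Written out, the class-specific score for each box is:

$$\Pr(\text{Class}_i \mid \text{Object}) \times \Pr(\text{Object}) \times \text{IOU}^{\text{truth}}_{\text{pred}} = \Pr(\text{Class}_i) \times \text{IOU}^{\text{truth}}_{\text{pred}}$$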

For evaluating YOLO on PASCAL VOC, we use S = 7,B = 2. PASCAL VOC has 20 labelled classes so C = 20. Our final prediction is a 7 × 7 × 30 tensor.

Take PASCAL VOC as the example: the image is divided into a 7 × 7 grid (that is S × S), and each cell does two things: it predicts 2 bounding boxes (B), each with 5 values, and one set of 20 class probabilities (C). The final output is therefore a 7 × 7 × (2 × 5 + 20) = 7 × 7 × 30 tensor, i.e. S × S × (B × 5 + C).
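A minimal NumPy sketch of how that 7 × 7 × 30 output could be turned into class-specific scores at test time; the channel layout (the 2 × 5 box values first, then the 20 class probabilities) is an assumption for illustration, not something fixed by the paper text:

```python
import numpy as np

S, B, C = 7, 2, 20
pred = np.random.rand(S, S, B * 5 + C)              # stand-in for the network output

boxes      = pred[..., :B * 5].reshape(S, S, B, 5)  # (x, y, w, h, confidence) per box
box_conf   = boxes[..., 4]                          # Pr(Object) * IOU, shape (S, S, B)
class_prob = pred[..., B * 5:]                      # Pr(Class_i | Object), shape (S, S, C)

# Class-specific confidence per box: Pr(Class_i | Object) * Pr(Object) * IOU
class_scores = box_conf[..., None] * class_prob[:, :, None, :]  # shape (S, S, B, C)
print(class_scores.shape)                           # (7, 7, 2, 20)
```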

Network Design

Our network has 24 convolutional layers followed by 2 fully connected layers.

Fast YOLO uses a neural network with fewer convolutional layers (9 instead of 24) and fewer filters in those layers

The model uses 24 convolutional layers followed by 2 fully connected layers; the smaller Fast YOLO keeps the same idea but uses only 9 convolutional layers.
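As a rough sketch of the tail end of that design (not the full 24-layer network, and only an illustrative PyTorch snippet rather than the original implementation): the final 7 × 7 × 1024 convolutional feature map is flattened, passed through the two fully connected layers (the hidden one has 4096 units in the paper's architecture figure), and reshaped into the S × S × 30 prediction tensor.

```python
import torch
import torch.nn as nn

S, B, C = 7, 2, 20

# Detection head only: assumes a 7 x 7 x 1024 feature map from the conv backbone.
head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(7 * 7 * 1024, 4096),
    nn.LeakyReLU(0.1),
    nn.Linear(4096, S * S * (B * 5 + C)),  # final layer is linear (no activation)
)

features = torch.randn(1, 1024, 7, 7)      # stand-in for the backbone output
out = head(features).view(-1, S, S, B * 5 + C)
print(out.shape)                            # torch.Size([1, 7, 7, 30])
```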

The overall prediction pipeline is simple: the key is a well-trained model. An image goes in, an S × S × (B × 5 + C) tensor comes out, and straightforward post-processing turns that tensor into the predicted bboxes and classes.

Training

Training essentially comes down to three choices: the input resolution, the activation function, and the loss function.

Input Resolution

Detection often requires fine-grained visual information so we increase the input resolution of the network from 224 × 224 to 448 × 448.

Detection training uses 448 × 448 input images, doubled from the original 224 × 224 resolution.

Activation Function

We use a linear activation function for the final layer and all other layers use the following leaky rectified linear activation:

The final layer uses a linear activation; all other layers use the leaky ReLU below.
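The activation referred to in the quote, with the paper's 0.1 slope for negative inputs:

$$\phi(x) = \begin{cases} x, & \text{if } x > 0 \\ 0.1x, & \text{otherwise} \end{cases}$$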

Loss Function

$\lambda_{coord} = 5$ increases the weight of the coordinate terms for boxes responsible for an object, while $\lambda_{noobj} = 0.5$ decreases the weight of the confidence terms for boxes that contain no object.
$S^2$ is the number of grid cells, and $B$ is the number of boxes predicted per cell.
$\mathbb{1}^{obj}_{ij}$ indicates whether the $j$-th box of the $i$-th cell is responsible for an object: 1 if it is, 0 otherwise.
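Putting these symbols together, the sum-squared-error loss from the paper is:

$$
\begin{aligned}
&\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}^{obj}_{ij}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right]\\
+\;&\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}^{obj}_{ij}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right]\\
+\;&\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}^{obj}_{ij}\left(C_i-\hat{C}_i\right)^2
+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}^{noobj}_{ij}\left(C_i-\hat{C}_i\right)^2\\
+\;&\sum_{i=0}^{S^2}\mathbb{1}^{obj}_{i}\sum_{c\in\text{classes}}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
$$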

The loss is designed so that cells responsible for an object converge quickly while background cells are pushed toward zero confidence; the square roots on w and h make the same absolute size error count for less on a large box than on a small one (for example, $\sqrt{100}-\sqrt{95}\approx 0.25$ while $\sqrt{10}-\sqrt{5}\approx 0.93$).

Limitations of YOLO

This spatial constraint limits the number of nearby objects that our model can predict. Our model struggles with small objects that appear in groups, such as flocks of birds.

  1. Each grid cell can only predict a limited number of nearby objects
  2. It struggles with small objects that appear in groups

  3. The learned bounding-box features are relatively coarse

  4. Localization errors are the main source of error

Final Words

The paper contains much more, including comparisons with other detection models and the experiments; this post only walks through the core ideas of the model. Corrections are very welcome!
