Object Detection: A Brief Read of the YOLO Paper

Preface

YOLOv1 is the opening work of the YOLO series. Many later versions improve on the previous one, so reading them in order shows how the authors reasoned through each problem.

YOLO

Paper link: "You Only Look Once: Unified, Real-Time Object Detection"

Abstract

First, let's look at YOLO's defining features.

A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation.

  1. A single network model performs both bounding-box prediction and classification

Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors.

  1. Very fast and real-time: the base model runs at 45 frames per second, and the smaller Fast YOLO reaches 155 frames per second

Introduction

Using our system, you only look once (YOLO) at an image to predict what objects are present and where they are.

In the YOLO system, a single look at the image is enough to predict both locations and classes: one step, done.

Three major advantages

First, YOLO is extremely fast. Since we frame detection as a regression problem we don't need a complex pipeline.

  1. Extremely fast

Second, YOLO reasons globally about the image when making predictions.

  1. It reasons globally about the whole image

Third, YOLO learns generalizable representations of objects.

  1. Strong generalization ability

Unified Detection

This means our network reasons globally about the full image and all the objects in the image. The YOLO design enables end-to-end training and realtime speeds while maintaining high average precision.

In other words, the network reasons globally over the full image and all the objects in it, and the design allows end-to-end training at real-time speeds while keeping average precision high.

Next, the prediction process.

Our system divides the input image into an S × S grid. If the center of an object falls into a grid cell, that grid cell is responsible for detecting that object.

Before prediction, the input image is divided into an S × S grid; the cell that contains an object's center is the one responsible for detecting that object.

Each grid cell predicts B bounding boxes and confidence scores for those boxes. These confidence scores reflect how confident the model is that the box contains an object and also how accurate it thinks the box is that it predicts. Formally we define confidence as Pr(Object) ∗ IOUtruth pred . If no object exists in that cell, the confidence scores should be zero. Otherwise we want the confidence score to equal the intersection over union (IOU) between the predicted box and the ground truth.

Each grid cell predicts B bounding boxes (bboxes), each with a confidence score defined as Pr(Object) × IOU between the predicted box and the ground truth. If no object falls in a cell, the target confidence is 0; otherwise it should equal the IOU.
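Since the confidence target relies on IOU, here is a minimal sketch (a generic helper, not the authors' code) of how IOU between a predicted box and the ground truth can be computed for corner-format boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.143
```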

Each bounding box consists of 5 predictions: x, y, w, h, and confidence. The (x, y) coordinates represent the center of the box relative to the bounds of the grid cell.

Each bbox consists of five values: x, y, w, h, and the confidence; (x, y) is the box center relative to the bounds of its grid cell, and the confidence is the Pr(Object) × IOU value above.

Each grid cell also predicts C conditional class probabilities, Pr(Classi|Object). These probabilities are conditioned on the grid cell containing an object. We only predict one set of class probabilities per grid cell, regardless of the number of boxes B.

Each grid cell also predicts one set of C conditional class probabilities, Pr(Class_i|Object), shared by all B boxes in that cell.

At test time we multiply the conditional class probabilities and the individual box confidence predictions,

which gives us class-specific confidence scores for each box. These scores encode both the probability of that class appearing in the box and how well the predicted box fits the object.

At test time the class-specific confidence is the conditional class probability multiplied by the box confidence: Pr(Class_i|Object) × Pr(Object) × IOU = Pr(Class_i) × IOU.

For evaluating YOLO on PASCAL VOC, we use S = 7,B = 2. PASCAL VOC has 20 labelled classes so C = 20. Our final prediction is a 7 × 7 × 30 tensor.

Taking PASCAL VOC as an example: the image is divided into 7 × 7 grid cells (S × S), and each cell does two things. First, it predicts 2 bounding boxes (B); second, it predicts 20 class probabilities (C). The final output is therefore a 7 × 7 × (2 × 5 + 20) = 7 × 7 × 30 tensor, i.e. S × S × (B × 5 + C).
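To make the tensor layout concrete, here is a minimal NumPy sketch of how that 7 × 7 × 30 output can be split apart and turned into the class-specific confidence scores described above. This is not the authors' code; the channel ordering (the B boxes' 5 values first, then the C class probabilities) is an assumption for illustration.

```python
import numpy as np

S, B, C = 7, 2, 20                      # PASCAL VOC settings from the paper
pred = np.random.rand(S, S, B * 5 + C)  # stand-in for a real network output

# Assumed layout: first B*5 channels are the boxes (x, y, w, h, confidence),
# the remaining C channels are Pr(Class_i | Object) for the whole cell.
boxes = pred[..., :B * 5].reshape(S, S, B, 5)
class_probs = pred[..., B * 5:]          # shape (S, S, C)

# Class-specific confidence per box:
# Pr(Class_i | Object) * Pr(Object) * IOU = Pr(Class_i) * IOU
box_conf = boxes[..., 4]                                          # (S, S, B)
class_scores = class_probs[:, :, None, :] * box_conf[..., None]   # (S, S, B, C)
print(class_scores.shape)  # (7, 7, 2, 20)
```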

Network Design

Our network has 24 convolutional layers followed by 2 fully connected layers.

Fast YOLO uses a neural network with fewer convolutional layers (9 instead of 24) and fewer filters in those layers

The model uses 24 convolutional layers followed by 2 fully connected layers; the smaller Fast YOLO replaces this with only 9 convolutional layers (and fewer filters per layer).
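As a rough illustration only (a toy stand-in, not the 24-layer architecture from the paper), the shape of such a model can be sketched in PyTorch: a convolutional stack feeding two fully connected layers that emit the S × S × (B × 5 + C) prediction tensor.

```python
import torch
import torch.nn as nn

S, B, C = 7, 2, 20  # PASCAL VOC settings

class TinyYOLO(nn.Module):
    """Toy stand-in: a small conv stack + 2 fully connected layers -> S*S*(B*5+C)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.AdaptiveAvgPool2d(S),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * S * S, 512), nn.LeakyReLU(0.1),
            nn.Linear(512, S * S * (B * 5 + C)),  # final layer: linear activation
        )

    def forward(self, x):
        return self.head(self.features(x)).view(-1, S, S, B * 5 + C)

out = TinyYOLO()(torch.randn(1, 3, 448, 448))
print(out.shape)  # torch.Size([1, 7, 7, 30])
```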

The overall prediction flow (illustrated by a pipeline figure in the paper): the key is simply to train a good model. An image goes in, an S × S × (B × 5 + C) tensor comes out, and straightforward post-processing turns that tensor into the predicted bboxes and classes.

Training

Training essentially comes down to three choices: the input resolution, the activation function, and the loss function.

Input resolution

Detection often requires fine-grained visual information so we increase the input resolution of the network from 224 × 224 to 448 × 448.

For detection, the input resolution is raised from 224 × 224 to 448 × 448.

Activation function

We use a linear activation function for the final layer and all other layers use the following leaky rectified linear activation:

The final layer uses a linear activation function, while all other layers use Leaky ReLU.
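The leaky rectified linear activation referred to above is, per the paper:

$$
\phi(x) =
\begin{cases}
x, & \text{if } x > 0 \\
0.1x, & \text{otherwise}
\end{cases}
$$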

Loss function

$\lambda_{coord} = 5$ increases the weight on boxes that are responsible for an object, while $\lambda_{noobj} = 0.5$ decreases the weight on boxes that are not.
$S^2$ is the number of grid cells, and $B$ is the number of predicted boxes per cell.
$\mathbb{1}^{obj}_{ij}$ indicates whether the $j$-th predicted box in the $i$-th cell is responsible for an object: 1 if it is, 0 otherwise.
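Putting these symbols together, the full sum-squared error loss from the paper is:

$$
\begin{aligned}
\mathcal{L} ={} & \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}^{obj}_{ij}
  \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
& + \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}^{obj}_{ij}
  \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
& + \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}^{obj}_{ij} \left(C_i - \hat{C}_i\right)^2
  + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}^{noobj}_{ij} \left(C_i - \hat{C}_i\right)^2 \\
& + \sum_{i=0}^{S^2} \mathbb{1}^{obj}_{i} \sum_{c \in \text{classes}} \left(p_i(c) - \hat{p}_i(c)\right)^2
\end{aligned}
$$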

The loss is designed so that predictors responsible for an object converge quickly while background cells with no object are suppressed; taking the square roots of w and h means the same absolute error counts for less in a large box than in a small one.

Limitations of YOLO

This spatial constraint limits the number of nearby objects that our model can predict. Our model struggles with small objects that appear in groups, such as flocks of birds.

  1. The number of nearby objects the model can predict is limited
  2. It struggles with small objects that appear in groups, such as flocks of birds

Since our model learns to predict bounding boxes from data, it struggles to generalize to objects in new or unusual aspect ratios or configurations. Our model also uses relatively coarse features for predicting bounding boxes since our architecture has multiple downsampling layers from the input image.

  1. The features used for box prediction are relatively coarse

Finally, while we train on a loss function that approximates detection performance, our loss function treats errors the same in small bounding boxes versus large bounding boxes. A small error in a large box is generally benign but a small error in a small box has a much greater effect on IOU. Our main source of error is incorrect localizations.

  1. Localization error is the main source of error

Final Thoughts

The paper contains much more, including comparisons against other detection models and the experiments; this post only walks through how the model works. If anything here is wrong, corrections are very welcome!
