Table of Contents
- Preface
- 1. Introduction to Automotive 360° Surround View
- 2. Preparation
- 3. Algorithm Pipeline
  - [3.1 Perspective Transformation and Checkerboard Correction](#31-perspective-transformation-and-checkerboard-correction)
  - [3.2 Corner Perspective and Correction](#32-corner-perspective-and-correction)
  - [3.3 Image Stitching](#33-image-stitching)
Preface
In this era of information and intelligence, the automobile has become an indispensable part of daily life, and automotive electronics are advancing rapidly. In this post, we will explore how to build a 360° surround-view system for a car with OpenCV, applying computer vision techniques to make driving safer.
1. Introduction to Automotive 360° Surround View
The automotive 360° surround-view system is a driver-assistance technology that gives the driver a blind-spot-free view of the vehicle's surroundings. Cameras and sensors mounted at key positions around the vehicle capture images and data, which are fused into a single panoramic view that the driver can inspect on the in-car screen or a smartphone app.
A typical system consists of four fisheye cameras and four ultrasonic sensors, mounted at the front, rear, left, and right of the vehicle. Each camera captures a wide field of view in its direction, and a central processing unit stitches these images into one continuous 360° view, making obstacles and nearby vehicles easy to spot. A typical installation layout is shown below:
Cars equipped with L2 driver-assistance systems often carry additional sensors such as millimeter-wave radar and ultrasonic radar, which measure the distance, speed, and heading of surrounding vehicles. Combined with the surround-view system, these sensors enable features such as automatic parking and collision avoidance, significantly improving both safety and convenience.
The technology also has broad prospects. In electric vehicles it can help drivers locate nearby charging facilities and plan charging routes more effectively; in autonomous-driving research it supplies the all-around visual information that gives self-driving systems a more complete and precise perception of their environment.
2. Preparation
For this project, calibration markers are laid out in a ring around the car; they are used for image correction and stitching, as shown below:
Front:
Rear:
Left:
Right:
The four cameras used here are Hikvision fisheye cameras with built-in correction algorithms, so the fisheye distortion is already removed in-camera. If your cameras do not do this, you must first undistort the fisheye images yourself.
3. Algorithm Pipeline
3.1 Perspective Transformation and Checkerboard Correction
A perspective transformation is a geometric transformation commonly used in computer vision and image processing. It models how a planar scene appears under a change of viewpoint: an image on one plane is mapped onto another plane in a way that preserves straight lines (though not, in general, parallelism), changing the shape and size of the image to produce a 3D perspective effect. For details, see my earlier post:
Taking the front camera as an example, first define the four source points of the perspective transform; the points chosen here are shown below.
The four point coordinates in the image are (312, 260), (1024, 252), (1112, 664), and (237, 669).
The code is as follows:
```python
import cv2
import numpy as np

image = cv2.imread(r'D:\qian.png')
# Four corners of the marker region in the source image, and the
# rectangle they should map to in the bird's-eye view
pts_src = np.float32([[312, 260], [1024, 252], [1112, 664], [237, 669]])
pts_dst = np.float32([[0, 0], [700, 0], [700, 400], [0, 400]])
# Compute the perspective transform matrix
matrix = cv2.getPerspectiveTransform(pts_src, pts_dst)
# Apply the perspective transform
birdseye_view = cv2.warpPerspective(image, matrix, (700, 400))
cv2.imshow("birdseye_view", birdseye_view)
cv2.waitKey(0)
```
Result:
The image has been warped into a bird's-eye view, but the fisheye imaging leaves noticeable distortion, which must be corrected or it will degrade the stitching later. From the bird's-eye view, the intact checkerboard region spans 29 rows and 15 columns of squares, i.e. 28 × 14 inner corners.
Run the following code to correct the image:
```python
# 28 x 14 inner corners (29 x 15 squares)
CHECKERBOARD = (28, 14)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# 3D corner points in world coordinates (z is always 0)
objpoints = []
# 2D points in pixel coordinates
imgpoints = []
# Define the world-coordinate corners from the checkerboard grid
objp = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

gray = cv2.cvtColor(birdseye_view, cv2.COLOR_BGR2GRAY)
# Find the checkerboard corners
ret, corners = cv2.findChessboardCorners(
    gray, CHECKERBOARD,
    flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK +
          cv2.CALIB_CB_NORMALIZE_IMAGE)
# Refine the detected corners with cornerSubPix
if ret:
    objpoints.append(objp)
    corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    imgpoints.append(corners2)

# Calibrate from the detected corners, then undistort the bird's-eye view
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
undistorted_image = cv2.undistort(birdseye_view, mtx, dist)
print(mtx)
print(dist)
cv2.imshow('undistorted_image', undistorted_image)
# Show the uncorrected view for comparison
cv2.imshow('birdseye_view', birdseye_view)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Here birdseye_view is the result of the perspective transform above.
Result:
This gives a bird's-eye view that can be used for image stitching.
3.2 Corner Perspective and Correction
The front, rear, left, and right images can all be processed with the pipeline above to obtain their four bird's-eye views. Note, however, that the four corner regions of a fisheye image cannot share one set of correction parameters with the main image body; a single set gives poor results, so each corner must be calibrated and corrected separately, as shown below:
In practice it looks like this:
First obtain the four corner points, e.g. pts_src = np.float32([[980, 252], [1173, 254], [1256, 615], [1057, 667]]), then apply the perspective transform and correction following the steps above. The final result:
3.3 Image Stitching
After projecting and correcting the four main views and the four corners, they must be stitched together: front, rear, left, and right, plus the top-left, top-right, bottom-left, and bottom-right corners, with the left and right views rotated into place, yielding one large composite image. Due to time constraints I did not correct and project the bottom-left and bottom-right corners, nor remove the black borders from the top-left and top-right. The full code is below:
```python
import cv2
import numpy as np

# Top-down image of the car, pasted into the center of the composite
car = cv2.imread(r"C:\Users\zhw\Downloads\mmagic-main\1729039419395.png")
car = cv2.rotate(car, cv2.ROTATE_90_COUNTERCLOCKWISE)
car = cv2.resize(car, (400, 400))

# Intrinsics and distortion coefficients from the checkerboard calibration
# of each view (qian = front, hou = rear, zuo = left, you = right,
# zuoshang = top-left, youshang = top-right)
qian_mtx = np.array([[689.01841642, 0., 624.04787349],
                     [0., 695.06335924, 385.22289159],
                     [0., 0., 1.]])
qian_dist = np.array([[-3.06149376e-01, 1.08057585e-01, -5.94786969e-03,
                       -2.87823197e-04, -1.94291355e-02]])
hou_mtx = np.array([[2.82069696e+03, 0.00000000e+00, 3.82543476e+02],
                    [0.00000000e+00, 3.62841759e+03, 3.23462838e+02],
                    [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
hou_dist = np.array([[-5.68505258e+00, 5.37570484e+01, -1.46297539e-01,
                      -1.83247222e-02, -4.28925377e+02]])
zuo_mtx = np.array([[760.60648809, 0., 427.71915249],
                    [0., 720.62542545, 380.17260714],
                    [0., 0., 1.]])
zuo_dist = np.array([[-0.3514514, 0.17524327, -0.04839918, -0.01099606, -0.08950693]])
you_mtx = np.array([[832.73342819, 0., 307.63099778],
                    [0., 784.17044336, 256.34799819],
                    [0., 0., 1.]])
you_dist = np.array([[-0.36298936, 0.01787147, 0.00133558, 0.0346145, 0.04992992]])
zuoshang_mtx = np.array([[3.82145557e+03, 0.00000000e+00, 2.44425106e+02],
                         [0.00000000e+00, 2.29300341e+03, 2.66405576e+02],
                         [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
zuoshang_dist = np.array([[1.71313182e+00, -4.88648556e+02, 1.13798417e-01,
                           5.82589369e-01, 2.24893539e+04]])
youshang_mtx = np.array([[4.81765751e+03, 0.00000000e+00, 3.51007864e+02],
                         [0.00000000e+00, 2.55761498e+03, 2.42995129e+02],
                         [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
youshang_dist = np.array([[5.48636551e-01, -2.92814591e+02, 1.43689246e-01,
                           -7.04739402e-01, 7.56981142e+03]])

# Source quadrilaterals picked in each camera image
pts_src_qian = np.float32([[315, 249], [1022, 240], [1108, 668], [234, 672]])
pts_src_hou = np.float32([[282, 205], [1009, 208], [1105, 644], [196, 621]])
pts_src_you = np.float32([[292, 145], [935, 161], [1043, 648], [172, 661]])
pts_src_zuo = np.float32([[298, 170], [948, 155], [1058, 653], [190, 660]])
pts_src_zuo_top = np.float32([[100, 266], [297, 252], [220, 655], [20, 614]])
pts_src_zuo_bottom = np.float32([[105, 163], [296, 169], [213, 484], [0, 457]])
pts_src_you_top = np.float32([[980, 252], [1173, 254], [1256, 615], [1057, 667]])
pts_src_you_bottom = np.float32([[935, 163], [1136, 166], [1239, 450], [1016, 471]])
pts_dst = np.float32([[0, 0], [800, 0], [800, 550], [0, 550]])
pts_dst_small = np.float32([[0, 0], [550, 0], [550, 550], [0, 550]])

img_qian = cv2.imread(r"D:\2222/" + "qian.png")
img_hou = cv2.imread(r"D:\2222/" + "hou.png")
img_zuo = cv2.imread(r"D:\2222/" + "zuo.png")
img_you = cv2.imread(r"D:\2222/" + "you.png")

# Front view: warp to bird's-eye, undistort, then halve the size
matrix_qian = cv2.getPerspectiveTransform(pts_src_qian, pts_dst)
birdseye_view_qian = cv2.warpPerspective(img_qian, matrix_qian, (800, 550))
undistorted_image_qian = cv2.undistort(birdseye_view_qian, qian_mtx, qian_dist)
undistorted_image_qian = cv2.resize(undistorted_image_qian, (0, 0), fx=0.5, fy=0.5)

# Top-left corner (taken from the front camera image)
matrix_zuo_top = cv2.getPerspectiveTransform(pts_src_zuo_top, pts_dst_small)
birdseye_view_zuo_top = cv2.warpPerspective(img_qian, matrix_zuo_top, (550, 550))
undistorted_image_zuo_top = cv2.undistort(birdseye_view_zuo_top, zuoshang_mtx, zuoshang_dist)
undistorted_image_zuo_top = cv2.resize(undistorted_image_zuo_top, (0, 0), fx=0.5, fy=0.5)

# Bottom-left corner (taken from the left camera image; uncalibrated)
matrix_zuo_bottom = cv2.getPerspectiveTransform(pts_src_zuo_bottom, pts_dst_small)
birdseye_view_zuo_bottom = cv2.warpPerspective(img_zuo, matrix_zuo_bottom, (550, 550))
undistorted_image_zuo_bottom = cv2.undistort(birdseye_view_zuo_bottom, zuo_mtx, zuo_dist)
undistorted_image_zuo_bottom = cv2.rotate(undistorted_image_zuo_bottom, cv2.ROTATE_90_COUNTERCLOCKWISE)
undistorted_image_zuo_bottom = cv2.resize(undistorted_image_zuo_bottom, (0, 0), fx=0.5, fy=0.5)

# Top-right corner
matrix_you_top = cv2.getPerspectiveTransform(pts_src_you_top, pts_dst_small)
birdseye_view_you_top = cv2.warpPerspective(img_you, matrix_you_top, (550, 550))
undistorted_image_you_top = cv2.undistort(birdseye_view_you_top, youshang_mtx, youshang_dist)
undistorted_image_you_top = cv2.resize(undistorted_image_you_top, (0, 0), fx=0.5, fy=0.5)

# Bottom-right corner (uncalibrated)
matrix_you_bottom = cv2.getPerspectiveTransform(pts_src_you_bottom, pts_dst_small)
birdseye_view_you_bottom = cv2.warpPerspective(img_you, matrix_you_bottom, (550, 550))
undistorted_image_you_bottom = cv2.undistort(birdseye_view_you_bottom, you_mtx, you_dist)
undistorted_image_you_bottom = cv2.resize(undistorted_image_you_bottom, (0, 0), fx=0.5, fy=0.5)

# Rear view: rotate 180 degrees so it reads correctly in the composite
matrix_hou = cv2.getPerspectiveTransform(pts_src_hou, pts_dst)
birdseye_view_hou = cv2.warpPerspective(img_hou, matrix_hou, (800, 550))
undistorted_image_hou = cv2.undistort(birdseye_view_hou, hou_mtx, hou_dist)
rotated_180_hou = cv2.rotate(undistorted_image_hou, cv2.ROTATE_180)
rotated_180_hou = cv2.resize(rotated_180_hou, (0, 0), fx=0.5, fy=0.5)

# Left view: rotate 90 degrees counterclockwise
matrix_zuo = cv2.getPerspectiveTransform(pts_src_zuo, pts_dst)
birdseye_view_zuo = cv2.warpPerspective(img_zuo, matrix_zuo, (800, 550))
undistorted_image_zuo = cv2.undistort(birdseye_view_zuo, zuo_mtx, zuo_dist)
rotated_90_counterclockwise_zuo = cv2.rotate(undistorted_image_zuo, cv2.ROTATE_90_COUNTERCLOCKWISE)
rotated_90_counterclockwise_zuo = cv2.resize(rotated_90_counterclockwise_zuo, (0, 0), fx=0.5, fy=0.5)

# Right view: rotate 90 degrees clockwise
matrix_you = cv2.getPerspectiveTransform(pts_src_you, pts_dst)
birdseye_view_you = cv2.warpPerspective(img_you, matrix_you, (800, 550))
undistorted_image_you = cv2.undistort(birdseye_view_you, you_mtx, you_dist)
rotated_90_clockwise_you = cv2.rotate(undistorted_image_you, cv2.ROTATE_90_CLOCKWISE)
rotated_90_clockwise_you = cv2.resize(rotated_90_clockwise_you, (0, 0), fx=0.5, fy=0.5)

# Paste the tiles into a 950 x 950 canvas: edges, center car, then corners
img_new = np.zeros([950, 950, 3]).astype(np.uint8)
img_new[0:275, 275:675] = undistorted_image_qian
img_new[275:675, 0:275] = rotated_90_counterclockwise_zuo
img_new[275:675, 675:950] = rotated_90_clockwise_you
img_new[675:950, 275:675] = rotated_180_hou
img_new[275:675, 275:675] = car
img_new[0:275, 0:275] = undistorted_image_zuo_top
img_new[675:950, 0:275] = undistorted_image_zuo_bottom
img_new[0:275, 675:950] = undistorted_image_you_top
img_new[675:950, 675:950] = undistorted_image_you_bottom

# Show the stitched surround view
cv2.imshow('surround_view', img_new)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Final result:
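Because the tiles are simply pasted side by side, visible seams remain at the tile borders. One simple improvement (not in the code above) is to cross-fade adjacent tiles over a small overlap; a minimal horizontal-seam sketch, with the function name and overlap width chosen for illustration:

```python
import numpy as np

def blend_horizontal(left, right, overlap):
    """Linearly cross-fade two same-height tiles across `overlap` columns.
    Production surround-view systems instead blend along the diagonal
    seams between adjacent camera views, but the idea is the same."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # 1 -> 0 weights
    seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1 - alpha)
    return np.hstack([left[:, :-overlap],
                      seam.astype(left.dtype),
                      right[:, overlap:]])
```

Applied at each border of the composite (with the tiles warped so that neighbors genuinely overlap), this removes the hard brightness jumps between cameras at the cost of slight ghosting in the overlap zone.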