OpenCV solvePnP: estimating the pose of an object
The pose of an object means its position and orientation.
Given an array of known image points, the corresponding array of 3D world (model) points, the camera intrinsic matrix, and the distortion coefficients, solvePnP solves for the pose, i.e. the rotation vector and the translation vector.
For example:
```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

cv::Mat im = cv::imread("headPose.jpg");
// 2D image points. If you change the image, you need to change vector
std::vector<cv::Point2d> image_points;
image_points.push_back( cv::Point2d(359, 391) ); // Nose tip
image_points.push_back( cv::Point2d(399, 561) ); // Chin
image_points.push_back( cv::Point2d(337, 297) ); // Left eye left corner
image_points.push_back( cv::Point2d(513, 301) ); // Right eye right corner
image_points.push_back( cv::Point2d(345, 465) ); // Left Mouth corner
image_points.push_back( cv::Point2d(453, 469) ); // Right mouth corner
// 3D model points.
std::vector<cv::Point3d> model_points;
model_points.push_back(cv::Point3d(0.0f, 0.0f, 0.0f)); // Nose tip
model_points.push_back(cv::Point3d(0.0f, -330.0f, -65.0f)); // Chin
model_points.push_back(cv::Point3d(-225.0f, 170.0f, -135.0f)); // Left eye left corner
model_points.push_back(cv::Point3d(225.0f, 170.0f, -135.0f)); // Right eye right corner
model_points.push_back(cv::Point3d(-150.0f, -150.0f, -125.0f)); // Left Mouth corner
model_points.push_back(cv::Point3d(150.0f, -150.0f, -125.0f)); // Right mouth corner
// Camera internals
double focal_length = im.cols; // Approximate focal length.
Point2d center = cv::Point2d(im.cols/2,im.rows/2);
cv::Mat camera_matrix = (cv::Mat_<double>(3,3) << focal_length, 0, center.x, 0 , focal_length, center.y, 0, 0, 1);
cv::Mat dist_coeffs = cv::Mat::zeros(4,1,cv::DataType<double>::type); // Assuming no lens distortion
cout << "Camera Matrix " << endl << camera_matrix << endl ;
// Output rotation and translation
cv::Mat rotation_vector; // Rotation in axis-angle form
cv::Mat translation_vector;
// Solve for pose
cv::solvePnP(model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);
// Project a 3D point (0, 0, 1000.0) onto the image plane.
// We use this to draw a line sticking out of the nose
vector<Point3d> nose_end_point3D;
vector<Point2d> nose_end_point2D;
nose_end_point3D.push_back(Point3d(0,0,1000.0));
projectPoints(nose_end_point3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs, nose_end_point2D);
for(int i=0; i < image_points.size(); i++)
{
circle(im, image_points[i], 3, Scalar(0,0,255), -1);
}
cv::line(im,image_points[0], nose_end_point2D[0], cv::Scalar(255,0,0), 2);
cout << "Rotation Vector " << endl << rotation_vector << endl;
cout << "Translation Vector" << endl << translation_vector << endl;
cout << nose_end_point2D << endl;
```
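To inspect the annotated result, one could for example display or save the image (the window and file names here are arbitrary choices):

```cpp
cv::imshow("Head Pose", im);   // shows the key points and the nose direction line
cv::waitKey(0);
// or save it: cv::imwrite("headPose_out.jpg", im);
```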
Computing the actual distance from the camera to the center of the measured object
```cpp
// Euclidean norm of the translation vector, i.e. the distance in the model's units;
// the /10 rescales the result to coarser units (e.g. mm to cm), depending on the units of the 3D model points
float distance = sqrt(translation_vector.at<double>(0,0) * translation_vector.at<double>(0,0)
                    + translation_vector.at<double>(1,0) * translation_vector.at<double>(1,0)
                    + translation_vector.at<double>(2,0) * translation_vector.at<double>(2,0)) / 10;
```
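Equivalently, cv::norm (L2 norm by default) computes the same Euclidean length:

```cpp
float distance = cv::norm(translation_vector) / 10;   // same result as the expanded form above
```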
Solving for the rotation and translation
```cpp
cv::solvePnP(model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);
```
Rodrigues()
Rodrigues() converts a rotation vector into a rotation matrix, and can equally convert a rotation matrix back into a rotation vector.
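A minimal sketch using the rotation_vector obtained from solvePnP above:

```cpp
cv::Mat rotation_mat;                             // 3x3 rotation matrix R
cv::Rodrigues(rotation_vector, rotation_mat);     // rotation vector -> rotation matrix
// cv::Rodrigues(rotation_mat, rotation_vector);  // the reverse direction: matrix -> vector
```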
Solving for the Euler angles
```cpp
// rotation_vector and translation_vector come from the solvePnP call above
cv::Mat rotation_mat;                                     // 3x3 rotation matrix R
cv::Mat pose_mat;                                         // 3x4 [R | t] pose matrix
cv::Mat euler_angle;                                      // 3x1 Euler angles in degrees
cv::Mat out_intrinsics, out_rotation, out_translation;
cv::Rodrigues(rotation_vector, rotation_mat);             // rotation vector -> rotation matrix
cv::hconcat(rotation_mat, translation_vector, pose_mat);  // build [R | t]
cv::decomposeProjectionMatrix(pose_mat, out_intrinsics, out_rotation, out_translation,
                              cv::noArray(), cv::noArray(), cv::noArray(), euler_angle);
```
Computing the camera position
The camera's position in the world frame is given by:
cam_world_pos = -R⁻¹ · tvec = -Rᵀ · tvec (for a rotation matrix the inverse equals the transpose; this formula has been verified in many blog posts).
```python
# Calculate camera position (NumPy version; rotation_mat is the 3x3 matrix from Rodrigues)
camera_position = -np.matrix(rotation_mat).T * np.matrix(translation_vector)
```
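An equivalent C++ sketch using the cv::Mat objects from the snippets above:

```cpp
// Camera position in world coordinates: C = -R^T * t
cv::Mat camera_position = -rotation_mat.t() * translation_vector;
```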
How to draw the coordinate axes
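One way, sketched here with the same projectPoints approach as the nose line above (the 500-unit axis length and the colors are arbitrary choices):

```cpp
// Endpoints of the three model-frame axes, in model units
std::vector<cv::Point3d> axis_points3D;
axis_points3D.push_back(cv::Point3d(500, 0, 0));   // X axis
axis_points3D.push_back(cv::Point3d(0, 500, 0));   // Y axis
axis_points3D.push_back(cv::Point3d(0, 0, 500));   // Z axis
std::vector<cv::Point2d> axis_points2D;
cv::projectPoints(axis_points3D, rotation_vector, translation_vector,
                  camera_matrix, dist_coeffs, axis_points2D);
// Draw the axes from the projected model origin (here the nose tip, image_points[0])
cv::line(im, image_points[0], axis_points2D[0], cv::Scalar(0, 0, 255), 2);   // X axis in red
cv::line(im, image_points[0], axis_points2D[1], cv::Scalar(0, 255, 0), 2);   // Y axis in green
cv::line(im, image_points[0], axis_points2D[2], cv::Scalar(255, 0, 0), 2);   // Z axis in blue
```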
Solving for the 3D attitude Euler angles
- Rotation matrix R
The rotation matrix above can be converted into the three attitude angles (roll, yaw, pitch) with the code below; atan2() is the inverse trigonometric function used.
r11–r33 are the entries extracted from the rotation matrix, stored as double variables (see the extraction sketch after the code).
```cpp
// Compute the Euler angles of rotation about the three axes of the camera coordinate frame;
// applying these rotations transforms into the world coordinate frame.
// Rotation order: z, y, x
const double PI = 3.141592653;
double thetaz = atan2(r21, r11) / PI * 180;                            // rotation about z (yaw), in degrees
double thetay = atan2(-1 * r31, sqrt(r32*r32 + r33*r33)) / PI * 180;   // rotation about y (pitch), in degrees
double thetax = atan2(r32, r33) / PI * 180;                            // rotation about x (roll), in degrees
```
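For reference, a sketch of how r11–r33 could be read out of the rotation_mat computed earlier:

```cpp
// Read the nine entries of the 3x3 rotation matrix into doubles
double r11 = rotation_mat.at<double>(0, 0), r12 = rotation_mat.at<double>(0, 1), r13 = rotation_mat.at<double>(0, 2);
double r21 = rotation_mat.at<double>(1, 0), r22 = rotation_mat.at<double>(1, 1), r23 = rotation_mat.at<double>(1, 2);
double r31 = rotation_mat.at<double>(2, 0), r32 = rotation_mat.at<double>(2, 1), r33 = rotation_mat.at<double>(2, 2);
```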