M-LOAM(INITIALIZATION)
Article Analysis
- Scan-Based Motion Estimation
Scan-based motion estimation searches for correspondences between consecutive frames of each LiDAR (the LiDARs are processed in parallel via omp parallel) and then estimates the frame-to-frame transformation by minimizing the residual errors of all considered features.
For two adjacent scans of a single laser at frame k, planar features and corner features are treated separately.
Planar feature
The point-to-plane distance is expressed with the point coordinates and the plane's normal vector.
The detailed analysis is as follows (the original formula is reconstructed below):
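A minimal sketch, assuming the unit normal $w$ and offset $d$ are fitted from the matched surf points of the previous scan, consistent with the factor code later in this post:

$$
r_{\mathcal{H}} = w^{\top}\left(\mathbf{R}\,p_i + \mathbf{t}\right) + d, \qquad \lVert w \rVert = 1,
$$

where $(\mathbf{R}, \mathbf{t})$ is the frame-to-frame transform being estimated and $p_i$ is the current feature point.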
The edge residual is finally constructed as follows (hedged reconstruction below):
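A reconstruction consistent with the edge-factor code later in this post, where $p_a, p_b$ are the two matched corner points from the previous scan:

$$
r_{\mathcal{E}} = \frac{\left(\hat{p}_i - p_a\right)\times\left(\hat{p}_i - p_b\right)}{\lVert p_a - p_b \rVert}, \qquad \hat{p}_i = \mathbf{R}\,p_i + \mathbf{t},
$$

whose norm is the point-to-line distance; keeping it as a 3-vector is what later allows weighting by a covariance matrix.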
A notable feature of M-LOAM's initialization: it provides additional constraints on the state, and the residual is represented as a vector, which allows it to be multiplied by a covariance matrix.
The final objective function is constructed as follows (hedged reconstruction below):
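A minimal reconstruction, assuming a robust loss $\rho$ and the plane/edge residuals defined above:

$$
\hat{\mathbf{T}} = \arg\min_{\mathbf{R},\,\mathbf{t}} \; \sum_{i\in\mathcal{H}} \rho\!\left(\lVert r_{\mathcal{H},i}\rVert^{2}\right) + \sum_{j\in\mathcal{E}} \rho\!\left(\lVert r_{\mathcal{E},j}\rVert^{2}\right).
$$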
After obtaining the point cloud of the incremental motion, the points are transformed to the last frame for distortion correction: the points collected over [t~k-1~,t~k~] are corrected by transforming them to time t~k-1~, which requires the rotation matrix and translation from the current time back to t~k-1~. Within [t~k-1~,t~k~], the pose at each intermediate time t~i~ is obtained by linear interpolation, as sketched below.
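The interpolation, consistent with the slerp seen later in the Evaluate code (the ratio symbol $s_i$ is assumed notation):

$$
s_i = \frac{t_i - t_{k-1}}{t_k - t_{k-1}}, \qquad
q_{k-1,i} = \operatorname{slerp}\!\left(\mathbf{I},\; q_{k-1,k},\; s_i\right), \qquad
t_{k-1,i} = s_i \, t_{k-1,k}.
$$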
- Calibration of Multi-LiDAR System
The extrinsic parameters are initialized by aligning the motion sequences of the two sensors.
Hand-eye calibration is used during the extrinsic calibration.
Hand-eye calibration is carried out in two steps; here t~li~^b^ and R~li~^b^ are the unknowns.
- Rotation Initialization:
The equation above is rewritten using quaternions; converting it into a linear system makes it easy to stack the quaternion constraints from multiple time intervals, as reconstructed below.
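A hedged reconstruction of the hand-eye rotation constraint and its linear quaternion form, writing the left/right quaternion-product matrices as $\mathcal{L}(\cdot)$ and $\mathcal{R}(\cdot)$ (notation assumed):

$$
q^{b}_{k,k+1} \otimes q^{b}_{l_i} = q^{b}_{l_i} \otimes q^{l_i}_{k,k+1}
\;\;\Longrightarrow\;\;
\left[\mathcal{L}\!\left(q^{b}_{k,k+1}\right) - \mathcal{R}\!\left(q^{l_i}_{k,k+1}\right)\right] q^{b}_{l_i} = \mathbf{0}.
$$

Stacking the constraints of $K$ intervals gives $\mathbf{Q}_K\, q^{b}_{l_i} = \mathbf{0}$, solved as the right singular vector associated with the smallest singular value of $\mathbf{Q}_K$.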
(Sufficient motion excitation is required.) If the motion constraints are sufficient, the 3-DoF rotation is observable: the null space of Q~k~ has rank 1, and since all 3 DoF are observable the observability matrix has rank 3. If the motion degenerates about one or more axes, the corresponding dimensions become unobservable and the rank of the null space of Q~k~ grows beyond 1 (multiple unobservable dimensions).
The null space corresponds to the unobservable dimensions.
Motion degeneracy makes those dimensions unobservable.
Pose degeneracy about multiple axes increases the number of unobservable dimensions represented by the null space of Q~k~, so its null-space rank exceeds 1.
- Translation Initialization:
This runs after the rotational calibration: following the hand-eye formulation, the translation is calibrated using the already-calibrated rotation (see the reconstruction below). Because the paper only considers planar motion, the translation t~z~ along z is unobservable (it is handled in the later refinement stage).
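A hedged reconstruction of the translation part of the hand-eye equation, with the rotation $R^{b}_{l_i}$ already calibrated:

$$
\left(R^{b}_{k,k+1} - \mathbf{I}_3\right) t^{b}_{l_i} = R^{b}_{l_i}\, t^{l_i}_{k,k+1} - t^{b}_{k,k+1}.
$$

Under planar motion the rotation axis stays along $z$, so the third row of $\left(R^{b}_{k,k+1} - \mathbf{I}_3\right)$ vanishes and $t_z$ cannot be recovered.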
Code Analysis of INITIALIZATION
The initialization starts in void Estimator::process().
Initialization is carried out on the feature set produced in the Measurement Processing stage.
First check if (ESTIMATE_EXTRINSIC == 2) (2: no prior about the extrinsic parameters; they will be initialized and then optimized around).
Following the paper, hand-eye calibration is used to initialize the extrinsic parameters.
Here **#pragma omp parallel for num_threads(NUM_OF_LASER)** is used cleverly: each laser's data processing and calibration runs in its own thread, so the initial extrinsic estimates are computed independently.
Point-cloud tracking is performed by the function **trackCloud**; a minimal standalone sketch of this threading pattern follows.
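A standalone sketch of the per-LiDAR threading pattern (not the repository's Estimator code; the Pose struct and trackCloudStub are placeholders): each laser writes only to its own slot, so the threads need no locking.

```cpp
// Standalone illustration of "#pragma omp parallel for num_threads(NUM_OF_LASER)":
// each laser's frame-to-frame tracking runs in its own thread.
#include <omp.h>
#include <cstdio>
#include <vector>

struct Pose { double q[4] = {1, 0, 0, 0}; double t[3] = {0, 0, 0}; };

// Placeholder for the real trackCloud(): returns a fake incremental pose.
static Pose trackCloudStub(int laser_id)
{
    Pose p;
    p.t[0] = 0.1 * laser_id;
    return p;
}

int main()
{
    const int NUM_OF_LASER = 2;
    std::vector<Pose> pose_rlt(NUM_OF_LASER);   // per-laser incremental pose

    #pragma omp parallel for num_threads(NUM_OF_LASER)
    for (int n = 0; n < NUM_OF_LASER; n++)
        pose_rlt[n] = trackCloudStub(n);        // independent per-laser estimate

    for (int n = 0; n < NUM_OF_LASER; n++)
        std::printf("laser %d: tx = %.2f\n", n, pose_rlt[n].t[0]);
    return 0;
}
```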
Step 1: insert the corrected point cloud of the previous frame into a k-d tree for nearest-neighbor search.
```cpp
PointICloudPtr corner_points_last = boost::make_shared<PointICloud>(prev_cloud_feature.find("corner_points_less_sharp")->second);
PointICloudPtr surf_points_last = boost::make_shared<PointICloud>(prev_cloud_feature.find("surf_points_less_flat")->second);
kdtree_corner_last->setInputCloud(corner_points_last);
kdtree_surf_last->setInputCloud(surf_points_last);
```
Step 2: get the point clouds of the current frame's feature points.
```cpp
PointICloudPtr corner_points_sharp = boost::make_shared<PointICloud>(cur_cloud_feature.find("corner_points_sharp")->second);
PointICloudPtr surf_points_flat = boost::make_shared<PointICloud>(cur_cloud_feature.find("surf_points_flat")->second);
```
Step 3: perform the initial motion estimation (frame-to-frame pose) by nonlinear least squares, minimizing the planar factors and corner factors.
```cpp
f_extract_.matchCornerFromScan(kdtree_corner_last, *corner_points_last, *corner_points_sharp, pose_local, corner_scan_features);
```
This performs scan-line-based corner feature matching:
matchCornerFromScan matches corner features against the previous scan.
Flow: the current-frame sharp corner points (corner_points_sharp) are projected into the t~k-1~ frame with TransformToStart, while the corrected corner points of the previous frame (corner_points_last) are held in kdtree_corner_from_scan (trackCloud already inserted prev_cloud_feature into kdtree_corner_last). nearestKSearch then looks up the nearest neighbor of the projected point point_sel; since this is an edge factor, two nearest points are needed. If if (min_point_ind2 >= 0) holds, i.e. both nearest points exist, their coordinates are stored in coeff, and the feature keeps three pieces of information: (idx_, point_, coeffs_).
```cpp
void FeatureExtract::matchCornerFromScan(const typename pcl::KdTreeFLANN<PointType>::Ptr &kdtree_corner_from_scan,
const typename pcl::PointCloud<PointType> &cloud_scan,
const typename pcl::PointCloud<PointType> &cloud_data,
const Pose &pose_local,
std::vector<PointPlaneFeature> &features)
{
if (!pcl::traits::has_field<PointType, pcl::fields::intensity>::value)
{
std::cerr << "[FeatureExtract] Point does not have intensity field!" << std::endl;
exit(EXIT_FAILURE);
}
/*f_extract_.matchCornerFromScan(kdtree_corner_last, *corner_points_last, *corner_points_sharp, pose_local, corner_scan_features); */
size_t cloud_size = cloud_data.points.size();/*corner feature size */
features.resize(cloud_size);
size_t cloud_cnt = 0;
PointType point_sel;
std::vector<int> point_search_ind;
std::vector<float> point_search_sqdis;
for (size_t i = 0; i < cloud_data.points.size(); i++)
{
// not consider distortion
/* transform to the t_{k-1} frame to search against the previous corrected cloud */
TransformToStart(cloud_data.points[i], point_sel, pose_local, false, SCAN_PERIOD);
/*from last corner set to find the nearest correspondent for the corner projection*/
kdtree_corner_from_scan->nearestKSearch(point_sel, 1, point_search_ind, point_search_sqdis);
int closest_point_ind = -1, min_point_ind2 = -1;
if (point_search_sqdis[0] < DISTANCE_SQ_THRESHOLD)/* check whether the nearest neighbor is within the distance threshold DISTANCE_SQ_THRESHOLD */
{
closest_point_ind = point_search_ind[0];/* index of the closest scan point */
int closest_point_scan_id = int(cloud_scan.points[closest_point_ind].intensity);
/* find a second nearest point on a nearby scan line to form the edge feature */
float min_point_sqdis2 = DISTANCE_SQ_THRESHOLD;
// search in the direction of increasing scan line
for (int j = closest_point_ind + 1; j < (int)cloud_scan.points.size(); j++)
{
// if in the same scan line, continue
if (int(cloud_scan.points[j].intensity) <= closest_point_scan_id)
continue;
// if not in nearby scans, end the loop
if (int(cloud_scan.points[j].intensity) > (closest_point_scan_id + NEARBY_SCAN))
break;
float point_sqdis = sqrSum(cloud_scan.points[j].x - point_sel.x,
cloud_scan.points[j].y - point_sel.y,
cloud_scan.points[j].z - point_sel.z);
if (point_sqdis < min_point_sqdis2)
{
// find nearer point
min_point_sqdis2 = point_sqdis;
min_point_ind2 = j;
}
}
// search in the direction of decreasing scan line
for (int j = closest_point_ind - 1; j >= 0; j--)
{
// if in the same scan line, continue
if (int(cloud_scan.points[j].intensity) >= closest_point_scan_id)
continue;
// if not in nearby scans, end the loop
if (int(cloud_scan.points[j].intensity) < (closest_point_scan_id - NEARBY_SCAN))
break;
float point_sqdis = sqrSum(cloud_scan.points[j].x - point_sel.x,
cloud_scan.points[j].y - point_sel.y,
cloud_scan.points[j].z - point_sel.z);
if (point_sqdis < min_point_sqdis2)
{
// find nearer point
min_point_sqdis2 = point_sqdis;
min_point_ind2 = j;
}
}
}
if (min_point_ind2 >= 0) // both closest_point_ind and min_point_ind2 is valid
{
Eigen::Matrix<double, 6, 1> coeff;
coeff(0) = cloud_scan.points[closest_point_ind].x,
coeff(1) = cloud_scan.points[closest_point_ind].y,
coeff(2) = cloud_scan.points[closest_point_ind].z;
coeff(3) = cloud_scan.points[min_point_ind2].x,
coeff(4) = cloud_scan.points[min_point_ind2].y,
coeff(5) = cloud_scan.points[min_point_ind2].z;
PointPlaneFeature feature;
feature.idx_ = i;
feature.point_ = Eigen::Vector3d{cloud_data.points[i].x, cloud_data.points[i].y, cloud_data.points[i].z};
feature.coeffs_ = coeff;/* the two corresponding corner points {x,y,z} of this edge feature */
features[cloud_cnt] = feature;
cloud_cnt++;
}
}
features.resize(cloud_cnt);
}
```
Surf features are extracted from the scan via **f_extract_.matchSurfFromScan(kdtree_surf_last, *surf_points_last, *surf_points_flat, pose_local, surf_scan_features);**
```cpp
void FeatureExtract::matchSurfFromScan(const typename pcl::KdTreeFLANN<PointType>::Ptr &kdtree_surf_from_scan,
const typename pcl::PointCloud<PointType> &cloud_scan,
const typename pcl::PointCloud<PointType> &cloud_data,
const Pose &pose_local,
std::vector<PointPlaneFeature> &features)
{
if (!pcl::traits::has_field<PointType, pcl::fields::intensity>::value)
{
std::cerr << "[FeatureExtract] Point does not have intensity field!" << std::endl;
exit(EXIT_FAILURE);
}
features.clear();
PointType point_sel;
std::vector<int> point_search_ind;
std::vector<float> point_search_sqdis;
for (size_t i = 0; i < cloud_data.points.size(); i++)
{
// not consider distortion
/* project the point to the start (t_{k-1}) frame */
TransformToStart(cloud_data.points[i], point_sel, pose_local, false, SCAN_PERIOD);
/* then search the previous surf points for the nearest neighbor of the projected point */
kdtree_surf_from_scan->nearestKSearch(point_sel, 1, point_search_ind, point_search_sqdis);
/* for a surf point, three non-collinear points are selected to define the plane */
/* min_point_ind3 must lie on an adjacent scan line, while min_point_ind2 and closest_point_ind are on the same or adjacent scan lines */
int closest_point_ind = -1, min_point_ind2 = -1, min_point_ind3 = -1;
if (point_search_sqdis[0] < DISTANCE_SQ_THRESHOLD)
{
closest_point_ind = point_search_ind[0];/* index of the closest point */
// get closest point's scan ID
int closest_point_scan_id = int(cloud_scan.points[closest_point_ind].intensity);
float min_point_sqdis2 = DISTANCE_SQ_THRESHOLD, min_point_sqdis3 = DISTANCE_SQ_THRESHOLD;
#pragma region search neighbors of closest_point_ind in the directions of increasing and decreasing scan line
// search in the direction of increasing scan line
for (int j = closest_point_ind + 1; j < (int)cloud_scan.points.size(); j++)
{
// if not in nearby scans, end the loop
if (int(cloud_scan.points[j].intensity) > (closest_point_scan_id + NEARBY_SCAN))
break;
float point_sqdis = sqrSum(cloud_scan.points[j].x - point_sel.x,
cloud_scan.points[j].y - point_sel.y,
cloud_scan.points[j].z - point_sel.z);
// if in the same or lower scan line
if (int(cloud_scan.points[j].intensity) <= closest_point_scan_id && point_sqdis < min_point_sqdis2)
{
min_point_sqdis2 = point_sqdis;
min_point_ind2 = j;
}
// if in the higher scan line
else if (int(cloud_scan.points[j].intensity) > closest_point_scan_id && point_sqdis < min_point_sqdis3)
{
min_point_sqdis3 = point_sqdis;
min_point_ind3 = j;
}
}
// search in the direction of decreasing scan line
for (int j = closest_point_ind - 1; j >= 0; j--)
{
// if not in nearby scans, end the loop
if (int(cloud_scan.points[j].intensity) < (closest_point_scan_id - NEARBY_SCAN))
break;
float point_sqdis = sqrSum(cloud_scan.points[j].x - point_sel.x,
cloud_scan.points[j].y - point_sel.y,
cloud_scan.points[j].z - point_sel.z);
// if in the same or higher scan line
if (int(cloud_scan.points[j].intensity) >= closest_point_scan_id && point_sqdis < min_point_sqdis2)
{
min_point_sqdis2 = point_sqdis;
min_point_ind2 = j;
}
else if (int(cloud_scan.points[j].intensity) < closest_point_scan_id && point_sqdis < min_point_sqdis3)
{
// find nearer point
min_point_sqdis3 = point_sqdis;
min_point_ind3 = j;
}
}
#pragma endregion
if (min_point_ind2 >= 0 && min_point_ind3 >= 0)
{
Eigen::Vector3f last_point_j(cloud_scan.points[closest_point_ind].x,
cloud_scan.points[closest_point_ind].y,
cloud_scan.points[closest_point_ind].z);
Eigen::Vector3f last_point_l(cloud_scan.points[min_point_ind2].x,
cloud_scan.points[min_point_ind2].y,
cloud_scan.points[min_point_ind2].z);
Eigen::Vector3f last_point_m(cloud_scan.points[min_point_ind3].x,
cloud_scan.points[min_point_ind3].y,
cloud_scan.points[min_point_ind3].z);
Eigen::Vector3f w = (last_point_j - last_point_l).cross(last_point_j - last_point_m);/* normal vector of the plane */
w.normalize();/* make the normal a unit vector */
float negative_OA_dot_norm = -w.dot(last_point_j);/* the plane offset d, so that w·p + d = 0 for points on the plane */
float pd2 = -(w.x() * point_sel.x + w.y() * point_sel.y + w.z() * point_sel.z + negative_OA_dot_norm); // distance
float s = 1 - 0.9f * fabs(pd2) / sqrt(sqrSum(point_sel.x, point_sel.y, point_sel.z));
Eigen::Vector4d coeff(w.x(), w.y(), w.z(), negative_OA_dot_norm);
PointPlaneFeature feature;
feature.idx_ = i;
feature.point_ = Eigen::Vector3d{cloud_data.points[i].x, cloud_data.points[i].y, cloud_data.points[i].z};
feature.coeffs_ = coeff;
feature.type_ = 's';
features.push_back(feature);
}
}
}
}
```
Check whether the total number of corner and surf features exceeds 10.
If it does, plane features are turned into residuals through LidarScanPlaneNormFactor,
and the corner features in corner_scan_features through LidarScanEdgeFactorVector.
For the plane feature, AddResidualBlock relies on an overridden Evaluate (the Jacobians are computed analytically); a hedged sketch of the registration step is given below.
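A hedged sketch of how these factors are likely registered with Ceres. The factor constructors' argument lists and the names loss_function, para_pose, and local_parameterization are assumptions, not the verbatim repository code; only the Ceres calls themselves (AddParameterBlock, AddResidualBlock, HuberLoss) are standard API.

```cpp
// Hypothetical registration of the scan-to-scan factors with ceres::Problem.
// Constructor arguments of the Lidar*Factor classes are assumed.
if (corner_scan_features.size() + surf_scan_features.size() > 10)
{
    ceres::LossFunction *loss_function = new ceres::HuberLoss(0.1);
    problem.AddParameterBlock(para_pose, 7, local_parameterization);   // [t(3), q(4)]

    for (const PointPlaneFeature &f : surf_scan_features)
        problem.AddResidualBlock(new LidarScanPlaneNormFactor(f.point_, f.coeffs_, 1.0),
                                 loss_function, para_pose);            // 1-D plane residual

    for (const PointPlaneFeature &f : corner_scan_features)
        problem.AddResidualBlock(new LidarScanEdgeFactorVector(f.point_, f.coeffs_, 1.0),
                                 loss_function, para_pose);            // 3-D edge residual
}
```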
```cpp
bool Evaluate(double const *const *param, double *residuals, double **jacobians) const
{
Eigen::Quaterniond q_last_curr(param[0][6], param[0][3], param[0][4], param[0][5]);
Eigen::Vector3d t_last_curr(param[0][0], param[0][1], param[0][2]);
/* interpolate the relative pose to the point's timestamp */
q_last_curr = Eigen::Quaterniond::Identity().slerp(s_, q_last_curr);/* spherical interpolation of the relative rotation */
t_last_curr = s_ * t_last_curr;
/* feature coefficients (plane normal w and offset d) computed during scan matching */
Eigen::Vector3d w(coeff_(0), coeff_(1), coeff_(2));
double d = coeff_(3);
double a = w.dot(q_last_curr * point_ + t_last_curr) + d;/* signed point-to-plane distance */
residuals[0] = a;/* residual */
if (jacobians)
{
Eigen::Matrix3d R = q_last_curr.toRotationMatrix();
if (jacobians[0])
{
Eigen::Map<Eigen::Matrix<double, 1, 7, Eigen::RowMajor>> jacobian_pose(jacobians[0]);
Eigen::Matrix<double, 1, 6> jaco; // [dy/dt, dy/dR, 1]
jaco.setZero();
jaco.leftCols<3>() = w.transpose();
jaco.rightCols<3>() = -w.transpose() * R * Utility::skewSymmetric(point_);
jacobian_pose.setZero();
jacobian_pose.leftCols<6>() = jaco;
}
}
return true;
}
```
The edge factor also overrides Evaluate to build its residual. The paper makes it explicit that the edge residual no longer acts as a scalar but is represented as a vector, allowing it to be multiplied by a 3x3 covariance matrix; however, the code does not appear to contain such an example(?).
```cpp
bool Evaluate(double const *const *param, double *residuals, double **jacobians) const
{
/*set q_last_curr and t_last_curr as the states*/
Eigen::Quaterniond q_last_curr(param[0][6], param[0][3], param[0][4], param[0][5]);
Eigen::Vector3d t_last_curr(param[0][0], param[0][1], param[0][2]);
/* interpolate the relative pose to the point's timestamp */
q_last_curr = Eigen::Quaterniond::Identity().slerp(s_, q_last_curr);
t_last_curr = s_ * t_last_curr;
/* lpa and lpb are the two matched corner points from the last frame; lp is the current point projected to t_{k-1} */
Eigen::Vector3d lpa(coeff_(0), coeff_(1), coeff_(2));
Eigen::Vector3d lpb(coeff_(3), coeff_(4), coeff_(5));
Eigen::Vector3d lp = q_last_curr * point_ + t_last_curr;
Eigen::Vector3d nu = (lp - lpa).cross(lp - lpb);/* cross product: area vector of the parallelogram spanned by the two edges */
Eigen::Vector3d de = lpa - lpb;/* base edge vector (lpa - lpb) */
residuals[0] = nu.x() / de.norm();/* vector whose norm is the point-to-line distance (the triangle height) */
residuals[1] = nu.y() / de.norm();
residuals[2] = nu.z() / de.norm();
/*edge residual represented as vectors allowing us to multiply a 3*3 covariance matrix*/
// residuals[0] = nu.norm / de.norm();
if (jacobians)
{
Eigen::Matrix3d R = q_last_curr.toRotationMatrix();
if (jacobians[0])
{
Eigen::Map<Eigen::Matrix<double, 3, 7, Eigen::RowMajor>> jacobian_pose(jacobians[0]);
Eigen::Matrix<double, 3, 6> jaco; // [dy/dt, dy/dq, 1]
double eta = 1.0 / de.norm();
jaco.leftCols<3>() = -eta * Utility::skewSymmetric(lpa - lpb);
jaco.rightCols<3>() = eta * Utility::skewSymmetric(lpa - lpb) * R * Utility::skewSymmetric(point_);
jacobian_pose.setZero();
jacobian_pose.leftCols<6>() = jaco;
}
}
return true;
}
```
**After the corresponding edge and surf factors are set up, the optimization is executed.**
```cpp
// step 3: optimization
TicToc t_solver;
ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_SCHUR;
options.max_num_iterations = 4;
// options.max_solver_time_in_seconds = 0.005;
options.minimizer_progress_to_stdout = false;
// options.check_gradients = false;
// options.gradient_check_relative_precision = 1e-4;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
// std::cout << summary.BriefReport() << std::endl;
// std::cout << summary.FullReport() << std::endl;
// printf("solver time %f ms \n", t_solver.toc());
```
At this point the frame-to-frame feature-matching optimization for pose_laser_cur_ is complete.
The optimized pose is then read into pose_prev_cur and returned as the optimized frame-to-frame pose.
(This achieves the update of the current incremental pose; what is updated is the incremental pose.)
```cpp
Pose pose_prev_cur(Eigen::Quaterniond(para_pose[6], para_pose[3], para_pose[4], para_pose[5]),
Eigen::Vector3d(para_pose[0], para_pose[1], para_pose[2]));
return pose_prev_cur;
```
Through pose_laser_cur_, the optimized incremental quaternion and translation of the primary LiDAR (IDX_REF) are passed to Qs_ and Ts_,
i.e. the incremental pose obtained from the frame-to-frame feature matching.
```cpp
Qs_[cir_buf_cnt_] = pose_laser_cur_[IDX_REF].q_;
Ts_[cir_buf_cnt_] = pose_laser_cur_[IDX_REF].t_;
Header_[cir_buf_cnt_].stamp = ros::Time(cur_feature_.first);
```
The current frame's features are downsampled with a voxel filter.
```cpp
for (size_t n = 0; n < NUM_OF_LASER; n++)
{
/* downsample the corner point features */
PointICloud &corner_points = cur_feature_.second[n]["corner_points_less_sharp"];
down_size_filter_corner_.setInputCloud(boost::make_shared<PointICloud>(corner_points));
down_size_filter_corner_.filter(corner_points_stack_[n][cir_buf_cnt_]);
corner_points_stack_size_[n][cir_buf_cnt_] = corner_points_stack_[n][cir_buf_cnt_].size();/* corner point count of the n-th LiDAR */
/* likewise, downsample the surf points */
PointICloud &surf_points = cur_feature_.second[n]["surf_points_less_flat"];/*surf feature*/
down_size_filter_surf_.setInputCloud(boost::make_shared<PointICloud>(surf_points));/*input cloud feature*/
down_size_filter_surf_.filter(surf_points_stack_[n][cir_buf_cnt_]);/* apply the downsize filter */
surf_points_stack_size_[n][cir_buf_cnt_] = surf_points_stack_[n][cir_buf_cnt_].size();
}
```
Because the extrinsics still need to be calibrated automatically, solver_flag_ stays in the initialization state and the case INITIAL branch keeps running; slideWindow() continuously pushes the frame-to-frame optimized poses into the sliding window.
```cpp
case INITIAL:/*set initial*/
{
printf("[INITIAL]\n");
/* push Q, T, surf points and corner points into the sliding window */
slideWindow();/* does this insert the first frame twice in a row? */
if (cir_buf_cnt_ < WINDOW_SIZE)
{
cir_buf_cnt_++;
if (cir_buf_cnt_ == WINDOW_SIZE)
{
slideWindow(); /* once the window is full, slide once more to drop the duplicated first frame */
}
}
```
Then, for each LiDAR, the feature points associated with the current optimized pose are downsampled; depending on solver_flag_, the current frame's features become the input features of the next frame. After the sliding window has stored the corner features, surf features, and the incremental optimized poses Qs_ and Ts_, the less-sharp corner points and less-flat surf points used for frame-to-frame matching are stored as well; they come from the un-downsampled cur_feature_ container.
```cpp
prev_time_ = cur_time_;
prev_feature_.first = prev_time_;
prev_feature_.second.clear();
prev_feature_.second.resize(NUM_OF_LASER);
for (size_t n = 0; n < NUM_OF_LASER; n++)
{
prev_feature_.second[n].insert(make_pair("corner_points_less_sharp",
cur_feature_.second[n].find("corner_points_less_sharp")->second));
prev_feature_.second[n].insert(make_pair("surf_points_less_flat",
cur_feature_.second[n].find("surf_points_less_flat")->second));
}
```
After the motion, the cloud points contain motion distortion introduced by the LiDAR's rotating, sliced scanning. The handling strategy is: once the incremental motion has been solved, these cloud points are transformed into the last frame and then corrected.
This is implemented as follows:
pose_laser_cur stores the current rotation (quaternion) and translation, and pose_undist stores the incremental poses obtained from the frame-to-frame matching optimization. pose_undist[IDX_REF] = pose_laser_prev_.inverse() * pose_laser_cur gives the increment of the primary LiDAR; the for loop below converts each LiDAR's incremental motion from the body frame into that LiDAR's own frame, i.e. its projection in the laser frame. Finally, undistortMeasurements(pose_undist) transforms the current frame's cloud points back to the start time t~k-1~ of [t~k-1~,t~k~], completing the distortion correction.
```cpp
/* if motion distortion is present */
if (DISTORTION)
{
/* pose of the current frame */
Pose pose_laser_cur = Pose(Qs_[cir_buf_cnt_ - 1], Ts_[cir_buf_cnt_ - 1]);/* current pose */
std::vector<Pose> pose_undist = pose_rlt_;/* pose_rlt_ stores the optimized incremental pose of each LiDAR */
pose_undist[IDX_REF] = pose_laser_prev_.inverse() * pose_laser_cur;
for (size_t n = 0; n < NUM_OF_LASER; n++)
{
/*body 2 laser frame*/
Pose pose_ext(qbl_[n], tbl_[n]);/*pose extrinsic parameter*/
/*pose_ext.inverse() -> laser 2 body frame
* pose_rlt_ in body frame
* pose_ext -> body 2 laser frame
* finally :pose_undist is in laser frame
* */
pose_undist[n] = pose_ext.inverse() * pose_rlt_[IDX_REF] * pose_ext;
/* the incremental motion expressed as its projection in the laser frame */
}
undistortMeasurements(pose_undist);
pose_laser_prev_ = pose_laser_cur;
}
```
Each time the frame-to-frame motion estimation finishes and the optimized incremental motion is obtained, initial_extrinsics_.addPose(pose_rlt_) adds the relative motion to initial_extrinsics_; once enough movement has been accumulated, the extrinsic parameters are calibrated.
addPose mainly calls checkScrewMotion to check the consistency between the motions of the primary and auxiliary LiDARs; pose pairs whose rotation and translation deviations are below the thresholds are kept and pushed into pq_pose_ via pq_pose_.push().
A small trick here: pq_pose_.size() serves as the index of the accepted incremental pose pairs.
Likewise, calibration only runs when if (initial_extrinsics_.addPose(pose_rlt_) && (cir_buf_cnt_ == WINDOW_SIZE)) holds, which guarantees sufficient optimized relative movement before the extrinsic parameters are calibrated. A hedged sketch of the screw-motion check follows.
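A hedged sketch of the screw-motion consistency test that checkScrewMotion presumably performs (thresholds, defaults, and the signature are assumptions, not the verbatim repository function): by the screw congruence theorem, the relative motions of the primary and an auxiliary LiDAR over the same interval must share the same rotation angle and the same pitch (translation projected onto the rotation axis).

```cpp
// Hedged sketch of a screw-motion consistency check: accept a pose pair only
// if the rotation angle and the screw pitch of the two relative motions agree.
#include <Eigen/Dense>
#include <cmath>

bool checkScrewMotionSketch(const Eigen::Quaterniond &q_ref, const Eigen::Vector3d &t_ref,
                            const Eigen::Quaterniond &q_data, const Eigen::Vector3d &t_data,
                            double eps_r = 0.05, double eps_t = 0.1)   // thresholds are assumptions
{
    Eigen::AngleAxisd aa_ref(q_ref), aa_data(q_data);
    const double r_dis = std::fabs(aa_ref.angle() - aa_data.angle());  // same rotation angle
    const double t_dis = std::fabs(t_ref.dot(aa_ref.axis()) -
                                   t_data.dot(aa_data.axis()));        // same screw pitch
    return (r_dis < eps_r) && (t_dis < eps_t);
}
```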
Once sufficient movement is reached, the extrinsic parameters are calibrated (hand-eye calibration, corresponding to part 2 of the Article Analysis, Calibration of Multi-LiDAR System).
For each auxiliary LiDAR, the extrinsics are calibrated by hand-eye calibration and the result is returned in calib_result,
and the calibrated extrinsics are written out: QBL[n] = calib_result.q_; TBL[n] = calib_result.t_;
```cpp
for (size_t n = 0; n < NUM_OF_LASER; n++)
{
if (initial_extrinsics_.cov_rot_state_[n]) continue;
Pose calib_result;
/*calibrate the rotation*/
if (initial_extrinsics_.calibExRotation(IDX_REF, n, calib_result))
{/*calibrate the translation*/
if (initial_extrinsics_.calibExTranslation(IDX_REF, n, calib_result))
{
/* print the initial extrinsics of each LiDAR after calibration,
* including:
* quaternion body to laser
* translation body to laser
* */
std::cout << common::YELLOW << "Initial extrinsic of laser_" << n << ": " << calib_result
<< common::RESET << std::endl;
qbl_[n] = calib_result.q_;
tbl_[n] = calib_result.t_;
// tdbl_[n] = calib_result.td_;
QBL[n] = calib_result.q_;
TBL[n] = calib_result.t_;
// TDBL[n] = calib_result.td_;
}
}
}
```
Finally, check whether all LiDARs have finished both the rotation and the translation extrinsic estimation; if they have,
ESTIMATE_EXTRINSIC is set to 1.
```cpp
if ((initial_extrinsics_.full_cov_rot_state_) && (initial_extrinsics_.full_cov_pos_state_))
{
std::cout << common::YELLOW << "All initial extrinsic rotation calib success" << common::RESET << std::endl;
ESTIMATE_EXTRINSIC = 1;
initial_extrinsics_.saveStatistics();
}
```
**Note that CHECK_JACOBIAN is involved when building the MLE problem from corner features.**
Skew-symmetric matrix and vector differentiation (the identities used by the analytic Jacobians) are reconstructed below.
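A hedged reconstruction of the two identities those notes refer to, matching the use of Utility::skewSymmetric in the analytic Jacobians above:

$$
[v]_{\times} =
\begin{bmatrix}
0 & -v_z & v_y \\
v_z & 0 & -v_x \\
-v_y & v_x & 0
\end{bmatrix},
\qquad
\left.\frac{\partial\!\left(\mathbf{R}\exp\!\left([\delta\theta]_{\times}\right) p\right)}{\partial\,\delta\theta}\right|_{\delta\theta = 0}
= -\,\mathbf{R}\,[p]_{\times},
$$

which is why the rotation block of the plane-factor Jacobian is $-w^{\top}\mathbf{R}\,[p]_{\times}$.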