moveit_calibration (Humble) usage notes

Preface

This article supplements and refines the MoveIt 2-based hand-eye calibration (Eye-in-Hand) workflow described in the following reference: https://blog.csdn.net/qq_70851292/article/details/154538798

I. Installation and Modifications

1. Installation

2. Modifications

2.1 Fix handeye_target_aruco.cpp

File path: src/moveit_calibration/moveit_calibration_plugins/handeye_calibration_target/src/handeye_target_aruco.cpp. Replace the createTargetImage and detectTargetPose functions in this file, in their entirety, with the following:

bool HandEyeArucoTarget::createTargetImage(cv::Mat& image) const
{
  cv::Size image_size;
  image_size.width = markers_x_ * (marker_size_ + separation_) - separation_ + 2 * separation_;
  image_size.height = markers_y_ * (marker_size_ + separation_) - separation_ + 2 * separation_;

  try
  {
    cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(dictionary_id_);
    cv::aruco::GridBoard board(cv::Size(markers_x_, markers_y_), float(marker_size_), float(separation_), dictionary);
    board.generateImage(image_size, image, separation_, border_bits_);
  }
  catch (const cv::Exception& e)
  {
    RCLCPP_ERROR_STREAM(LOGGER_CALIBRATION_TARGET, "Aruco target image creation exception: " << e.what());
    return false;
  }

  return true;
}

bool HandEyeArucoTarget::detectTargetPose(cv::Mat& image)
{
  std::lock_guard<std::mutex> base_lock(base_mutex_);
  try
  {
    // Detect aruco board
    aruco_mutex_.lock();
    cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(dictionary_id_);
    // Manually wrap the dictionary in a cv::Ptr to match the legacy detection API
    cv::Ptr<cv::aruco::Dictionary> dictionary_ptr = cv::makePtr<cv::aruco::Dictionary>(dictionary);

    cv::Ptr<cv::aruco::GridBoard> board_ptr = cv::makePtr<cv::aruco::GridBoard>(
        cv::Size(markers_x_, markers_y_), marker_size_real_, marker_separation_real_, dictionary);
    aruco_mutex_.unlock();

    cv::Ptr<cv::aruco::DetectorParameters> params_ptr(new cv::aruco::DetectorParameters());
#if CV_MAJOR_VERSION == 3 && CV_MINOR_VERSION == 2
    params_ptr->doCornerRefinement = true;
#else
    params_ptr->cornerRefinementMethod = cv::aruco::CORNER_REFINE_NONE;
#endif

    std::vector<int> marker_ids;
    std::vector<std::vector<cv::Point2f>> marker_corners;
    // The legacy overload requires dictionary_ptr here
    cv::aruco::detectMarkers(image, dictionary_ptr, marker_corners, marker_ids, params_ptr);
    if (marker_ids.empty())
    {
      RCLCPP_DEBUG_STREAM_THROTTLE(LOGGER_CALIBRATION_TARGET, clock, LOG_THROTTLE_PERIOD, "No aruco marker detected.");
      return false;
    }

    // Refine marker borders (pass board_ptr)
    std::vector<std::vector<cv::Point2f>> rejected_corners;
    cv::aruco::refineDetectedMarkers(image, board_ptr, marker_corners, marker_ids, rejected_corners, camera_matrix_,
                                     distortion_coeffs_);

    // Estimate the aruco board pose (pass board_ptr)
    int valid = cv::aruco::estimatePoseBoard(marker_corners, marker_ids, board_ptr, camera_matrix_, distortion_coeffs_,
                                             rotation_vect_, translation_vect_);

    if (valid == 0)
    {
      RCLCPP_WARN_STREAM_THROTTLE(LOGGER_CALIBRATION_TARGET, clock, LOG_THROTTLE_PERIOD,
                                  "Cannot estimate aruco board pose.");
      return false;
    }

    if (std::log10(std::fabs(rotation_vect_[0])) > 10 || std::log10(std::fabs(rotation_vect_[1])) > 10 ||
        std::log10(std::fabs(rotation_vect_[2])) > 10 || std::log10(std::fabs(translation_vect_[0])) > 10 ||
        std::log10(std::fabs(translation_vect_[1])) > 10 || std::log10(std::fabs(translation_vect_[2])) > 10)
    {
      RCLCPP_WARN_STREAM_THROTTLE(LOGGER_CALIBRATION_TARGET, clock, LOG_THROTTLE_PERIOD,
                                  "Invalid target pose, please check CameraInfo msg.");
      return false;
    }

    cv::Mat image_rgb;
    cv::cvtColor(image, image_rgb, cv::COLOR_GRAY2RGB);
    cv::aruco::drawDetectedMarkers(image_rgb, marker_corners);
    drawAxis(image_rgb, camera_matrix_, distortion_coeffs_, rotation_vect_, translation_vect_, 0.1);
    image = image_rgb;
  }
  catch (const cv::Exception& e)
  {
    RCLCPP_ERROR_STREAM_THROTTLE(LOGGER_CALIBRATION_TARGET, clock, LOG_THROTTLE_PERIOD,
                                 "Aruco target detection exception: " << e.what());
    return false;
  }

  return true;
}

2.2 Fix handeye_target_charuco.cpp

File path: src/moveit_calibration/moveit_calibration_plugins/handeye_calibration_target/src/handeye_target_charuco.cpp. Likewise, replace the createTargetImage and detectTargetPose functions in their entirety:

bool HandEyeCharucoTarget::createTargetImage(cv::Mat& image) const
{
  if (!target_params_ready_)
    return false;
  cv::Size image_size;
  image_size.width = squares_x_ * square_size_pixels_ + 2 * margin_size_pixels_;
  image_size.height = squares_y_ * square_size_pixels_ + 2 * margin_size_pixels_;

  try
  {
    // Restore the missing square_size_meters variable
    float square_size_meters = board_size_meters_ / std::max(squares_x_, squares_y_);
    cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(dictionary_id_);
    cv::aruco::CharucoBoard board(cv::Size(squares_x_, squares_y_), square_size_meters, marker_size_meters_, dictionary);
    board.generateImage(image_size, image, margin_size_pixels_, border_size_bits_);
  }
  catch (const cv::Exception& e)
  {
    RCLCPP_ERROR_STREAM(LOGGER_CALIBRATION_TARGET, "ChArUco target image creation exception: " << e.what());
    return false;
  }

  return true;
}

bool HandEyeCharucoTarget::detectTargetPose(cv::Mat& image)
{
  if (!target_params_ready_)
    return false;
  std::lock_guard<std::mutex> base_lock(base_mutex_);
  try
  {
    charuco_mutex_.lock();
    float square_size_meters = board_size_meters_ / std::max(squares_x_, squares_y_);
    cv::aruco::Dictionary dictionary = cv::aruco::getPredefinedDictionary(dictionary_id_);
    // Manually wrap the dictionary in a cv::Ptr to match the legacy detection API
    cv::Ptr<cv::aruco::Dictionary> dictionary_ptr = cv::makePtr<cv::aruco::Dictionary>(dictionary);
    cv::Ptr<cv::aruco::CharucoBoard> board_ptr = cv::makePtr<cv::aruco::CharucoBoard>(
        cv::Size(squares_x_, squares_y_), square_size_meters, marker_size_meters_, dictionary);
    charuco_mutex_.unlock();

    cv::Ptr<cv::aruco::DetectorParameters> params_ptr(new cv::aruco::DetectorParameters());
#if CV_MAJOR_VERSION == 3 && CV_MINOR_VERSION == 2
    params_ptr->doCornerRefinement = true;
#else
    params_ptr->cornerRefinementMethod = cv::aruco::CORNER_REFINE_NONE;
#endif

    std::vector<int> marker_ids;
    std::vector<std::vector<cv::Point2f>> marker_corners;
    // The legacy overload requires dictionary_ptr here
    cv::aruco::detectMarkers(image, dictionary_ptr, marker_corners, marker_ids, params_ptr);
    if (marker_ids.empty())
    {
      RCLCPP_DEBUG_STREAM_THROTTLE(LOGGER_CALIBRATION_TARGET, clock, 1,
                                   "No aruco marker detected. Dictionary ID: " << dictionary_id_);
      return false;
    }

    // Find ChArUco corners (pass board_ptr)
    std::vector<cv::Point2f> charuco_corners;
    std::vector<int> charuco_ids;
    cv::aruco::interpolateCornersCharuco(marker_corners, marker_ids, image, board_ptr, charuco_corners, charuco_ids,
                                         camera_matrix_, distortion_coeffs_);

    // Estimate the ChArUco board pose (pass board_ptr)
    bool valid = cv::aruco::estimatePoseCharucoBoard(charuco_corners, charuco_ids, board_ptr, camera_matrix_,
                                                     distortion_coeffs_, rotation_vect_, translation_vect_);

    if (!valid)
    {
      RCLCPP_WARN_STREAM_THROTTLE(LOGGER_CALIBRATION_TARGET, clock, 1, "Cannot estimate aruco board pose.");
      return false;
    }

    if (cv::norm(rotation_vect_) > 3.2 || std::log10(std::fabs(translation_vect_[0])) > 4 ||
        std::log10(std::fabs(translation_vect_[1])) > 4 || std::log10(std::fabs(translation_vect_[2])) > 4)
    {
      RCLCPP_WARN_STREAM_THROTTLE(LOGGER_CALIBRATION_TARGET, clock, 1,
                                  "Invalid target pose, please check CameraInfo msg.");
      return false;
    }

    cv::Mat image_rgb;
    cv::cvtColor(image, image_rgb, cv::COLOR_GRAY2RGB);
    cv::aruco::drawDetectedMarkers(image_rgb, marker_corners);
    drawAxis(image_rgb, camera_matrix_, distortion_coeffs_, rotation_vect_, translation_vect_, 0.1);
    image = image_rgb;
  }
  catch (const cv::Exception& e)
  {
    RCLCPP_ERROR_STREAM_THROTTLE(LOGGER_CALIBRATION_TARGET, clock, 1,
                                 "ChArUco target detection exception: " << e.what());
    return false;
  }

  return true;
}

2.3 Replace handeye_calibration_rviz_plugin/CMakeLists.txt

Open src/moveit_calibration/moveit_calibration_gui/handeye_calibration_rviz_plugin/CMakeLists.txt and replace its entire contents with the following:

set(MOVEIT_LIB_NAME moveit_handeye_calibration_rviz_plugin)
set(HEADERS
  include/moveit/handeye_calibration_rviz_plugin/handeye_calibration_display.h
  include/moveit/handeye_calibration_rviz_plugin/handeye_calibration_frame.h
  include/moveit/handeye_calibration_rviz_plugin/handeye_context_widget.h
  include/moveit/handeye_calibration_rviz_plugin/handeye_control_widget.h
  include/moveit/handeye_calibration_rviz_plugin/handeye_target_widget.h
)
set(SOURCE_FILES_CORE
  src/handeye_calibration_display.cpp
  src/handeye_calibration_frame.cpp
  src/handeye_context_widget.cpp
  src/handeye_control_widget.cpp
  src/handeye_target_widget.cpp
)
set(SOURCE_FILES_PLUGINS
  src/plugin_init.cpp
)

find_package(Qt5 ${rviz_QT_VERSION} REQUIRED Core Widgets)
set(QT_LIBRARIES Qt5::Widgets)
macro(qt_wrap_ui)
  qt5_wrap_ui(${ARGN})
endmacro()

set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTOUIC ON)
set(CMAKE_AUTORCC ON)
set(CMAKE_INCLUDE_CURRENT_DIR ON)
add_definitions(-DQT_NO_KEYWORDS)

# ----------------- Core library -----------------
add_library(${MOVEIT_LIB_NAME}_core SHARED ${SOURCE_FILES_CORE} ${HEADERS})
set_target_properties(${MOVEIT_LIB_NAME}_core PROPERTIES VERSION "${${PROJECT_NAME}_VERSION}")
target_link_libraries(${MOVEIT_LIB_NAME}_core ${OpenCV_LIBS})
target_include_directories(${MOVEIT_LIB_NAME}_core PUBLIC
  $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
  $<INSTALL_INTERFACE:include>)

ament_target_dependencies(${MOVEIT_LIB_NAME}_core
  ${THIS_PACKAGE_INCLUDE_DEPENDS}
)

# ----------------- Plugin library -----------------
add_library(${MOVEIT_LIB_NAME} SHARED ${SOURCE_FILES_PLUGINS})
set_target_properties(${MOVEIT_LIB_NAME} PROPERTIES VERSION "${${PROJECT_NAME}_VERSION}")
target_link_libraries(${MOVEIT_LIB_NAME} ${MOVEIT_LIB_NAME}_core)
target_include_directories(${MOVEIT_LIB_NAME} PUBLIC
  $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
  $<INSTALL_INTERFACE:include>)

ament_target_dependencies(${MOVEIT_LIB_NAME}
  pluginlib
  ${THIS_PACKAGE_INCLUDE_DEPENDS}
)

install(DIRECTORY include/ DESTINATION include)
install(
  TARGETS ${MOVEIT_LIB_NAME}_core ${MOVEIT_LIB_NAME}
  EXPORT export_${PROJECT_NAME}
  ARCHIVE DESTINATION lib
  LIBRARY DESTINATION lib
  RUNTIME DESTINATION bin
)

ament_export_include_directories(include)
ament_export_libraries(${MOVEIT_LIB_NAME}_core ${MOVEIT_LIB_NAME})

2.4 Fix the header include extensions used by the RViz plugin

# 1. Enter the rviz plugin source directory
cd ~/your_workspace/src/moveit_calibration/moveit_calibration_gui/handeye_calibration_rviz_plugin

# 2. Batch-replace cv_bridge.hpp with cv_bridge.h
find . -type f -exec sed -i 's|cv_bridge/cv_bridge.hpp|cv_bridge/cv_bridge.h|g' {} +

# 3. Batch-replace pinhole_camera_model.hpp with pinhole_camera_model.h
find . -type f -exec sed -i 's|image_geometry/pinhole_camera_model.hpp|image_geometry/pinhole_camera_model.h|g' {} +

# 4. Batch-replace .hpp with .h for all headers under moveit/
find . -type f -exec sed -i 's|moveit/\(.*\)\.hpp|moveit/\1.h|g' {} +

# 5. Change the headers under planning_scene_rviz_plugin back to .hpp
find . -type f -exec sed -i 's|moveit/planning_scene_rviz_plugin/\(.*\)\.h>|moveit/planning_scene_rviz_plugin/\1.hpp>|g' {} +

# 6. Change the headers under rviz_plugin_render_tools back to .hpp as well (if any)
find . -type f -exec sed -i 's|moveit/rviz_plugin_render_tools/\(.*\)\.h>|moveit/rviz_plugin_render_tools/\1.hpp>|g' {} +
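Before running these over the whole source tree, it can help to dry-run the substitution patterns on sample include lines; the header paths below are just illustrative examples, not files the plugin necessarily includes:

```shell
# Dry-run the cv_bridge rename pattern on a sample include line
echo '#include <cv_bridge/cv_bridge.hpp>' \
  | sed 's|cv_bridge/cv_bridge.hpp|cv_bridge/cv_bridge.h|g'
# → #include <cv_bridge/cv_bridge.h>

# Dry-run the generic moveit/ pattern (backreference \1 keeps the inner path)
echo '#include <moveit/robot_state/robot_state.hpp>' \
  | sed 's|moveit/\(.*\)\.hpp|moveit/\1.h|g'
# → #include <moveit/robot_state/robot_state.h>
```

If the printed lines match the expected renames, the same patterns are safe to apply with `find ... -exec sed -i`.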

2.5 Rebuild cv_bridge from source against your OpenCV 4.10.0 to keep the memory layout/ABI consistent

Step 1: Download the source for the correct branch from the GitHub web UI

  1. Open https://github.com/ros-perception/vision_opencv in a browser.

  2. Key step: switch branches. In the upper-left corner of the page there is a dropdown button showing the current branch (it may default to rolling). Click it and select the humble branch (matching your ROS 2 Humble installation).

  3. Once the page refreshes and shows the humble branch, click the green "Code" button and choose "Download ZIP" from the menu.

Step 2: Unpack into your workspace

  1. Move the downloaded ZIP file (usually named vision_opencv-humble.zip) onto your Jetson board.

  2. Place it in the src directory of the workspace where the build error occurred, and unzip it there.

(Note: unzipping produces a folder named something like vision_opencv-humble. The branch suffix in the folder name does not matter: the ROS 2 build tool colcon identifies packages by reading each package.xml, not by folder name.)
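If git is available on the Jetson, the download-and-unpack steps above can instead be done in one step by cloning the humble branch directly into the workspace's src directory; the workspace path below is a placeholder to adjust:

```shell
# From the workspace's src directory: shallow-clone only the humble branch
cd ~/your_workspace/src   # adjust to your actual workspace path
git clone -b humble --depth 1 https://github.com/ros-perception/vision_opencv.git
```

A shallow clone (`--depth 1`) avoids pulling the full history, which saves time and disk space on an embedded board.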

Step 3: Rebuild cv_bridge

Return to the workspace root and rebuild cv_bridge on its own, passing the path of your custom OpenCV 4.10 installation as a CMake argument.
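A sketch of that build command: `--packages-select` and `--cmake-args` are standard colcon options and `OpenCV_DIR` is the standard CMake variable for locating a package config, but the install path shown is only an assumed example:

```shell
# Build only cv_bridge against the custom OpenCV build.
# The OpenCV_DIR value below is an assumed example path -- point it at the
# directory that actually contains your OpenCVConfig.cmake.
colcon build --packages-select cv_bridge \
  --cmake-args -DOpenCV_DIR=/usr/local/lib/cmake/opencv4
```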

Step 4: Refresh the environment

After the build succeeds, a very important final step is to refresh the current terminal's environment, so that the system prefers the cv_bridge just built against OpenCV 4.10 over the system-provided one:

source install/setup.bash