An example of implementing multi-robot pose graph optimization (MR-PGO) in GTSAM

To implement multi-robot pose graph optimization (MR-PGO) in GTSAM, the core idea breaks down into the following pieces:

  • Each robot has its own pose chain (odometry edges).
  • BetweenFactor<Pose3> connects relative observations between robots (inter-robot loop closures).
  • Add only one global anchor (prior) to remove the gauge freedom.
  • Use LM/Dogleg for one-shot batch optimization, or iSAM2 for incremental optimization.
  • Use robust kernels on inter-robot edges to guard against outliers.

Below we start from the simple two-robot case (A and B; the same scheme extends to N robots).


1. Key design

  1. Variable naming and key space
    Use Symbol('a', i), Symbol('b', j), ... to separate robots and avoid Key collisions:
```cpp
using gtsam::Symbol;
Symbol A(size_t k) { return Symbol('a', k); }
Symbol B(size_t k) { return Symbol('b', k); }
```
  2. Factor types
  • Odometry: BetweenFactor<Pose3>(X_k, X_{k+1}, T_k_k1, noise)
  • Inter-robot loop closure: BetweenFactor<Pose3>(A_i, B_j, T_aibj, noise)
  • Anchor: PriorFactor<Pose3>(A(0), Pose3::Identity(), small_noise)

Anchoring a single node is enough (A(0) is the usual choice); putting strong priors on several nodes over-constrains the graph (a minimal sketch of the softer alternative follows at the end of this section). More on this topic can be found in the other articles of this column.

  3. Robust kernels
    Inter-robot edges are more likely to be mismatched, so we recommend:
```cpp
auto base = noiseModel::Diagonal::Sigmas(sigmas6); // 6x1
auto huber = noiseModel::Robust::Create(noiseModel::mEstimator::Huber::Create(c), base);
```
  4. Optimizer
  • Batch: LevenbergMarquardtOptimizer
  • Incremental: ISAM2 (online fusion, real time)
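If several nodes really must be pinned to the global frame (e.g., with GPS fixes), the sketch below keeps A(0) as the hard gauge anchor and attaches only a deliberately loose prior to B(0). The sigma values are illustrative, not from the original project; the Symbol helpers from item 1 and the includes/namespace of the example in section 2 are assumed.

```cpp
// A(0) stays the hard anchor that fixes the gauge; B(0) only gets a weak "hint".
auto hardPrior = noiseModel::Diagonal::Sigmas(
    (Vector6() << 1e-6, 1e-6, 1e-6, 1e-6, 1e-6, 1e-6).finished());
auto softPrior = noiseModel::Diagonal::Sigmas(
    (Vector6() << 0.5, 0.5, 0.5, 2.0, 2.0, 2.0).finished());  // ~0.5 rad / 2 m: a soft constraint

graph.add(PriorFactor<Pose3>(A(0), Pose3(), hardPrior));
graph.add(PriorFactor<Pose3>(B(0), Pose3(Rot3(), Point3(0.3, -0.2, 0.0)), softPrior));
```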

2. Runnable example (two robots + inter-robot loop closure)

```cpp
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/geometry/Rot3.h>
#include <gtsam/linear/NoiseModel.h>   // noiseModel::Diagonal, Robust, mEstimator
#include <gtsam/inference/Symbol.h>
#include <iostream>

using namespace gtsam;

inline Symbol A(size_t k){ return Symbol('a', k); }
inline Symbol B(size_t k){ return Symbol('b', k); }

int main() {
    NonlinearFactorGraph graph;

    // Noise models
    Vector6 odoSigmas; odoSigmas << 0.05,0.05,0.05, 0.02,0.02,0.02;   // [rx,ry,rz, tx,ty,tz]
    auto odoNoise = noiseModel::Diagonal::Sigmas(odoSigmas);

    Vector6 loopSigmas; loopSigmas << 0.1,0.1,0.1, 0.05,0.05,0.05;
    auto loopBase  = noiseModel::Diagonal::Sigmas(loopSigmas);
    auto loopNoise = noiseModel::Robust::Create(noiseModel::mEstimator::Huber::Create(1.0), loopBase);

    auto priorNoise = noiseModel::Diagonal::Sigmas((Vector6() << 1e-6,1e-6,1e-6, 1e-6,1e-6,1e-6).finished());

    // 1) Anchor: prior on A(0) only
    graph.add(PriorFactor<Pose3>(A(0), Pose3(), priorNoise));

    // 2) Robot A odometry
    graph.add(BetweenFactor<Pose3>(A(0), A(1), Pose3(Rot3::RzRyRx(0,0,0.02), Point3(1.0, 0.0, 0.0)), odoNoise));
    graph.add(BetweenFactor<Pose3>(A(1), A(2), Pose3(Rot3::RzRyRx(0,0,0.02), Point3(1.0, 0.0, 0.0)), odoNoise));

    // 3) Robot B odometry
    graph.add(BetweenFactor<Pose3>(B(0), B(1), Pose3(Rot3::RzRyRx(0,0,-0.01), Point3(1.0, 0.0, 0.0)), odoNoise));
    graph.add(BetweenFactor<Pose3>(B(1), B(2), Pose3(Rot3::RzRyRx(0,0,-0.01), Point3(1.0, 0.0, 0.0)), odoNoise));

    // 4) Inter-robot loop closure (e.g., A(2) observes B(1))
    Pose3 A2_to_B1(Rot3::RzRyRx(0.0, 0.0, 0.1), Point3(0.05, -0.10, 0.0));
    graph.add(BetweenFactor<Pose3>(A(2), B(1), A2_to_B1, loopNoise));

    // Initial values (e.g., from integrating each robot's own odometry)
    Values init;
    // Robot A initial values (accumulated yaw; note Rot3::Ypr takes yaw first)
    init.insert(A(0), Pose3(Rot3::Ypr(0,0,0),    Point3(0,0,0)));
    init.insert(A(1), Pose3(Rot3::Ypr(0.02,0,0), Point3(1.0, 0.0, 0)));
    init.insert(A(2), Pose3(Rot3::Ypr(0.04,0,0), Point3(2.0, 0.0, 0)));

    // Robot B initial values (relative pose w.r.t. A is unknown; use a rough guess)
    init.insert(B(0), Pose3(Rot3::Ypr(0.1,0,0),  Point3(0.3,  -0.2, 0)));
    init.insert(B(1), Pose3(Rot3::Ypr(0.09,0,0), Point3(1.25, -0.18, 0)));
    init.insert(B(2), Pose3(Rot3::Ypr(0.08,0,0), Point3(2.25, -0.16, 0)));

    // Batch optimization
    LevenbergMarquardtParams params;
    params.setVerbosity("ERROR");
    LevenbergMarquardtOptimizer lm(graph, init, params);
    Values result = lm.optimize();

    std::cout << "=== Optimized Poses ===\n";
    result.at<Pose3>(A(0)).print("A0: ");
    result.at<Pose3>(A(1)).print("A1: ");
    result.at<Pose3>(A(2)).print("A2: ");
    result.at<Pose3>(B(0)).print("B0: ");
    result.at<Pose3>(B(1)).print("B1: ");
    result.at<Pose3>(B(2)).print("B2: ");

    // (Optional) incremental mode: online fusion with iSAM2
    // ISAM2 isam; isam.update(graph, init); Values inc = isam.calculateEstimate();

    return 0;
}
```

3. Engineering notes (a pitfall checklist)

  • Anchor only one global pose

    The usual practice is to anchor A(0). Giving B(0) a strong prior as well makes the problem inconsistent (the global frame is defined twice). If several anchors are unavoidable (e.g., GPS), enlarge their noise appropriately or use soft constraints.

  • Use robust kernels on inter-robot edges

    Inter-robot matches (loop closures) are more likely to be wrong. Huber/Cauchy/Geman-McClure all work; Huber is the usual first choice.

  • Initial values matter

    For SE(3) optimization, a poor initial guess easily ends up in a local minimum. Two strategies (see the alignment sketch after this list):

    • Coarse alignment from external odometry/VO/NDT first;
    • RANSAC + ICP/TEASER++ for the inter-robot initial alignment, then run the graph optimization.
  • Multi-sensor extrinsics

    Each robot usually carries several sensors (LiDAR/IMU/camera). Extrinsics can be treated as constants inside the measurement prediction, or added to the graph as variables (with prior constraints) for joint calibration.

  • Timing and asynchrony

    Inter-robot observations need careful time alignment (or model the delay explicitly in the measurement model).

  • Scale and gravity (visual systems)

    Pure monocular vision is scale-ambiguous; in multi-robot fusion, recover scale from other sensors (IMU/odometry/GPS) or from inter-robot constraints.

  • Online incremental (iSAM2) tips (see the iSAM2 sketch after this list)

    • Whenever new factors and new nodes arrive, call isam.update(newGraph, newValues)
    • Periodically call result = isam.calculateEstimate()
    • To reject outliers with some delay, first screen candidates with a robust kernel + a statistical threshold (e.g., the Mahalanobis outlier check described earlier) before adding them to the main graph.
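For the initial-alignment point, here is a minimal sketch of seeding robot B's initial values from a rough inter-robot transform. T_A_B would come from an external RANSAC + ICP/TEASER++ step (outside GTSAM); the concrete numbers and the b_poses_local vector are illustrative, and the A()/B() key helpers and includes from section 2 (plus <vector>) are assumed.

```cpp
// Rough transform of robot B's frame expressed in robot A's (global) frame,
// e.g., the output of a RANSAC + ICP/TEASER++ alignment (values are placeholders).
Pose3 T_A_B(Rot3::Rz(0.1), Point3(0.3, -0.2, 0.0));

// Robot B's poses from integrating its own odometry, still in B's local frame.
std::vector<Pose3> b_poses_local = {
    Pose3(),
    Pose3(Rot3::Rz(-0.01), Point3(1.0, 0.0, 0.0)),
};

Values init;
for (size_t k = 0; k < b_poses_local.size(); ++k) {
  // Express each local pose in A's frame before using it as an initial value.
  init.insert(B(k), T_A_B.compose(b_poses_local[k]));
}
```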
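For the iSAM2 tips, a minimal sketch of the per-step update loop (again assuming the includes and the A() key helper from section 2; the odometry measurement is a placeholder for whatever the front end produces):

```cpp
ISAM2Params isamParams;
isamParams.relinearizeThreshold = 0.01;
ISAM2 isam(isamParams);

auto priorNoise = noiseModel::Diagonal::Sigmas(
    (Vector6() << 1e-6, 1e-6, 1e-6, 1e-6, 1e-6, 1e-6).finished());
Vector6 odoSigmas; odoSigmas << 0.05, 0.05, 0.05, 0.02, 0.02, 0.02;
auto odoNoise = noiseModel::Diagonal::Sigmas(odoSigmas);

// Step 0: anchor + first pose of robot A.
{
  NonlinearFactorGraph newFactors;
  Values newValues;
  newFactors.add(PriorFactor<Pose3>(A(0), Pose3(), priorNoise));
  newValues.insert(A(0), Pose3());
  isam.update(newFactors, newValues);
}

// Subsequent steps: only the new factors and initial estimates for the new keys.
const size_t numSteps = 2;
for (size_t k = 1; k <= numSteps; ++k) {
  Pose3 odom(Rot3::Rz(0.02), Point3(1.0, 0.0, 0.0));  // placeholder front-end measurement
  NonlinearFactorGraph newFactors;
  Values newValues;
  newFactors.add(BetweenFactor<Pose3>(A(k - 1), A(k), odom, odoNoise));
  newValues.insert(A(k), isam.calculateEstimate<Pose3>(A(k - 1)).compose(odom));
  // Inter-robot closures that survived the robust/Mahalanobis screening go into newFactors too.
  isam.update(newFactors, newValues);
}
Values current = isam.calculateEstimate();
```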

4. Extending to N robots

  • Key space: Symbol('a', i), Symbol('b', j), Symbol('c', k), ...
  • Wrap a small helper such as addRobotOdometry(graph, robot_id, seq, relPose, noise) (sketched below);
  • Inter-robot loop closures through a unified interface such as addInterRobotClosure(graph, rid1, idx1, rid2, idx2, T, noise);
  • A single anchoring strategy: anchor only the first frame of one robot (or use a global reference/GPS as a weak prior).
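One possible shape for those helpers, shown as a sketch (the robot id is passed as the symbol character; the gtsam namespace and the includes from section 2 are assumed):

```cpp
// Thin wrappers that keep per-robot keys and inter-robot closures uniform.
inline Symbol robotKey(char robot_id, size_t k) { return Symbol(robot_id, k); }

inline void addRobotOdometry(NonlinearFactorGraph& graph, char robot_id, size_t seq,
                             const Pose3& relPose, const SharedNoiseModel& noise) {
  // Odometry edge from pose `seq` to pose `seq + 1` of the same robot.
  graph.add(BetweenFactor<Pose3>(robotKey(robot_id, seq), robotKey(robot_id, seq + 1),
                                 relPose, noise));
}

inline void addInterRobotClosure(NonlinearFactorGraph& graph, char rid1, size_t idx1,
                                 char rid2, size_t idx2, const Pose3& T,
                                 const SharedNoiseModel& noise) {
  // Relative observation from robot rid1's pose idx1 to robot rid2's pose idx2.
  graph.add(BetweenFactor<Pose3>(robotKey(rid1, idx1), robotKey(rid2, idx2), T, noise));
}
```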

5. Hooking into residual outlier detection

The Mahalanobis residual check from earlier can be used directly in MR-PGO:

  • After each optimization round, iterate over the factors and find the inter-robot edges with 2*error(values) > chi2_{df,1-α};
  • Mark them as suspicious loop closures and defer adding them, down-weight them, or re-verify them (re-run the ICP registration, check descriptor consistency, etc.). A sketch of this loop follows below.
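A minimal sketch of that loop, assuming the graph and the optimization result from the example in section 2 and the boost::math chi-squared quantile used in the template below. For gtsam noise-model factors, error() returns half the squared Mahalanobis distance, hence the factor of 2; isInterRobotEdge() is a hypothetical helper that checks whether a factor's keys belong to two different robots (e.g., by comparing symbol characters).

```cpp
// After a first optimization pass, drop suspicious inter-robot edges and re-optimize.
NonlinearFactorGraph filtered;
for (size_t i = 0; i < graph.size(); ++i) {
  const auto& f = graph[i];
  if (!f) continue;
  double chi2 = 2.0 * f->error(result);             // error() = 0.5 * r^T * Sigma^{-1} * r
  boost::math::chi_squared dist(static_cast<double>(f->dim()));
  double thr = boost::math::quantile(dist, 0.95);   // df = factor dimension, alpha = 0.05
  bool suspicious = isInterRobotEdge(f) && chi2 > thr;
  if (!suspicious) filtered.push_back(f);           // alternatively keep it under a heavier robust kernel
}
Values refined = LevenbergMarquardtOptimizer(filtered, result).optimize();
```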

6. Extended full code example

Below is a template project that can be built directly with CMake. It contains:

  • Batch optimization (Levenberg-Marquardt) and online incremental (iSAM2) examples
  • Switchable robust kernels for inter-robot edges (Huber/Cauchy)
  • Per-factor Mahalanobis residual outlier detection with CSV export
  • PCL / TEASER++ loop-closure hooks (placeholder integration of point-cloud registration as loop verification)

Note: the project aims for minimal dependencies and a clear structure; it provides compilable core code plus placeholder hooks (TEASER++ must be installed separately).


Directory structure

```text
gtsam_mr_pgo_template/
├─ CMakeLists.txt
├─ README.md
├─ include/
│  ├─ mr_pgo/mr_pgo.h
│  ├─ mr_pgo/outlier_check.h
│  └─ mr_pgo/csv_util.h
├─ src/
│  ├─ main.cpp
│  ├─ mr_pgo/mr_pgo.cpp
│  ├─ mr_pgo/outlier_check.cpp
│  └─ mr_pgo/csv_util.cpp
└─ examples/
   └─ run_example.sh
```

CMakeLists.txt

```cmake
cmake_minimum_required(VERSION 3.12) # add_compile_definitions requires CMake >= 3.12
project(gtsam_mr_pgo_template LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# Options
option(USE_TEASER "Enable TEASER++ integration (must be installed)" OFF)
option(USE_PCL "Enable PCL integration (must be installed)" OFF)

find_package(Eigen3 REQUIRED)
find_package(Boost REQUIRED COMPONENTS program_options)
find_package(GTSAM REQUIRED) # expect GTSAM config (gtsam-config.cmake)

if(USE_PCL)
  find_package(PCL REQUIRED)
endif()

if(USE_TEASER)
  find_package(teaserpp REQUIRED) # depends on TEASER++ providing a config package
endif()

include_directories(${GTSAM_INCLUDE_DIRS} ${EIGEN3_INCLUDE_DIRS} ${Boost_INCLUDE_DIRS})
add_compile_definitions($<$<CONFIG:Debug>:DEBUG>)

add_executable(mr_pgo
  src/main.cpp
  src/mr_pgo/mr_pgo.cpp
  src/mr_pgo/outlier_check.cpp
  src/mr_pgo/csv_util.cpp
)

target_include_directories(mr_pgo PRIVATE include)

# Link libraries
target_link_libraries(mr_pgo PRIVATE gtsam Eigen3::Eigen Boost::program_options) # GTSAM exports the imported target `gtsam`
if(USE_PCL)
  target_link_libraries(mr_pgo PRIVATE ${PCL_LIBRARIES})
endif()
if(USE_TEASER)
  target_link_libraries(mr_pgo PRIVATE teaserpp::teaser) # adjust according to TEASER++ export
endif()

# install
install(TARGETS mr_pgo RUNTIME DESTINATION bin)
```

include/mr_pgo/mr_pgo.h

```cpp
#pragma once

#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <vector>

namespace mr_pgo {

using gtsam::Symbol;
using gtsam::Pose3;
using gtsam::NonlinearFactorGraph;
using gtsam::Values;

struct MRPGOOptions {
    double huber_k = 1.0;   // Huber parameter for robust kernel
    bool use_robust = true;
    bool use_isam2 = false;
    double prior_small = 1e-6;
    double outlier_alpha = 0.05; // significance level for chi2
};

// Build a demo multi-robot graph (two robots) - helper to construct graph + initial values
void buildDemoGraph(NonlinearFactorGraph& graph, Values& init);

// Run batch optimization (LM) and return optimized values
Values runBatchOptimize(const NonlinearFactorGraph& graph, const Values& init);

// Run incremental optimization using iSAM2 (applies all factors incrementally)
Values runISAM2(const NonlinearFactorGraph& graph, const Values& init);

} // namespace mr_pgo
```
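MRPGOOptions above carries the robust-kernel settings advertised in the feature list. A helper like the following could turn those options into a gtsam noise model for inter-robot edges; it is a sketch, not one of the listed files, and the use_cauchy flag is a hypothetical addition for switching between Huber and Cauchy.

```cpp
#include <gtsam/linear/NoiseModel.h>

namespace mr_pgo {

inline gtsam::SharedNoiseModel makeLoopNoise(const MRPGOOptions& opt,
                                             const gtsam::Vector6& sigmas,
                                             bool use_cauchy = false) {
  using namespace gtsam;
  auto base = noiseModel::Diagonal::Sigmas(sigmas);
  if (!opt.use_robust) return base;
  // opt.huber_k is reused as the kernel parameter for either estimator.
  noiseModel::mEstimator::Base::shared_ptr kernel;
  if (use_cauchy)
    kernel = noiseModel::mEstimator::Cauchy::Create(opt.huber_k);
  else
    kernel = noiseModel::mEstimator::Huber::Create(opt.huber_k);
  return noiseModel::Robust::Create(kernel, base);
}

} // namespace mr_pgo
```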

include/mr_pgo/outlier_check.h

```cpp
#pragma once

#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <string>
#include <vector>

namespace mr_pgo {

struct FactorResidualReport {
    size_t index;
    std::string type_name;
    std::vector<gtsam::Key> keys;
    size_t dof;
    double chi2;
    double mahalanobis;
    double chi2_threshold;
    bool is_outlier;
};

std::vector<FactorResidualReport> analyzeFactorGraphMahalanobis(
    const gtsam::NonlinearFactorGraph& graph,
    const gtsam::Values& values,
    double alpha = 0.05);

// Export reports to CSV (path)
void exportReportsCSV(const std::string& path,
                      const std::vector<FactorResidualReport>& reports);

} // namespace mr_pgo
```

include/mr_pgo/csv_util.h

```cpp
#pragma once
#include <string>
#include <vector>

namespace mr_pgo {

bool writeCSV(const std::string& path, const std::vector<std::vector<std::string>>& rows);

} // namespace mr_pgo
```

src/mr_pgo/mr_pgo.cpp

```cpp
#include "mr_pgo/mr_pgo.h"
#include <gtsam/slam/PriorFactor.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/linear/NoiseModel.h>   // noiseModel::Diagonal, Robust, mEstimator
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/ISAM2Params.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/geometry/Rot3.h>
#include <gtsam/inference/Symbol.h>

using namespace gtsam;
using namespace mr_pgo;

namespace mr_pgo {

inline Symbol A(size_t k) { return Symbol('a', k); }
inline Symbol B(size_t k) { return Symbol('b', k); }

void buildDemoGraph(NonlinearFactorGraph& graph, Values& init) {
    Vector6 odoSigmas; odoSigmas << 0.05,0.05,0.05, 0.02,0.02,0.02;
    auto odoNoise = noiseModel::Diagonal::Sigmas(odoSigmas);
    Vector6 loopSigmas; loopSigmas << 0.1,0.1,0.1, 0.05,0.05,0.05;
    auto loopNoise = noiseModel::Diagonal::Sigmas(loopSigmas);
    auto priorNoise = noiseModel::Diagonal::Sigmas((Vector6() << 1e-6,1e-6,1e-6,1e-6,1e-6,1e-6).finished());

    // Prior on A0
    graph.add(PriorFactor<Pose3>(A(0), Pose3(), priorNoise));

    // A odometry
    graph.add(BetweenFactor<Pose3>(A(0), A(1), Pose3(Rot3::RzRyRx(0,0,0.02), Point3(1.0, 0.0, 0.0)), odoNoise));
    graph.add(BetweenFactor<Pose3>(A(1), A(2), Pose3(Rot3::RzRyRx(0,0,0.02), Point3(1.0, 0.0, 0.0)), odoNoise));

    // B odometry
    graph.add(BetweenFactor<Pose3>(B(0), B(1), Pose3(Rot3::RzRyRx(0,0,-0.01), Point3(1.0, 0.0, 0.0)), odoNoise));
    graph.add(BetweenFactor<Pose3>(B(1), B(2), Pose3(Rot3::RzRyRx(0,0,-0.01), Point3(1.0, 0.0, 0.0)), odoNoise));

    // cross robot closure (A2 -> B1)
    Pose3 A2_to_B1(Rot3::RzRyRx(0.0, 0.0, 0.1), Point3(0.05, -0.10, 0.0));
    graph.add(BetweenFactor<Pose3>(A(2), B(1), A2_to_B1, loopNoise));

    // initial values
    init.insert(A(0), Pose3(Rot3::Ypr(0,0,0), Point3(0,0,0)));
    init.insert(A(1), Pose3(Rot3::Ypr(0.02,0,0), Point3(1.0, 0.0, 0)));
    init.insert(A(2), Pose3(Rot3::Ypr(0.04,0,0), Point3(2.0, 0.0, 0)));

    init.insert(B(0), Pose3(Rot3::Ypr(0.1,0,0), Point3(0.3, -0.2, 0)));
    init.insert(B(1), Pose3(Rot3::Ypr(0.09,0,0), Point3(1.25, -0.18, 0)));
    init.insert(B(2), Pose3(Rot3::Ypr(0.08,0,0), Point3(2.25, -0.16, 0)));
}

Values runBatchOptimize(const NonlinearFactorGraph& graph, const Values& init) {
    LevenbergMarquardtParams params;
    params.setVerbosity("ERROR");
    LevenbergMarquardtOptimizer opt(graph, init, params);
    return opt.optimize();
}

Values runISAM2(const NonlinearFactorGraph& graph, const Values& init) {
    ISAM2Params params;
    params.relinearizeThreshold = 0.01;
    params.relinearizeSkip = 1;
    gtsam::ISAM2 isam(params);

    // In a real online system you would call isam.update() repeatedly, passing only the
    // new factors plus initial estimates for the new keys (see the sketch in section 3).
    // For this demo we simply feed the whole graph and all initial values in one update.
    isam.update(graph, init);
    return isam.calculateEstimate();
}

} // namespace mr_pgo
```

src/mr_pgo/outlier_check.cpp

```cpp
#include "mr_pgo/outlier_check.h"
#include "mr_pgo/csv_util.h"
#include <boost/math/distributions/chi_squared.hpp>
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <memory>
#include <typeinfo>
#include <iostream>
#if defined(__GNUG__)
#include <cxxabi.h>
#endif

namespace mr_pgo {

#if defined(__GNUG__)
inline std::string demangle(const char* name) {
    int status = 0;
    std::unique_ptr<char, void(*)(void*)> res{
        abi::__cxa_demangle(name, nullptr, nullptr, &status), std::free};
    return (status == 0 && res) ? std::string(res.get()) : std::string(name);
}
#else
inline std::string demangle(const char* name) { return std::string(name); }
#endif

std::vector<FactorResidualReport> analyzeFactorGraphMahalanobis(
    const gtsam::NonlinearFactorGraph& graph,
    const gtsam::Values& values,
    double alpha) {

    std::vector<FactorResidualReport> reports;
    for (size_t i = 0; i < graph.size(); ++i) {
        auto f = graph[i];
        if (!f) continue;
        if (!f->active(values)) continue;
        size_t dof = f->dim();
        if (dof == 0) continue;
        double e = f->error(values);
        double chi2 = 2.0 * e;
        double maha = std::sqrt(std::max(chi2, 0.0));
        boost::math::chi_squared chi_dist(static_cast<double>(dof));
        double thr = boost::math::quantile(chi_dist, 1.0 - alpha);
        std::string tname = demangle(typeid(*f).name());
        std::vector<gtsam::Key> keys(f->keys().begin(), f->keys().end());
        reports.push_back({i, tname, keys, dof, chi2, maha, thr, chi2 > thr});
    }
    return reports;
}

void exportReportsCSV(const std::string& path,
                      const std::vector<FactorResidualReport>& reports) {
    std::vector<std::vector<std::string>> rows;
    rows.push_back({"index","type","dof","chi2","mahalanobis","threshold","is_outlier","keys"});
    for (const auto& r : reports) {
        std::string keystr;
        for (size_t i = 0; i < r.keys.size(); ++i) {
            keystr += std::to_string(r.keys[i]);
            if (i + 1 < r.keys.size()) keystr += ";";
        }
        rows.push_back({
            std::to_string(r.index), r.type_name, std::to_string(r.dof),
            std::to_string(r.chi2), std::to_string(r.mahalanobis),
            std::to_string(r.chi2_threshold), r.is_outlier ? "1" : "0", keystr
        });
    }
    if (!writeCSV(path, rows)) {
        std::cerr << "Failed to write CSV: " << path << std::endl;
    }
}

} // namespace mr_pgo
```

src/mr_pgo/csv_util.cpp

```cpp
#include "mr_pgo/csv_util.h"
#include <fstream>

namespace mr_pgo {

bool writeCSV(const std::string& path, const std::vector<std::vector<std::string>>& rows) {
    std::ofstream ofs(path);
    if (!ofs) return false;
    for (const auto& row : rows) {
        for (size_t i = 0; i < row.size(); ++i) {
            ofs << row[i];
            if (i + 1 < row.size()) ofs << ",";
        }
        ofs << '\n';
    }
    ofs.close();
    return true;
}

} // namespace mr_pgo
```

src/main.cpp

```cpp
#include <iostream>
#include "mr_pgo/mr_pgo.h"
#include "mr_pgo/outlier_check.h"

using namespace mr_pgo;
using namespace gtsam;

int main(int argc, char** argv) {
    NonlinearFactorGraph graph;
    Values init;

    // build demo graph
    buildDemoGraph(graph, init);

    // batch optimize
    Values result = runBatchOptimize(graph, init);

    // analyze residuals
    double alpha = 0.05;
    auto reports = analyzeFactorGraphMahalanobis(graph, result, alpha);

    // print simple summary
    size_t outliers = 0;
    for (const auto& r : reports) if (r.is_outlier) ++outliers;
    std::cout << "Active factors: " << reports.size() << "  Outliers: " << outliers << std::endl;

    // export CSV
    exportReportsCSV("factor_residuals.csv", reports);
    std::cout << "Exported factor_residuals.csv" << std::endl;

    // Optional: demonstrate isam2
    Values isam_est = runISAM2(graph, init);
    (void)isam_est;

    // Placeholder: TEASER++ or PCL based verification hooks can be added here

    return 0;
}
```