Installing and Running DynaSLAM on Ubuntu 20.04

Contents

I. Installing Anaconda

II. Installing Dependencies

1. Installing Boost

2. Installing Eigen 3

3. Installing OpenCV

4. Installing Pangolin

III. Setting Up the Mask_RCNN Environment

IV. Building DynaSLAM

V. Running DynaSLAM


I. Installing Anaconda

Open the Anaconda installer archive (the directory listing at https://repo.anaconda.com/archive/) and download the installer that matches your system. The one used here is Anaconda3-2024.02-1-Linux-x86_64.sh.

Open a terminal in the download directory and run the installer:
bash Anaconda3-2024.02-1-Linux-x86_64.sh

The installer then walks you through a series of prompts:

Keep pressing Enter (or press q to skip ahead) through the license text until "Do you accept the license terms" appears, then type "yes".

The default install location is fine; press ENTER to confirm it.

When asked whether to run conda init, type "yes".

When the closing message appears, Anaconda has been installed successfully.

After installation, reload your shell configuration (or simply open a new terminal):
source ~/.bashrc 

Your prompt should now carry a (base) prefix.

Note: if you prefer that conda's base environment not auto-activate in new terminals, set auto_activate_base to false:
conda config --set auto_activate_base false

To enter the base environment later, activate it explicitly:
conda activate base

II. Installing Dependencies

1. Installing Boost

Download Boost 1.71 from:

https://archives.boost.io/release/1.71.0/source/boost_1_71_0.tar.gz

Extract the source under your home directory, open a terminal in the boost directory, and run the following commands in order, then wait for the install to finish:
sudo ./bootstrap.sh
sudo ./b2 install
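As a quick sanity check (a minimal sketch, assuming the default install prefix of /usr/local), you can print the version macro from the installed headers:

```shell
# Print the installed Boost version; ./b2 install defaults to /usr/local
BOOST_HDR=/usr/local/include/boost/version.hpp
if [ -f "$BOOST_HDR" ]; then
    grep "BOOST_LIB_VERSION" "$BOOST_HDR"
else
    echo "Boost headers not found under /usr/local/include"
fi
```

For Boost 1.71.0 this should print a line containing BOOST_LIB_VERSION "1_71".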

2. Installing Eigen 3

Open a terminal in your home directory and clone the Eigen 3 source:
git clone https://github.com/eigenteam/eigen-git-mirror

Then build and install it:
cd eigen-git-mirror
mkdir build && cd build
cmake ..
sudo make install
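Eigen is header-only, so the install step just copies headers (to /usr/local/include/eigen3 under the default prefix). A sketch of a version check, assuming that prefix:

```shell
# Print Eigen's version macros from the installed headers
EIGEN_MACROS=/usr/local/include/eigen3/Eigen/src/Core/util/Macros.h
if [ -f "$EIGEN_MACROS" ]; then
    grep -E "#define EIGEN_(WORLD|MAJOR|MINOR)_VERSION" "$EIGEN_MACROS"
else
    echo "Eigen headers not found under /usr/local/include/eigen3"
fi
```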

3. Installing OpenCV

The OpenCV version must match exactly; otherwise the later builds will fail with all sorts of errors. Download OpenCV 3.4.5 from:

https://github.com/opencv/opencv/archive/refs/tags/3.4.5.tar.gz

Install the required dependencies:
sudo apt-get install build-essential libgtk2.0-dev libjpeg-dev  libtiff5-dev libopenexr-dev libtbb-dev

sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libgtk-3-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev pkg-config

Open a terminal in the opencv directory, then build and install:
mkdir build && cd build
cmake ..
sudo make -j4
sudo make install

Next, edit /etc/ld.so.conf:
sudo gedit /etc/ld.so.conf

Add the line below to the opened file. /usr/local is OpenCV's default install prefix, so this tells the dynamic linker to search that lib directory for the OpenCV libraries from now on.
include /usr/local/lib

Run the following so the new configuration takes effect:
sudo ldconfig

Then edit /etc/bash.bashrc:
sudo gedit /etc/bash.bashrc 

Append the following at the end of the file:
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH

Source the file so the change takes effect:
source /etc/bash.bashrc

Check the installed OpenCV version with:
pkg-config opencv --modversion

If the version is printed, the installation succeeded.
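As an optional end-to-end check, you can compile a one-line program against the flags pkg-config reports (a minimal sketch; cvcheck.cpp is a throwaway name, and the command assumes pkg-config now resolves opencv):

```shell
# Build and run a minimal program that prints the OpenCV version
cat > cvcheck.cpp <<'EOF'
#include <opencv2/core.hpp>
#include <iostream>
int main() { std::cout << CV_VERSION << std::endl; return 0; }
EOF
g++ cvcheck.cpp -o cvcheck $(pkg-config opencv --cflags --libs) \
    && ./cvcheck \
    || echo "build failed - recheck the pkg-config configuration"
```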

4. Installing Pangolin

Pangolin must be v0.5; other versions cause build errors. I had installed Pangolin before, so the existing version needs to be removed first.

(1) Remove the installed libraries and headers
cd Pangolin/build
make clean
sudo make uninstall

(2) Remove the source tree
cd ../..
sudo rm -r Pangolin

(3) Remove leftover installed folders
sudo updatedb
locate pangolin
sudo rm -r /usr/local/include/pangolin

Download the Pangolin v0.5 source from:

https://github.com/stevenlovegrove/Pangolin/archive/refs/tags/v0.5.tar.gz

Extract it into your home directory, open a terminal in the Pangolin folder, and run:
mkdir build && cd build
cmake ..
sudo make -j4
sudo make install

The build fails against newer FFmpeg versions, so a few files need patching first.

In /Pangolin/CMakeModules/FindFFMPEG.cmake, replace lines 63-64:
        sizeof(AVFormatContext::max_analyze_duration2);
      }" HAVE_FFMPEG_MAX_ANALYZE_DURATION2

with:
        sizeof(AVFormatContext::max_analyze_duration);
      }" HAVE_FFMPEG_MAX_ANALYZE_DURATION

In /Pangolin/src/video/drivers/ffmpeg.cpp, add the following above namespace pangolin (around line 37):
#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER

Change lines 78-79:
TEST_PIX_FMT_RETURN(XVMC_MPEG2_MC); 
TEST_PIX_FMT_RETURN(XVMC_MPEG2_IDCT);

to:
#ifdef FF_API_XVMC
    TEST_PIX_FMT_RETURN(XVMC_MPEG2_MC);
    TEST_PIX_FMT_RETURN(XVMC_MPEG2_IDCT);
#endif

Change lines 101-105:
    TEST_PIX_FMT_RETURN(VDPAU_H264);
    TEST_PIX_FMT_RETURN(VDPAU_MPEG1);
    TEST_PIX_FMT_RETURN(VDPAU_MPEG2);
    TEST_PIX_FMT_RETURN(VDPAU_WMV3);
    TEST_PIX_FMT_RETURN(VDPAU_VC1);

to:
#ifdef FF_API_VDPAU
    TEST_PIX_FMT_RETURN(VDPAU_H264);
    TEST_PIX_FMT_RETURN(VDPAU_MPEG1);
    TEST_PIX_FMT_RETURN(VDPAU_MPEG2);
    TEST_PIX_FMT_RETURN(VDPAU_WMV3);
    TEST_PIX_FMT_RETURN(VDPAU_VC1);
#endif

Change line 127:
	TEST_PIX_FMT_RETURN(VDPAU_MPEG4);

to:
#ifdef FF_API_VDPAU
    TEST_PIX_FMT_RETURN(VDPAU_MPEG4);
#endif

Finally, add the following at the top of Pangolin/include/pangolin/video/drivers/ffmpeg.h:
#define AV_CODEC_FLAG_GLOBAL_HEADER (1 << 22)
#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER
#define AVFMT_RAWPICTURE 0x0020

After making these changes, rerun the build and install steps above.

III. Setting Up the Mask_RCNN Environment

Configure everything inside an Anaconda virtual environment, running the following commands in order:

# Create a virtual environment
conda create -n MaskRCNN python=2.7
conda activate MaskRCNN
# This step can fail; retry it a few times
pip install tensorflow==1.14.0
pip install keras==2.0.9
pip install scikit-image
pip install pycocotools

If the pip install of pycocotools fails, install it from conda-forge instead:
conda install -c conda-forge pycocotools
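Once everything installs, a quick smoke test (run inside the MaskRCNN environment) is to import all four packages at once; whichever import fails is the package to reinstall:

```shell
# All four imports must succeed for the Mask R-CNN scripts to run
IMPORT_CHECK='import tensorflow, keras, skimage, pycocotools; print("deps OK")'
python -c "$IMPORT_CHECK" || echo "an import failed - recheck the installs above"
```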

Next, download DynaSLAM so the environment can be tested against it. Clone the source:

git clone https://github.com/BertaBescos/DynaSLAM.git

Download the mask_rcnn_coco.h5 weights file from:

https://github.com/matterport/Mask_RCNN/releases/download/v1.0/mask_rcnn_coco.h5

Then place mask_rcnn_coco.h5 in the DynaSLAM/src/python/ folder.

Open a terminal in the DynaSLAM directory and run (inside the MaskRCNN environment created above):
python src/python/Check.py

If the output reports:

Mask R-CNN is correctly working

then the Mask_RCNN environment is configured correctly.

IV. Building DynaSLAM

Parts of the code need to be modified before building.

(1) DynaSLAM/CMakeLists.txt

Change the C++ standard from C++11 to C++14, then apply the edits below:
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall   -O3 ")
# set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 -march=native ")
# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall   -O3 -march=native")
......................
#find_package(OpenCV 2.4.11 QUIET)
#if(NOT OpenCV_FOUND)
#    message("OpenCV > 2.4.11 not found.")
#    find_package(OpenCV 3.0 QUIET)
#    if(NOT OpenCV_FOUND)
#        message(FATAL_ERROR "OpenCV > 3.0 not found.")
#    endif()
#endif()

find_package(OpenCV 3.4 QUIET)
if(NOT OpenCV_FOUND)
    find_package(OpenCV 2.4 QUIET)
    if(NOT OpenCV_FOUND)
        message(FATAL_ERROR "OpenCV > 2.4.x not found.")
    endif()
endif()
......................
set(Python_ADDITIONAL_VERSIONS "2.7")
#This is to avoid detecting python 3
find_package(PythonLibs 2.7 EXACT REQUIRED)
if (NOT PythonLibs_FOUND)
    message(FATAL_ERROR "PYTHON LIBS not found.")
else()
    message("PYTHON LIBS were found!")
    message("PYTHON LIBS DIRECTORY: " ${PYTHON_LIBRARY} ${PYTHON_INCLUDE_DIRS})
endif()
......................
#find_package(Eigen3 3.1.0 REQUIRED)
find_package(Eigen3 3 REQUIRED)
......................
# add_executable(mono_carla
# Examples/Monocular/mono_carla.cc)
# target_link_libraries(mono_carla ${PROJECT_NAME})

(2) DynaSLAM/Thirdparty/DBoW2/CMakeLists.txt
#set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 -march=native ")
#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall  -O3 -march=native")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall  -O3 ")
......................
# find_package(OpenCV 3.0 QUIET)
find_package(OpenCV 3.4 QUIET)

(3) DynaSLAM/Thirdparty/g2o/CMakeLists.txt

#SET(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -O3 -march=native") 
#SET(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -O3 -march=native")
SET(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -O3 ") 
SET(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -O3 ")
......................
#FIND_PACKAGE(Eigen3 3.1.0 REQUIRED)
FIND_PACKAGE(Eigen3 3 REQUIRED)

(4) DynaSLAM/include/Conversion.h

// cv::Mat toMat(const PyObject* o);
   cv::Mat toMat(PyObject* o);

(5) DynaSLAM/src/Conversion.cc (replace the file contents with the following):

/**
 * This file is part of DynaSLAM.
 * Copyright (C) 2018 Berta Bescos <bbescos at unizar dot es> (University of Zaragoza)
 * For more information see <https://github.com/bertabescos/DynaSLAM>.
 *
 */

#include "Conversion.h"
#include <iostream>

namespace DynaSLAM
{

    static void init()
    {
        import_array();
    }

    static int failmsg(const char *fmt, ...)
    {
        char str[1000];

        va_list ap;
        va_start(ap, fmt);
        vsnprintf(str, sizeof(str), fmt, ap);
        va_end(ap);

        PyErr_SetString(PyExc_TypeError, str);
        return 0;
    }

    class PyAllowThreads
    {
    public:
        PyAllowThreads() : _state(PyEval_SaveThread()) {}
        ~PyAllowThreads()
        {
            PyEval_RestoreThread(_state);
        }

    private:
        PyThreadState *_state;
    };

    class PyEnsureGIL
    {
    public:
        PyEnsureGIL() : _state(PyGILState_Ensure()) {}
        ~PyEnsureGIL()
        {
            // std::cout << "releasing"<< std::endl;
            PyGILState_Release(_state);
        }

    private:
        PyGILState_STATE _state;
    };

    using namespace cv;

    static PyObject *failmsgp(const char *fmt, ...)
    {
        char str[1000];

        va_list ap;
        va_start(ap, fmt);
        vsnprintf(str, sizeof(str), fmt, ap);
        va_end(ap);

        PyErr_SetString(PyExc_TypeError, str);
        return 0;
    }

    class NumpyAllocator : public MatAllocator
    {
    public:
#if (CV_MAJOR_VERSION < 3)
        NumpyAllocator()
        {
        }
        ~NumpyAllocator() {}

        void allocate(int dims, const int *sizes, int type, int *&refcount,
                      uchar *&datastart, uchar *&data, size_t *step)
        {

            // PyEnsureGIL gil;

            int depth = CV_MAT_DEPTH(type);
            int cn = CV_MAT_CN(type);

            const int f = (int)(sizeof(size_t) / 8);
            int typenum = depth == CV_8U ? NPY_UBYTE : depth == CV_8S ? NPY_BYTE
                                                   : depth == CV_16U  ? NPY_USHORT
                                                   : depth == CV_16S  ? NPY_SHORT
                                                   : depth == CV_32S  ? NPY_INT
                                                   : depth == CV_32F  ? NPY_FLOAT
                                                   : depth == CV_64F  ? NPY_DOUBLE
                                                                      : f * NPY_ULONGLONG + (f ^ 1) * NPY_UINT;
            int i;

            npy_intp _sizes[CV_MAX_DIM + 1];
            for (i = 0; i < dims; i++)
            {
                _sizes[i] = sizes[i];
            }

            if (cn > 1)
            {
                _sizes[dims++] = cn;
            }
            PyObject *o = PyArray_SimpleNew(dims, _sizes, typenum);
            if (!o)
            {

                CV_Error_(CV_StsError, ("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims));
            }
            refcount = refcountFromPyObject(o);

            npy_intp *_strides = PyArray_STRIDES(o);
            for (i = 0; i < dims - (cn > 1); i++)
                step[i] = (size_t)_strides[i];

            datastart = data = (uchar *)PyArray_DATA(o);
        }

        void deallocate(int *refcount, uchar *, uchar *)
        {
            // PyEnsureGIL gil;
            if (!refcount)
                return;
            PyObject *o = pyObjectFromRefcount(refcount);
            Py_INCREF(o);
            Py_DECREF(o);
        }
#else

        NumpyAllocator()
        {
            stdAllocator = Mat::getStdAllocator();
        }
        ~NumpyAllocator()
        {
        }

        UMatData *allocate(PyObject *o, int dims, const int *sizes, int type,
                           size_t *step) const
        {
            UMatData *u = new UMatData(this);
            u->data = u->origdata = (uchar *)PyArray_DATA((PyArrayObject *)o);
            npy_intp *_strides = PyArray_STRIDES((PyArrayObject *)o);
            for (int i = 0; i < dims - 1; i++)
                step[i] = (size_t)_strides[i];
            step[dims - 1] = CV_ELEM_SIZE(type);
            u->size = sizes[0] * step[0];
            u->userdata = o;
            return u;
        }

        UMatData *allocate(int dims0, const int *sizes, int type, void *data,
                           size_t *step, int flags, UMatUsageFlags usageFlags) const
        {
            if (data != 0)
            {
                CV_Error(Error::StsAssert, "The data should normally be NULL!");
                // probably this is safe to do in such extreme case
                return stdAllocator->allocate(dims0, sizes, type, data, step, flags,
                                              usageFlags);
            }
            PyEnsureGIL gil;

            int depth = CV_MAT_DEPTH(type);
            int cn = CV_MAT_CN(type);
            const int f = (int)(sizeof(size_t) / 8);
            int typenum =
                depth == CV_8U ? NPY_UBYTE : depth == CV_8S ? NPY_BYTE
                                         : depth == CV_16U  ? NPY_USHORT
                                         : depth == CV_16S  ? NPY_SHORT
                                         : depth == CV_32S  ? NPY_INT
                                         : depth == CV_32F  ? NPY_FLOAT
                                         : depth == CV_64F  ? NPY_DOUBLE
                                                            : f * NPY_ULONGLONG + (f ^ 1) * NPY_UINT;
            int i, dims = dims0;
            cv::AutoBuffer<npy_intp> _sizes(dims + 1);
            for (i = 0; i < dims; i++)
                _sizes[i] = sizes[i];
            if (cn > 1)
                _sizes[dims++] = cn;
            PyObject *o = PyArray_SimpleNew(dims, _sizes, typenum);
            if (!o)
                CV_Error_(Error::StsError,
                          ("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims));
            return allocate(o, dims0, sizes, type, step);
        }

        bool allocate(UMatData *u, int accessFlags,
                      UMatUsageFlags usageFlags) const
        {
            return stdAllocator->allocate(u, accessFlags, usageFlags);
        }

        void deallocate(UMatData *u) const
        {
            if (u)
            {
                PyEnsureGIL gil;
                PyObject *o = (PyObject *)u->userdata;
                Py_XDECREF(o);
                delete u;
            }
        }

        const MatAllocator *stdAllocator;
#endif
    };

    NumpyAllocator g_numpyAllocator;

    NDArrayConverter::NDArrayConverter() { init(); }

    void NDArrayConverter::init()
    {
        import_array();
    }

    cv::Mat NDArrayConverter::toMat(PyObject *o)
    {
        cv::Mat m;

        if (!o || o == Py_None)
        {
            if (!m.data)
                m.allocator = &g_numpyAllocator;
        }

        if (!PyArray_Check(o))
        {
            failmsg("toMat: Object is not a numpy array");
        }

        int typenum = PyArray_TYPE(o);
        int type = typenum == NPY_UBYTE ? CV_8U : typenum == NPY_BYTE                     ? CV_8S
                                              : typenum == NPY_USHORT                     ? CV_16U
                                              : typenum == NPY_SHORT                      ? CV_16S
                                              : typenum == NPY_INT || typenum == NPY_LONG ? CV_32S
                                              : typenum == NPY_FLOAT                      ? CV_32F
                                              : typenum == NPY_DOUBLE                     ? CV_64F
                                                                                          : -1;

        if (type < 0)
        {
            failmsg("toMat: Data type = %d is not supported", typenum);
        }

        int ndims = PyArray_NDIM(o);

        if (ndims >= CV_MAX_DIM)
        {
            failmsg("toMat: Dimensionality (=%d) is too high", ndims);
        }

        int size[CV_MAX_DIM + 1];
        size_t step[CV_MAX_DIM + 1], elemsize = CV_ELEM_SIZE1(type);
        const npy_intp *_sizes = PyArray_DIMS(o);
        const npy_intp *_strides = PyArray_STRIDES(o);
        bool transposed = false;

        for (int i = 0; i < ndims; i++)
        {
            size[i] = (int)_sizes[i];
            step[i] = (size_t)_strides[i];
        }

        if (ndims == 0 || step[ndims - 1] > elemsize)
        {
            size[ndims] = 1;
            step[ndims] = elemsize;
            ndims++;
        }

        if (ndims >= 2 && step[0] < step[1])
        {
            std::swap(size[0], size[1]);
            std::swap(step[0], step[1]);
            transposed = true;
        }

        if (ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize * size[2])
        {
            ndims--;
            type |= CV_MAKETYPE(0, size[2]);
        }

        if (ndims > 2)
        {
            failmsg("toMat: Object has more than 2 dimensions");
        }

        m = Mat(ndims, size, type, PyArray_DATA(o), step);

        if (m.data)
        {
#if (CV_MAJOR_VERSION < 3)
            m.refcount = refcountFromPyObject(o);
            m.addref(); // protect the original numpy array from deallocation
                        // (since Mat destructor will decrement the reference counter)
#else
            m.u = g_numpyAllocator.allocate(o, ndims, size, type, step);
            m.addref();
            Py_INCREF(o);
            // m.u->refcount = *refcountFromPyObject(o);
#endif
        };
        m.allocator = &g_numpyAllocator;

        if (transposed)
        {
            Mat tmp;
            tmp.allocator = &g_numpyAllocator;
            transpose(m, tmp);
            m = tmp;
        }
        return m;
    }

    PyObject *NDArrayConverter::toNDArray(const cv::Mat &m)
    {
        if (!m.data)
            Py_RETURN_NONE;
        Mat temp;
        Mat *p = (Mat *)&m;
#if (CV_MAJOR_VERSION < 3)
        if (!p->refcount || p->allocator != &g_numpyAllocator)
        {
            temp.allocator = &g_numpyAllocator;
            m.copyTo(temp);
            p = &temp;
        }
        p->addref();
        return pyObjectFromRefcount(p->refcount);
#else
        if (!p->u || p->allocator != &g_numpyAllocator)
        {
            temp.allocator = &g_numpyAllocator;
            m.copyTo(temp);
            p = &temp;
        }
        // p->addref();
        // return pyObjectFromRefcount(&p->u->refcount);
        PyObject *o = (PyObject *)p->u->userdata;
        Py_INCREF(o);
        return o;
#endif
    }
}

Then build from a terminal:
conda activate MaskRCNN
cd DynaSLAM
chmod +x build.sh
./build.sh

If the build fails with an error about missing Python 2.7 development headers, install them:
sudo apt-get install python2.7-dev

Other errors can usually be resolved by installing the corresponding missing package.

After installing it, rebuild. Before rebuilding, delete the build directories generated by the previous attempt.
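Deleting the old build artifacts can be sketched as follows (run from the DynaSLAM root; the Thirdparty paths follow the stock repository layout):

```shell
# Remove stale build directories before rerunning ./build.sh
rm -rf build Thirdparty/DBoW2/build Thirdparty/g2o/build
```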

V. Running DynaSLAM

Activate the virtual environment created earlier before running:
conda activate MaskRCNN

Create a dataset folder and place the rgbd_dataset_freiburg3_walking_xyz sequence inside it. If you are unsure how to generate the associations.txt association file, both of the following posts cover it:

Evaluating the TUM RGB-D dataset with evo / rgbd_benchmark_tools (CSDN blog)

Building and running YOLO_ORB_SLAM3 (CSDN blog)
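The run command below also takes directories for the generated masks and output images; creating the whole layout up front avoids path errors (the folder names are simply the ones this guide uses, not a DynaSLAM requirement):

```shell
# Create the dataset layout used by the run command in this guide;
# the TUM sequence itself goes in dataset/rgbd_dataset_freiburg3_walking_xyz/
mkdir -p dataset/mask dataset/output
```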

Open a terminal in the DynaSLAM directory and run:
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt ./Examples/RGB-D/TUM3.yaml ./dataset/rgbd_dataset_freiburg3_walking_xyz/ ./dataset/rgbd_dataset_freiburg3_walking_xyz/associations.txt ./dataset/mask ./dataset/output

If the run reports errors such as:

Light Tracking not working because Tracking is not initialized...
Geometry not working.

try changing the ORBextractor.nFeatures value in DynaSLAM/Examples/RGB-D/TUM3.yaml from 1000 to 3000, or double-check that every file modification above was actually applied.
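For reference, the adjusted line in TUM3.yaml would read as below (only this value changes; the rest of the file stays as shipped):

```yaml
# Examples/RGB-D/TUM3.yaml
ORBextractor.nFeatures: 3000   # raised from the default 1000
```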

If everything is set up correctly, DynaSLAM runs on the sequence.
