RGBDSLAMv2 on Ubuntu 14.04 (Indigo): .bag Datasets and Live Kinect v2
Part One: Running RGBDSLAMv2 on a .bag dataset
RGBDSLAMv2 is the SLAM system for RGB-D cameras that Felix Endres published in a 2014 paper; it builds 3D point-cloud or octree (OctoMap) maps.
The installation steps below mainly follow the original GitHub repository: https://github.com/felixendres/rgbdslam_v2
Hardware of the desktop used here:
Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz;
RAM: 16.0 GB;
GPU: NVIDIA GeForce GTX 1060 6GB.
1. Install ROS Indigo on Ubuntu 14.04, following: http://wiki.ros.org/cn/indigo/Installation/Ubuntu
2. Install OpenCV 2.4.9, following: http://www.samontab.com/web/2014/06/installing-opencv-2-4-9-in-ubuntu-14-04-lts/
and http://blog.csdn.net/baoke485800/article/details/51236198
Update the system:
sudo apt-get update
sudo apt-get upgrade
Install the build dependencies:
sudo apt-get install build-essential libgtk2.0-dev libjpeg-dev libtiff4-dev libjasper-dev libopenexr-dev cmake python-dev python-numpy python-tk libtbb-dev libeigen3-dev yasm libfaac-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev libx264-dev libqt4-dev libqt4-opengl-dev sphinx-common texlive-latex-extra libv4l-dev libdc1394-22-dev libavcodec-dev libavformat-dev libswscale-dev default-jdk ant libvtk5-qt4-dev
Fetch the OpenCV 2.4.9 source with wget, then unpack it and enter the directory:
wget http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/2.4.9/opencv-2.4.9.zip
unzip opencv-2.4.9.zip
cd opencv-2.4.9
Configure and build the OpenCV source with CMake:
mkdir build
cd build
cmake -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_VTK=ON ..
make -j4
sudo make install
Configure the dynamic linker for OpenCV:
sudo gedit /etc/ld.so.conf.d/opencv.conf
Add the following line to the (possibly empty) file and save it:
/usr/local/lib
Then run:
sudo ldconfig
Open another file:
sudo gedit /etc/bash.bashrc
Append the following to the end of the file, then save and exit:
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH
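As a quick sanity check (a sketch, not in the original guide; it assumes pkg-config picks up the opencv.pc file installed under /usr/local/lib/pkgconfig), you can query the installed version in a new terminal:

```shell
# Query the OpenCV version registered with pkg-config; expect 2.4.9 here
ver=$(pkg-config --modversion opencv)
echo "OpenCV version: $ver"
# Guard against accidentally picking up a different major version
[ "${ver%%.*}" = "2" ] && echo "OpenCV 2.x is the active pkg-config entry"
```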
Check that OpenCV was installed correctly:
cd ~/opencv-2.4.9/samples/c
chmod +x build_all.sh
./build_all.sh
Old C interface:
./facedetect --cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" --scale=1.5 lena.jpg
./facedetect --cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" --nested-cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_eye.xml" --scale=1.5 lena.jpg
New C++ interface:
~/opencv-2.4.9/build/bin/cpp-example-grabcut ~/opencv-2.4.9/samples/cpp/lena.jpg
OK, the test succeeded. (See the reference links above for more tests.)
OpenCV Unix downloads: https://sourceforge.net/projects/opencvlibrary/files/opencv-unix/
3. Install PCL 1.7.2 from the GitHub source: https://github.com/PointCloudLibrary/pcl
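The PCL source build follows the usual CMake flow. A minimal sketch (the pcl-1.7.2 tag name and the -j4 job count are assumptions; verify the tag with `git tag`):

```shell
# Build and install PCL 1.7.2 from source
git clone https://github.com/PointCloudLibrary/pcl.git
cd pcl
git checkout pcl-1.7.2   # assumed release tag; check `git tag` for the exact name
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4
sudo make install
```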
4. Create a catkin workspace:
# Create a dedicated catkin workspace for rgbdslam
mkdir rgbdslam_catkin_ws
cd rgbdslam_catkin_ws
mkdir src
cd ~/rgbdslam_catkin_ws/src
# Initialize it as the workspace's source folder
catkin_init_workspace
# Go to the top of the workspace
cd ~/rgbdslam_catkin_ws/
# Build the new workspace; this creates the build and devel folders that complete it
catkin_make
# Source the workspace's setup file
source devel/setup.bash
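To confirm the overlay took effect (a quick check, not in the original guide), the workspace's src folder should now be the first entry on ROS_PACKAGE_PATH:

```shell
# Print the first colon-separated entry of ROS_PACKAGE_PATH;
# it should be ~/rgbdslam_catkin_ws/src after sourcing devel/setup.bash
echo "$ROS_PACKAGE_PATH" | tr ':' '\n' | head -n 1
```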
5. Install g2o from source, following the original GitHub README: https://github.com/felixendres/rgbdslam_v2
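The g2o source build is again the standard CMake flow. A sketch, assuming the fork referenced in the rgbdslam_v2 README (verify the exact repository URL and CMake options there):

```shell
# Build the g2o fork used by rgbdslam_v2 (repository URL assumed from its README)
git clone https://github.com/felixendres/g2o.git
cd g2o
mkdir build && cd build
cmake ..
make -j4
sudo make install
sudo ldconfig
# Verify the libraries are now visible to the linker
ldconfig -p | grep libg2o
```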
6. Build and install RGBDSLAMv2
# Enter the workspace's source folder
cd ~/rgbdslam_catkin_ws/src
# Download the rgbdslam source for ROS Indigo from GitHub
wget -q http://github.com/felixendres/rgbdslam_v2/archive/indigo.zip
# Unpack it
unzip -q indigo.zip
# Go to the top of the workspace
cd ~/rgbdslam_catkin_ws/
# Update the ROS dependency database
rosdep update
yuanlibin@yuanlibin:~/rgbdslam_catkin_ws$ rosdep update
reading in sources list data from /etc/ros/rosdep/sources.list.d
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/releases/fuerte.yaml
Query rosdistro index https://raw.githubusercontent.com/ros/rosdistro/master/index.yaml
Add distro "groovy"
Add distro "hydro"
Add distro "indigo"
Add distro "jade"
Add distro "kinetic"
Add distro "lunar"
updated cache in /home/yuanlibin/.ros/rosdep/sources.cache
# Install rgbdslam's dependencies
rosdep install rgbdslam
On success this prints: #All required rosdeps installed successfully
# Build rgbdslam
catkin_make
On success the build ends with: [100%] Built target rgbdslam
source devel/setup.bash
最后運行
roslaunch rgbdslam rgbdslam.launch
會出現錯誤:
NODES
  /
    rgbdslam (rgbdslam/rgbdslam)

ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
ERROR: cannot launch node of type [rgbdslam/rgbdslam]: rgbdslam
ROS path [0]=/opt/ros/indigo/share/ros
ROS path [1]=/opt/ros/indigo/share
ROS path [2]=/opt/ros/indigo/stacks
No processes to monitor
shutting down processing monitor...
... shutting down processing monitor complete
The fix is to add the workspace's setup file to .bashrc; on this machine, for example:
echo "source /home/yuanlibin/rgbdslam_catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
With that, RGBDSLAMv2 is built and installed.
7. Download a TUM .bag dataset from: https://vision.in.tum.de/data/datasets/rgbd-dataset/download
For example: rgbd_dataset_freiburg1_xyz.bag
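For reference, the bag can also be fetched from the command line; the URL below follows the download page's path pattern and is an assumption to be verified there:

```shell
# Download the fr1/xyz sequence as a ROS bag (URL pattern assumed; check the download page)
wget https://vision.in.tum.de/rgbd/dataset/freiburg1/rgbd_dataset_freiburg1_xyz.bag
# Summarize its duration, topics, and message counts
rosbag info rgbd_dataset_freiburg1_xyz.bag
```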
Inspect the topics in the .bag:
Terminal 1
roscore
Terminal 2
rosbag play rgbd_dataset_freiburg1_xyz.bag
Terminal 3
rostopic info
For the last command, do not press Enter; press Tab to see the available topics via completion:
yuanlibin@yuanlibin:~$ rostopic info /
/camera/depth/camera_info    /cortex_marker_array
/camera/depth/image          /imu
/camera/rgb/camera_info      /rosout
/camera/rgb/image_color      /rosout_agg
/clock                       /tf
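Alternatively, `rostopic list` prints every topic non-interactively; to pull out just the image topics rgbdslam needs (a small helper, not in the original write-up):

```shell
# Show only the color/depth image topics published by the bag
rostopic list | grep -E '^/camera/(rgb|depth)/image'
```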
然后修改路徑:/home/yuanlibin/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/launch下的rgbdslam.launch文件
其中第8、9行的輸入數據設置
<param name="config/topic_image_mono" value="/camera/rgb/image_color"/> <param name="config/topic_image_depth" value="/camera/depth_registered/sw_registered/image_rect_raw"/>
需要修改為上述數據集相應的信息,修改如下:
<param name="config/topic_image_mono" value="/camera/rgb/image_color"/> <param name="config/topic_image_depth" value="/camera/depth/image"/>
The same file also selects the feature type used by the system; the options are:
SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB.
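For example, switching from the defaults to SIFTGPU would change the two relevant parameters like this (a sketch; only the parameter names shown in the file above are assumed):

```xml
<param name="config/feature_extractor_type" value="SIFTGPU"/>
<param name="config/feature_detector_type"  value="SIFTGPU"/>
```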
8. Run RGBDSLAMv2 on the dataset
Terminal 1
roscore
Terminal 2
rosbag play rgbd_dataset_freiburg1_xyz.bag
Terminal 3
roslaunch rgbdslam rgbdslam.launch
You should now see RGBDSLAMv2 reconstructing a 3D point-cloud map from the dataset.
Part Two: Running RGBDSLAMv2 live with a Kinect v1
1. Install and test the Kinect v1 driver under ROS Indigo; see: http://www.cnblogs.com/yuanlibin/p/8608190.html
2. Run the following commands:
Terminal 1
roscore
Terminal 2
roslaunch rgbdslam openni+rgbdslam.launch
3. Move the Kinect v1 around and you will see the 3D point cloud being reconstructed in real time.
Part Three: Running RGBDSLAMv2 live with a Kinect v2
1. Start the Kinect v2 and inspect its output topics:
Terminal 1
roslaunch kinect2_bridge kinect2_bridge.launch
Terminal 2 (type the command rostopic info, then press Tab instead of Enter to view the topics)
yuanlibin@yuanlibin:~$ rostopic info /
/kinect2/bond
/kinect2/hd/camera_info
/kinect2/hd/image_color
/kinect2/hd/image_color/compressed
/kinect2/hd/image_color_rect
/kinect2/hd/image_color_rect/compressed
/kinect2/hd/image_depth_rect
/kinect2/hd/image_depth_rect/compressed
/kinect2/hd/image_mono
/kinect2/hd/image_mono/compressed
/kinect2/hd/image_mono_rect
/kinect2/hd/image_mono_rect/compressed
/kinect2/hd/points
/kinect2/qhd/camera_info
/kinect2/qhd/image_color
/kinect2/qhd/image_color/compressed
/kinect2/qhd/image_color_rect
/kinect2/qhd/image_color_rect/compressed
/kinect2/qhd/image_depth_rect
/kinect2/qhd/image_depth_rect/compressed
/kinect2/qhd/image_mono
/kinect2/qhd/image_mono/compressed
/kinect2/qhd/image_mono_rect
--More--
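Before launching rgbdslam it is worth confirming that the qhd streams it will subscribe to are actually publishing (a quick check, not in the original post):

```shell
# Each command should report a steady publish rate (typically around 30 Hz
# for kinect2_bridge's qhd streams); Ctrl-C to stop
rostopic hz /kinect2/qhd/image_color_rect
rostopic hz /kinect2/qhd/image_depth_rect
```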
2. Under /home/yuanlibin/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/launch, create a new file rgbdslam_kinect2.launch with the following content:

<launch>
  <node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen">
    <!-- Input data settings -->
    <param name="config/topic_image_mono"              value="/kinect2/qhd/image_color_rect"/>
    <param name="config/camera_info_topic"             value="/kinect2/qhd/camera_info"/>
    <param name="config/topic_image_depth"             value="/kinect2/qhd/image_depth_rect"/>
    <param name="config/topic_points"                  value=""/> <!-- if empty, pointcloud will be reconstructed from image and depth -->

    <!-- These are the default values of some important parameters -->
    <param name="config/feature_extractor_type"        value="ORB"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB -->
    <param name="config/feature_detector_type"         value="ORB"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB -->
    <param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->
    <param name="config/optimizer_skip_step"           value="15"/><!-- optimize only every n-th frame -->
    <param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both width and height) when creating the cloud, to reduce memory consumption -->
    <param name="config/backend_solver"                value="csparse"/><!-- pcg is faster and good for continuous online optimization; cholmod and csparse are better for offline optimization (without a good initial guess) -->
    <param name="config/pose_relative_to"              value="first"/><!-- optimize only a subset of the graph: "largest_loop" = everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->
    <param name="config/maximum_depth"                 value="2"/>
    <param name="config/subscriber_queue_size"         value="20"/>
    <param name="config/min_sampled_candidates"        value="30"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
    <param name="config/predecessor_candidates"        value="20"/><!-- Frame-to-frame comparisons to sequential frames -->
    <param name="config/neighbor_candidates"           value="20"/><!-- Frame-to-frame comparisons to graph neighbor frames -->
    <param name="config/ransac_iterations"             value="140"/>
    <param name="config/g2o_transformation_refinement" value="1"/>
    <param name="config/icp_method"                    value="gicp"/> <!-- icp, gicp ... -->
    <!--
    <param name="config/max_rotation_degree"           value="20"/>
    <param name="config/max_translation_meter"         value="0.5"/>
    <param name="config/min_matches"                   value="30"/>
    <param name="config/min_translation_meter"         value="0.05"/>
    <param name="config/min_rotation_degree"           value="3"/>
    <param name="config/g2o_transformation_refinement" value="2"/>
    <param name="config/min_rotation_degree"           value="10"/>
    <param name="config/matcher_type"                  value="ORB"/>
    -->
  </node>
</launch>
Note that the input data settings (the topic_image_mono, camera_info_topic, topic_image_depth, and topic_points parameters) must match the topics listed above.
The same file also selects the feature type used by the system; the options are:
SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB.
3. 最后基於Kinect v2的實時運行RGBDSLAMv2
終端1
roslaunch rgbdslam rgbdslam_kinect2.launch
終端2
roslaunch kinect2_bridge kinect2_bridge.launch
緩慢移動Kinect v2,就可以看到實時重建的三維點雲了。自己實現的三維點雲截圖如下:
圖1所示為實驗室工位的全景三維點雲圖;
圖2所示為全景圖中紅點處的側視圖。
圖1. 實驗室工位的全景三維點雲圖
圖2. 全景圖中紅點處的側視圖