PCL Keypoints (1)


Keypoints, also called interest points, are a stable and distinctive set of points on a 2D image, a 3D point cloud, or a surface model that can be located by a well-defined detection criterion. Technically, the number of keypoints is much smaller than the amount of raw point-cloud or image data. Combined with local feature descriptors, they form keypoint descriptors, a compact yet representative and descriptive summary of the original data, which speeds up subsequent processing such as recognition and tracking. Keypoint detection has therefore become a core technique in both 2D and 3D information processing.

 NARF (Normal Aligned Radial Feature) keypoints were proposed for recognizing objects in range (depth) images. The extraction process must satisfy the following requirements:

      a) the extraction process takes edge information as well as surface-change information into account;

      b) keypoints can be detected repeatedly from different viewpoints;

      c) keypoints lie at positions with a sufficiently large support region, so that a descriptor can be computed and a normal can be estimated uniquely.

      The corresponding detection steps are:

      (1) Traverse every point of the range image and detect edges by looking for positions where the depth changes in the local neighborhood.

      (2) Traverse every point of the range image and, from the surface change in the local neighborhood, determine a coefficient measuring the surface variation as well as the principal direction of that variation.

      (3) Compute an interest value from the principal direction found in step (2), characterizing how different that direction is from the other directions and how the surface changes at that position, i.e. how stable the point is.

      (4) Smooth the interest values with a filter.

      (5) Perform non-maximum suppression to find the final keypoints; these are the NARF keypoints.

 For a more detailed description of NARF, see the blog post at www.cnblogs.com/ironstark/p/5051533.html.

The PCL keypoints module and its classes

(1) class pcl::Keypoint<PointInT,PointOutT>  The Keypoint class is the base class of all keypoint-detection classes. It defines the basic interface; the concrete implementations are provided by its subclasses, whose inheritance hierarchy is shown in the figure below:

         

Details:

Public Member Functions

virtual void  setSearchSurface (const PointCloudInConstPtr &cloud)
  Set the point cloud to be searched; cloud is a shared pointer to the point-cloud object.
void  setSearchMethod (const KdTreePtr &tree)  Set the search object used internally by the algorithm; tree is a pointer to a kd-tree or octree search object.
void  setKSearch (int k)   Set the parameter K used for K-nearest-neighbor search.
void  setRadiusSearch (double radius)   Set the radius used for radius search.
int  searchForNeighbors (int index, double parameter, std::vector< int > &indices, std::vector< float > &distances) const

Performs a neighbor search with the search object set by setSearchMethod on the cloud set by setSearchSurface. index is the index of the query point, parameter is the search parameter (a radius or K), and the results are returned in the vector of neighbor indices, indices, together with the corresponding distance vector, distances.

 

(2)class  pcl::HarrisKeypoint2D<PointInT,PointOutT,IntensityT>

     The HarrisKeypoint2D class implements a Harris keypoint detector based on the intensity field of the point cloud, including several variants of the Harris keypoint detection algorithm. Its key member functions are:

Public Member Functions

  HarrisKeypoint2D (ResponseMethod method=HARRIS, int window_width=3, int window_height=3, int min_distance=5, float threshold=0.0)
  Constructor. method selects the keypoint detection method, one of HARRIS, NOBLE, LOWE, or TOMASI (the default is HARRIS); window_width and window_height are the width and height of the detection window; min_distance is the minimum distance allowed between two keypoints; threshold is the interest threshold deciding whether a point counts as a keypoint: points whose response falls below it are discarded, points above it are accepted.
 
void  setMethod (ResponseMethod type)  Set the detection method.
void  setWindowWidth (int window_width)  Set the width of the detection window.
void  setWindowHeight (int window_height)  Set the height of the detection window.
void  setSkippedPixels (int skipped_pixels)  Set the number of pixels to skip at each step during detection.
void  setMinimalDistance (int min_distance)   Set the minimum distance between candidate keypoints.
void  setThreshold (float threshold)  Set the interest threshold.
void  setNonMaxSupression (bool = false)  Set whether non-maximum suppression is applied; if true, only local maxima above the interest threshold are kept as keypoints, otherwise the response for each point is returned.
void  setRefine (bool do_refine)  Set whether the detected keypoints should be refined.
void  setNumberOfThreads (unsigned int nr_threads=0)  Set the number of threads the algorithm may create when it uses OpenMP parallelization.

(3)pcl::HarrisKeypoint3D< PointInT, PointOutT, NormalT >

     The HarrisKeypoint3D class is similar to HarrisKeypoint2D, but instead of detecting keypoints in the intensity domain of the cloud it uses 3D spatial information, the surface normals of the point cloud. Its interface matches HarrisKeypoint2D except for:

HarrisKeypoint3D (ResponseMethod method=HARRIS, float radius=0.01f, float threshold=0.0f) 

Constructor. method selects the keypoint detection method, one of HARRIS, NOBLE, LOWE, or TOMASI (the default is HARRIS); radius is the search radius for normal estimation; threshold is the interest threshold: points below it are discarded, points above it are treated as keypoints.

(4)pcl::HarrisKeypoint6D< PointInT, PointOutT, NormalT >

    The HarrisKeypoint6D class is similar to HarrisKeypoint2D, but candidate keypoints come from the Euclidean XYZ domain, the intensity domain, or the intersection of the two, i.e. points that qualify as keypoints in both the XYZ and the intensity domain.

HarrisKeypoint6D (float radius=0.01, float threshold=0.0)  Constructor. There is no method-selection parameter here; the detector defaults to the method proposed by Tomasi. radius is the search radius for normal estimation; threshold is the interest threshold: points below it are discarded, points above it are treated as keypoints.

(5)pcl::SIFTKeypoint< PointInT, PointOutT >

    The SIFTKeypoint class implements the SIFT operator adapted from 2D images to 3D space. The input is a point cloud with XYZ coordinates and intensity; the output is the SIFT keypoints of the cloud. Its key member functions are:

void  setScales (float min_scale, int nr_octaves, int nr_scales_per_octave)
  Set the scale-related search parameters: min_scale is the standard deviation of the smallest scale in the scale space, i.e. the minimum size of the voxel grid corresponding to the cloud; nr_octaves is the number of octaves in the scale space used for keypoint detection; nr_scales_per_octave is the number of Gaussian scales computed within each octave.
void  setMinimumContrast (float min_contrast)   Set the lower contrast bound for candidate keypoints.

 

(6) There are many more classes, which are not introduced one by one here.

 Example

The example extracts NARF keypoints and visualizes them both as an image and in a 3D view, so the positions and number of keypoints can be inspected directly. narf_feature_extraction.cpp:

#include <iostream>

#include <boost/thread/thread.hpp>
#include <pcl/range_image/range_image.h>
#include <pcl/io/pcd_io.h>
#include <pcl/visualization/range_image_visualizer.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/keypoints/narf_keypoint.h>
#include <pcl/features/narf_descriptor.h>
#include <pcl/console/parse.h>

typedef pcl::PointXYZ PointType;

// --------------------
// -----Parameters-----
// --------------------
float angular_resolution = 0.5f;           // angular resolution of the simulated depth sensor, i.e. the angle covered by one pixel of the range image
float support_size = 0.2f;                 // support size (diameter) used for the interest points
pcl::RangeImage::CoordinateFrame coordinate_frame = pcl::RangeImage::CAMERA_FRAME;     // coordinate frame
bool setUnseenToMaxRange = false;
bool rotation_invariant = true;

// --------------
// -----Help-----
// --------------
void 
printUsage (const char* progName)
{
  std::cout << "\n\nUsage: "<<progName<<" [options] <scene.pcd>\n\n"
            << "Options:\n"
            << "-------------------------------------------\n"
            << "-r <float>   angular resolution in degrees (default "<<angular_resolution<<")\n"
            << "-c <int>     coordinate frame (default "<< (int)coordinate_frame<<")\n"
            << "-m           Treat all unseen points to max range\n"
            << "-s <float>   support size for the interest points (diameter of the used sphere - "
                                                                  "default "<<support_size<<")\n"
            << "-o <0/1>     switch rotational invariant version of the feature on/off"
            <<               " (default "<< (int)rotation_invariant<<")\n"
            << "-h           this help\n"
            << "\n\n";
}

void 
setViewerPose (pcl::visualization::PCLVisualizer& viewer, const Eigen::Affine3f& viewer_pose)  // set the viewer (camera) pose
{
  Eigen::Vector3f pos_vector = viewer_pose * Eigen::Vector3f (0, 0, 0);   // camera position
  Eigen::Vector3f look_at_vector = viewer_pose.rotation () * Eigen::Vector3f (0, 0, 1) + pos_vector;  // view direction (rotation + translation)
  Eigen::Vector3f up_vector = viewer_pose.rotation () * Eigen::Vector3f (0, -1, 0);   // up direction
  viewer.setCameraPosition (pos_vector[0], pos_vector[1], pos_vector[2],      // set the camera pose
                            look_at_vector[0], look_at_vector[1], look_at_vector[2],
                            up_vector[0], up_vector[1], up_vector[2]);
}

// --------------
// -----Main-----
// --------------
int 
main (int argc, char** argv)
{
  // --------------------------------------
  // -----Parse Command Line Arguments-----
  // --------------------------------------
  if (pcl::console::find_argument (argc, argv, "-h") >= 0)
  {
    printUsage (argv[0]);
    return 0;
  }
  if (pcl::console::find_argument (argc, argv, "-m") >= 0)
  {
    setUnseenToMaxRange = true;
    cout << "Setting unseen values in range image to maximum range readings.\n";
  }
  if (pcl::console::parse (argc, argv, "-o", rotation_invariant) >= 0)
    cout << "Switching rotation invariant feature version "<< (rotation_invariant ? "on" : "off")<<".\n";
  int tmp_coordinate_frame;
  if (pcl::console::parse (argc, argv, "-c", tmp_coordinate_frame) >= 0)
  {
    coordinate_frame = pcl::RangeImage::CoordinateFrame (tmp_coordinate_frame);
    cout << "Using coordinate frame "<< (int)coordinate_frame<<".\n";
  }
  if (pcl::console::parse (argc, argv, "-s", support_size) >= 0)
    cout << "Setting support size to "<<support_size<<".\n";
  if (pcl::console::parse (argc, argv, "-r", angular_resolution) >= 0)
    cout << "Setting angular resolution to "<<angular_resolution<<"deg.\n";
  angular_resolution = pcl::deg2rad (angular_resolution);
  
  // ------------------------------------------------------------------
  // -----Read pcd file or create example point cloud if not given-----
  // ------------------------------------------------------------------
  pcl::PointCloud<PointType>::Ptr point_cloud_ptr (new pcl::PointCloud<PointType>);
  pcl::PointCloud<PointType>& point_cloud = *point_cloud_ptr;
  pcl::PointCloud<pcl::PointWithViewpoint> far_ranges;
  Eigen::Affine3f scene_sensor_pose (Eigen::Affine3f::Identity ());
  std::vector<int> pcd_filename_indices = pcl::console::parse_file_extension_argument (argc, argv, "pcd");
  if (!pcd_filename_indices.empty ())
  {
    std::string filename = argv[pcd_filename_indices[0]];
    if (pcl::io::loadPCDFile (filename, point_cloud) == -1)
    {
      cerr << "Was not able to open file \""<<filename<<"\".\n";
      printUsage (argv[0]);
      return 0;
    }
    scene_sensor_pose = Eigen::Affine3f (Eigen::Translation3f (point_cloud.sensor_origin_[0], // sensor pose of the scene
                                                               point_cloud.sensor_origin_[1],
                                                               point_cloud.sensor_origin_[2])) *
                        Eigen::Affine3f (point_cloud.sensor_orientation_);
    std::string far_ranges_filename = pcl::getFilenameWithoutExtension (filename)+"_far_ranges.pcd";
    if (pcl::io::loadPCDFile (far_ranges_filename.c_str (), far_ranges) == -1)
      std::cout << "Far ranges file \""<<far_ranges_filename<<"\" does not exist.\n";
  }
  else
  {
    setUnseenToMaxRange = true;
    cout << "\nNo *.pcd file given => Generating example point cloud.\n\n";
    for (float x=-0.5f; x<=0.5f; x+=0.01f)
    {
      for (float y=-0.5f; y<=0.5f; y+=0.01f)
      {
        PointType point;  point.x = x;  point.y = y;  point.z = 2.0f - y;
        point_cloud.points.push_back (point);
      }
    }
    point_cloud.width = (int) point_cloud.points.size ();  point_cloud.height = 1;
  }
  
  // -----------------------------------------------
  // -----Create RangeImage from the PointCloud-----
  // -----------------------------------------------
  float noise_level = 0.0;
  float min_range = 0.0f;
  int border_size = 1;
  boost::shared_ptr<pcl::RangeImage> range_image_ptr (new pcl::RangeImage);
  pcl::RangeImage& range_image = *range_image_ptr;   
  range_image.createFromPointCloud (point_cloud, angular_resolution, pcl::deg2rad (360.0f), pcl::deg2rad (180.0f),
                                   scene_sensor_pose, coordinate_frame, noise_level, min_range, border_size);
  range_image.integrateFarRanges (far_ranges);
  if (setUnseenToMaxRange)
    range_image.setUnseenToMaxRange ();
  
  // --------------------------------------------
  // -----Open 3D viewer and add point cloud-----
  // --------------------------------------------
  pcl::visualization::PCLVisualizer viewer ("3D Viewer");
  viewer.setBackgroundColor (1, 1, 1);
  pcl::visualization::PointCloudColorHandlerCustom<pcl::PointWithRange> range_image_color_handler (range_image_ptr, 0, 0, 0);
  viewer.addPointCloud (range_image_ptr, range_image_color_handler, "range image");
  viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1, "range image");
  //viewer.addCoordinateSystem (1.0f, "global");
  //PointCloudColorHandlerCustom<PointType> point_cloud_color_handler (point_cloud_ptr, 150, 150, 150);
  //viewer.addPointCloud (point_cloud_ptr, point_cloud_color_handler, "original point cloud");
  viewer.initCameraParameters ();
  setViewerPose (viewer, range_image.getTransformationToWorldSystem ());
  
  // --------------------------
  // -----Show range image-----
  // --------------------------
  pcl::visualization::RangeImageVisualizer range_image_widget ("Range image");
  range_image_widget.showRangeImage (range_image);
  /*********************************************************************************************************
   Create a RangeImageBorderExtractor object for border extraction, since the first step
   of NARF is to detect the borders of the range image.
   *********************************************************************************************************/
  // --------------------------------
  // -----Extract NARF keypoints-----
  // --------------------------------
  pcl::RangeImageBorderExtractor range_image_border_extractor;   // border extractor
  pcl::NarfKeypoint narf_keypoint_detector;      // keypoint detector
  narf_keypoint_detector.setRangeImageBorderExtractor (&range_image_border_extractor);
  narf_keypoint_detector.setRangeImage (&range_image);
  narf_keypoint_detector.getParameters ().support_size = support_size;    // set the NARF support size
  
  pcl::PointCloud<int> keypoint_indices;
  narf_keypoint_detector.compute (keypoint_indices);
  std::cout << "Found "<<keypoint_indices.points.size ()<<" key points.\n";

  // ----------------------------------------------
  // -----Show keypoints in range image widget-----
  // ----------------------------------------------
  //for (size_t i=0; i<keypoint_indices.points.size (); ++i)
    //range_image_widget.markPoint (keypoint_indices.points[i]%range_image.width,
                                  //keypoint_indices.points[i]/range_image.width);
  
  // -------------------------------------
  // -----Show keypoints in 3D viewer-----
  // -------------------------------------
  pcl::PointCloud<pcl::PointXYZ>::Ptr keypoints_ptr (new pcl::PointCloud<pcl::PointXYZ>);

  pcl::PointCloud<pcl::PointXYZ>& keypoints = *keypoints_ptr;

  keypoints.points.resize (keypoint_indices.points.size ());
  for (size_t i=0; i<keypoint_indices.points.size (); ++i)

    keypoints.points[i].getVector3fMap () = range_image.points[keypoint_indices.points[i]].getVector3fMap ();
  pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ> keypoints_color_handler (keypoints_ptr, 0, 255, 0);
  viewer.addPointCloud<pcl::PointXYZ> (keypoints_ptr, keypoints_color_handler, "keypoints");
  viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 7, "keypoints");
  
  // ------------------------------------------------------
  // -----Extract NARF descriptors for interest points-----
  // ------------------------------------------------------
  std::vector<int> keypoint_indices2;
  keypoint_indices2.resize (keypoint_indices.points.size ());
  for (unsigned int i=0; i<keypoint_indices.size (); ++i) // This step is necessary to get the right vector type
    keypoint_indices2[i]=keypoint_indices.points[i];
  pcl::NarfDescriptor narf_descriptor (&range_image, &keypoint_indices2);
  narf_descriptor.getParameters ().support_size = support_size;
  narf_descriptor.getParameters ().rotation_invariant = rotation_invariant;
  pcl::PointCloud<pcl::Narf36> narf_descriptors;
  narf_descriptor.compute (narf_descriptors);
  cout << "Extracted "<<narf_descriptors.size ()<<" descriptors for "
                      <<keypoint_indices.points.size ()<< " keypoints.\n";
  
  //--------------------
  // -----Main loop-----
  //--------------------
  while (!viewer.wasStopped ())
  {
    range_image_widget.spinOnce ();  // process GUI events
    viewer.spinOnce ();
    pcl_sleep(0.01);
  }
}

Result:

 

To be continued.
