Introduction to the SURF Algorithm in OpenCV


SURF stands for Speeded-Up Robust Features. Before looking at the code, here is a brief introduction to the two concepts it builds on: feature points and descriptors.

  • Feature points and descriptors

  Feature points fall into two categories: narrow-sense and broad-sense. A narrow-sense feature point is a location that is meaningful in itself, such as a corner or an intersection. A broad-sense feature point, by contrast, is defined over a region: its position carries no intrinsic meaning of its own and merely marks the location of a region that satisfies certain feature conditions. It can be any fixed relative position within that region. Such a feature need not be physical; it only has to satisfy some mathematical description, and is therefore sometimes abstract. In essence, a broad-sense feature point can be regarded as an abstract feature region whose properties are those of that region; calling it a "point" simply abstracts it into a location.

  A feature point is thus both a location marker and a statement that its local neighborhood exhibits a particular pattern. In practice, a feature point is the location marker of a local region with certain characteristics; it is called a point to abstract it into a location, which makes it possible to establish correspondences between the same physical location in two images for image matching. Matching is therefore centered on the feature point, comparing the local features of its neighborhood. In other words, before matching can take place, each feature point (narrow- or broad-sense) must be given a feature description, commonly called a descriptor.

  A good feature point needs a good description to represent it, and this directly affects the accuracy of image matching. In feature-based image stitching and image registration, feature points and descriptors are therefore equally important.

For more details, see: http://blog.sina.com.cn/s/blog_4b146a9c0100rb18.html
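As a rough illustration of the matching idea described above (this is not OpenCV's API, and a real SURF descriptor is a 64-dimensional vector built from Haar-wavelet responses; the toy 4-dimensional vectors below are invented for the example), a descriptor can be thought of as a numeric vector, and matching as finding the stored descriptor closest to a query descriptor:

```python
import math

def euclidean(a, b):
    # Distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_match(query_desc, train_descs):
    # Return (index, distance) of the closest training descriptor.
    return min(
        enumerate(euclidean(query_desc, d) for d in train_descs),
        key=lambda t: t[1])

# Toy 4-dimensional "descriptors" (real SURF descriptors are 64-D).
train = [[0.0, 0.0, 1.0, 1.0], [0.9, 0.1, 0.0, 0.0], [0.5, 0.5, 0.5, 0.5]]
query = [1.0, 0.0, 0.0, 0.0]

idx, dist = nearest_match(query, train)
print(idx)  # index of the closest training descriptor -> 1
```

The same nearest-neighbor principle, applied to high-dimensional descriptors over thousands of keypoints, is what the FLANN matcher in the demo below accelerates.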

  • The SURF demo in OpenCV
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/nonfree/features2d.hpp" // SurfFeatureDetector lives here in OpenCV 2.4+
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"

using namespace cv;

void readme();

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }

  Mat img_object = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
  Mat img_scene = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );

  if( !img_object.data || !img_scene.data )
  { std::cout << " --(!) Error reading images " << std::endl; return -1; }

  //-- Step 1: Detect the keypoints using the SURF detector
  int minHessian = 400;

  SurfFeatureDetector detector( minHessian );

  std::vector<KeyPoint> keypoints_object, keypoints_scene;

  detector.detect( img_object, keypoints_object );
  detector.detect( img_scene, keypoints_scene );

  //-- Step 2: Calculate descriptors (feature vectors)
  SurfDescriptorExtractor extractor;

  Mat descriptors_object, descriptors_scene;

  extractor.compute( img_object, keypoints_object, descriptors_object );
  extractor.compute( img_scene, keypoints_scene, descriptors_scene );

  //-- Step 3: Match descriptor vectors using the FLANN matcher
  FlannBasedMatcher matcher;
  std::vector< DMatch > matches;
  matcher.match( descriptors_object, descriptors_scene, matches );

  double max_dist = 0; double min_dist = 100;

  //-- Quick calculation of max and min distances between keypoints
  for( int i = 0; i < descriptors_object.rows; i++ )
  { double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist );
  printf("-- Min dist : %f \n", min_dist );

  //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
  std::vector< DMatch > good_matches;

  for( int i = 0; i < descriptors_object.rows; i++ )
  { if( matches[i].distance < 3*min_dist )
     { good_matches.push_back( matches[i] ); }
  }

  Mat img_matches;
  drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

  //-- Localize the object
  std::vector<Point2f> obj;
  std::vector<Point2f> scene;

  for( size_t i = 0; i < good_matches.size(); i++ )
  {
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
  }

  Mat H = findHomography( obj, scene, CV_RANSAC );

  //-- Get the corners from image_1 ( the object to be "detected" )
  std::vector<Point2f> obj_corners(4);
  obj_corners[0] = cvPoint( 0, 0 ); obj_corners[1] = cvPoint( img_object.cols, 0 );
  obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
  std::vector<Point2f> scene_corners(4);

  perspectiveTransform( obj_corners, scene_corners, H );

  //-- Draw lines between the corners (the mapped object in the scene - image_2 )
  line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0 ), scene_corners[1] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
  line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0 ), scene_corners[2] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
  line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0 ), scene_corners[3] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
  line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0 ), scene_corners[0] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );

  //-- Show detected matches
  imshow( "Good Matches & Object detection", img_matches );

  waitKey(0);
  return 0;
}

/** @function readme */
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }

With this basic understanding of feature points and descriptors, the code above should be easier to follow.

Code source: http://www.opencv.org.cn/opencvdoc/2.3.2/html/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography
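The filtering heuristic in the demo's Step 3 is easy to state on its own: compute the minimum match distance, then keep only matches whose distance is below 3 * min_dist. A minimal pure-Python sketch of that heuristic, using made-up (queryIdx, trainIdx, distance) tuples in place of cv::DMatch objects:

```python
# Hypothetical match records: (queryIdx, trainIdx, distance), standing in
# for the cv::DMatch list returned by matcher.match().
matches = [(0, 2, 0.10), (1, 5, 0.45), (2, 1, 0.12), (3, 0, 0.80), (4, 4, 0.25)]

min_dist = min(d for _, _, d in matches)
max_dist = max(d for _, _, d in matches)
print("-- Max dist :", max_dist)
print("-- Min dist :", min_dist)

# Keep only "good" matches, i.e. those closer than 3 * min_dist,
# mirroring the tutorial's filtering loop. Here 3 * min_dist = 0.30,
# so the matches with distances 0.45 and 0.80 are discarded.
good_matches = [m for m in matches if m[2] < 3 * min_dist]
print(good_matches)
```

The multiplier 3 is a heuristic from the tutorial, not a fixed rule; tighter thresholds discard more outliers at the cost of fewer correspondences for the homography estimation.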

  • The implementation details of the SURF algorithm

Some resources collected from around the web:

  1. Principles of the SURF algorithm, with a brief introduction (part 1)

  http://blog.csdn.net/andkobe/article/details/5778739

  2. Principles of the SURF algorithm, with a brief introduction (part 2)

  http://wuzizhang.blog.163.com/blog/static/78001208201138102648854/

  3. Feature point detection study, part 2 (the SURF algorithm)

  http://www.cnblogs.com/tornadomeet/archive/2012/08/17/2644903.html

  • Other notes
// DMatch constructor
DMatch(int queryIdx, int trainIdx, float distance)

Here the keypoint indices that queryIdx and trainIdx refer to are determined by the argument order of the match call, for example:

// called in this order
match(descriptor_for_keypoints1, descriptor_for_keypoints2, matches)

queryIdx and trainIdx then index keypoints1 and keypoints2, respectively.
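A toy brute-force matcher in pure Python (illustrative only, with made-up two-dimensional descriptors; this is not OpenCV code) makes the index convention concrete: queryIdx indexes the first descriptor set passed to match, trainIdx the second:

```python
def match(query_descs, train_descs):
    # For each query descriptor, record the index of the closest training
    # descriptor, mimicking matcher.match(query, train, matches):
    # queryIdx indexes the FIRST argument, trainIdx the SECOND.
    results = []
    for qi, q in enumerate(query_descs):
        ti, dist = min(
            ((i, sum((a - b) ** 2 for a, b in zip(q, t)) ** 0.5)
             for i, t in enumerate(train_descs)),
            key=lambda p: p[1])
        results.append({"queryIdx": qi, "trainIdx": ti, "distance": dist})
    return results

desc1 = [[0.0, 1.0], [1.0, 0.0]]              # descriptors for keypoints1
desc2 = [[1.1, 0.1], [0.1, 0.9], [5.0, 5.0]]  # descriptors for keypoints2

for m in match(desc1, desc2):
    print(m)
# queryIdx 0 (desc1[0]) matches trainIdx 1 (desc2[1]);
# queryIdx 1 (desc1[1]) matches trainIdx 0 (desc2[0]).
```

This is why, in the demo above, good_matches[i].queryIdx looks up keypoints_object (the first argument to matcher.match) while good_matches[i].trainIdx looks up keypoints_scene (the second).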

 

 2013-11-05

