Notes on using OpenCV with Java


Using OpenCV Java with Eclipse

http://docs.opencv.org/2.4/doc/tutorials/introduction/java_eclipse/java_eclipse.html

Since version 2.4.4, OpenCV supports Java. In this tutorial I will explain how to set up a development environment for using OpenCV Java with Eclipse on Windows, so you can enjoy the benefits of a garbage-collected, highly refactorable (rename variable, extract method, and whatnot) modern language that lets you write code with less effort and make fewer mistakes. Here we go.

Configuring Eclipse

First, obtain a fresh release of OpenCV from the download page and extract it to a simple location like C:\OpenCV-2.4.6\. I am using version 2.4.6, but the steps are more or less the same for other versions.

Now, we will define OpenCV as a user library in Eclipse, so we can reuse the configuration for any project. Launch Eclipse and select Window –> Preferences from the menu.

Eclipse preferences

Navigate under Java –> Build Path –> User Libraries and click New....

Creating a new library

Enter a name, e.g. OpenCV-2.4.6, for your new library.

Naming the new library

Now select your new user library and click Add External JARs....

Adding external jar

Browse through C:\OpenCV-2.4.6\build\java\ and select opencv-246.jar. After adding the jar, expand the opencv-246.jar entry, select Native library location and press Edit....

Selecting native library location 1

Select External Folder... and browse to select the folder C:\OpenCV-2.4.6\build\java\x64. If you have a 32-bit system you need to select the x86 folder instead of x64.

Selecting native library location 2

Your user library configuration should look like this:

Selecting native library location 2

Testing the configuration on a new Java project

Now start creating a new Java project.

Creating new Java project

On the Java Settings step, under Libraries tab, select Add Library... and select OpenCV-2.4.6, then click Finish.

Adding user defined library 1 Adding user defined library 2

Libraries should look like this:

Adding user defined library

Now that you have created and configured a new Java project, it is time to test it. Create a new Java file. Here is some starter code for your convenience:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class Hello {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat mat = Mat.eye(3, 3, CvType.CV_8UC1);
        System.out.println("mat = " + mat.dump());
    }
}

When you run the code you should see a 3x3 identity matrix as output.
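If everything is wired up correctly, the dump should look roughly like this (a sketch; Mat.dump() spacing can differ slightly between OpenCV versions):

mat = [  1,   0,   0;
   0,   1,   0;
   0,   0,   1]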

Adding user defined library

That is it. Whenever you start a new project, just add the OpenCV user library that you have defined to your project and you are good to go. Enjoy your powerful, less painful development environment :)

http://www.cnblogs.com/lidabo/p/3501285.html

Configuring OpenCV 3.1.0 + opencv_contrib and testing with SIFT

Because I needed some of the newer tracking algorithms, I spent the past two days installing OpenCV 3.1 and configuring opencv_contrib, then used the SIFT algorithm to test whether the configuration succeeded.
1. Installing and configuring OpenCV 3.1
Not much to say here; if you are unfamiliar with the process, see Qianmo's blog: http://blog.csdn.net/poem_qianmo/article/details/19809337
2. Installing and configuring opencv_contrib
Since OpenCV 3, some of the newer functionality has been moved into the "opencv_contrib" repository. Configuring this library requires recompiling OpenCV; for this part see the tutorial: http://blog.csdn.net/linshuhe1/article/details/51221015
Two additions to that tutorial: A. The CMake build step often fails because, owing to network problems in China, the file ippicv_windows_20151201.zip cannot be downloaded; you can fetch it directly from: http://download.csdn.net/detail/qjj2857/9495013 B. At the end, when configuring the include and library directories, the tutorial does not mention adding the environment variable, which is needed here as well. And once everything is configured, don't forget to restart your computer.
3. Let's write a program to test whether the configuration succeeded
In OpenCV 3.1, SIFT matching lives in the opencv_contrib library, so we use it here for a simple test.
References:
1. cv::xfeatures2d::SIFT Class Reference: http://docs.opencv.org/3.1.0/d5/d3c/classcv_1_1xfeatures2d_1_1SIFT.html#gsc.tab=0
2. Using OpenCV 3.1 xfeatures2d::SIFT: http://blog.csdn.net/lijiang1991/article/details/50855279
The program:

#include <iostream>
#include <opencv2/opencv.hpp>        // main OpenCV header
#include <opencv2/xfeatures2d.hpp>   // SIFT lives in the contrib module

using namespace cv;                  // pull in the cv namespace
using namespace std;

int main()
{
    // Create a SIFT detector/extractor
    Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
    // Load the input images
    Mat img_1 = imread("1.jpg");
    Mat img_2 = imread("2.jpg");
    // Detect the keypoints
    vector<KeyPoint> keypoints_1, keypoints_2;
    f2d->detect(img_1, keypoints_1);
    f2d->detect(img_2, keypoints_2);
    // Calculate descriptors (feature vectors)
    Mat descriptors_1, descriptors_2;
    f2d->compute(img_1, keypoints_1, descriptors_1);
    f2d->compute(img_2, keypoints_2, descriptors_2);
    // Match descriptor vectors using BFMatcher
    BFMatcher matcher;
    vector<DMatch> matches;
    matcher.match(descriptors_1, descriptors_2, matches);
    // Draw the matched keypoints
    Mat img_matches;
    drawMatches(img_1, keypoints_1, img_2, keypoints_2, matches, img_matches);
    imshow("Matches", img_matches);
    // Wait for any key press
    waitKey(0);
}

Original images:
(image)
(image)
Matching result:
(image)

---------------------------- 2016/8/12 ----------------------------
1. For installing and configuring OpenCV 3.1 and opencv_contrib under Ubuntu, see:
The official guide: Installation in Linux
http://www.cnblogs.com/asmer-stone/p/5089764.html
Two points in the blog post above need attention:
A. As the reference describes in step 3, the build folder should be created inside the ~/opencv/opencv folder, and cmake should be run following the author's example OPENCV_EXTRA_MODULES_PATH=~/opencv/opencv_contrib/modules; note that the "<>" must be removed, and the trailing .. means the OpenCV source sits in the parent directory. Of course, if you understand how cmake is invoked (cmake [optional] <opencv source directory>), you can arrange the directories however you like.
B. As on Windows, cmake may fail because "ippicv_linux_20151201.tgz cannot be downloaded". You can download it from http://download.csdn.net/download/lx928525166/9479919 and place it into the corresponding folder.
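Putting points A and B together, the whole Linux build then looks roughly like this (a sketch assuming the ~/opencv layout above; CMAKE_BUILD_TYPE and the make flags are common defaults, not something the referenced post specifies):

cd ~/opencv/opencv
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release \
      -D OPENCV_EXTRA_MODULES_PATH=~/opencv/opencv_contrib/modules ..
make -j4
sudo make install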
2. OK, now suppose everything is installed and configured. Since on Windows we are used to working in an IDE, on Ubuntu I chose Eclipse as the programming environment; below is a quick note on configuring OpenCV in Eclipse.
First follow the official guide: http://docs.opencv.org/3.1.0/d7/d16/tutorial_linux_eclipse.html and you should get most of the way there; there are also plenty of similar write-ups online.
A few small problems may come up along the way; I personally ran into two:
A. The error undefined reference to symbol '_ZN2cv6imreadERKNS_6StringEi'; see: http://answers.opencv.org/question/46755/first-example-code-error/
B. The error error while loading shared libraries: libopencv_core.so.3.0: cannot open shared object file: No such file or directory; see: http://stackoverflow.com/questions/27907343/error-while-loading-shared-libraries-libopencv-core-so-3-0
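For reference, the usual fixes from those two threads (my summary, so verify against your own setup): error A means the linker is missing the module that owns cv::imread, which in OpenCV 3 is imgcodecs, so add it to the linked libraries (in Eclipse: Project Properties -> C/C++ Build -> Settings -> Libraries); error B means the dynamic loader cannot find the freshly installed libraries:

# A: link the imgcodecs module, e.g. when compiling by hand
g++ main.cpp -o main -lopencv_core -lopencv_imgcodecs -lopencv_highgui

# B: register /usr/local/lib with the loader and refresh its cache
echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/opencv.conf
sudo ldconfig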
3. Once everything is configured, the Windows code above runs here unchanged. See the image below:
(image)
4. On running the other sample programs under Ubuntu: taking the C++ samples as an example, go to opencv/samples/cpp/example_cmake, where a Makefile is already provided; just run make to generate the executable. The other C++ samples are similar.

 

 

 

How to use SIFT and SURF in OpenCV 3 (Where did SIFT and SURF go in OpenCV 3?)

If you’ve had a chance to play around with OpenCV 3 (and do a lot of work with keypoint detectors and feature descriptors) you may have noticed that the SIFT and SURF implementations are no longer included in the OpenCV 3 library by default.

Unfortunately, you probably learned this lesson the hard way by opening up a terminal, importing OpenCV, and then trying to instantiate your favorite keypoint detector, perhaps using code like the following:
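Roughly like this (reconstructed, since the snippet did not survive here; the exact traceback text varies by Python and OpenCV version):

>>> import cv2
>>> detector = cv2.FeatureDetector_create("SIFT")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'FeatureDetector_create'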

Oh no! There is no longer a cv2.FeatureDetector_create  method!

The same is true for our cv2.DescriptorExtractor_create  function as well:
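Reconstructed likewise:

>>> extractor = cv2.DescriptorExtractor_create("SIFT")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'DescriptorExtractor_create'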

Furthermore, cv2.SIFT_create  and cv2.SURF_create  will fail as well:
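Again a reconstruction of the failing calls:

>>> sift = cv2.SIFT_create()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'SIFT_create'
>>> surf = cv2.SURF_create()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'SURF_create'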

I'll be honest: this had me scratching my head at first. How am I supposed to access SIFT, SURF, and my other favorite keypoint detectors and local invariant descriptors if cv2.FeatureDetector_create and cv2.DescriptorExtractor_create have been removed?

The cv2.FeatureDetector_create  and cv2.DescriptorExtractor_create  were (and still are) methods I used all the time. And personally, I really liked the OpenCV 2.4.X implementation. All you needed to do was pass in a string and the factory method would build the instantiation for you. You could then tune the parameters using the getter and setter methods of the keypoint detector or feature descriptor.

Furthermore, these methods have been part of OpenCV 2.4.X for many years. Why in the world were they removed from the default install? And where were they moved to?

In the remainder of this blog post, I’ll detail why certain keypoint detectors and local invariant descriptors were removed from OpenCV 3.0 by default. And I’ll also show you where you can find SIFT, SURF, and other detectors and descriptors in the new version of OpenCV.

Why were SIFT and SURF removed from the default install of OpenCV 3.0?

SIFT and SURF are examples of algorithms that OpenCV calls “non-free” modules. These algorithms are patented by their respective creators, and while they are free to use in academic and research settings, you should technically be obtaining a license/permission from the creators if you are using them in a commercial (i.e. for-profit) application.

With OpenCV 3 came a big push to move many of these “non-free” modules out of the default OpenCV install and into the opencv_contrib package. The opencv_contrib package contains implementations of algorithms that are either patented or in experimental development.

The algorithms and associated implementations in  opencv_contrib  are not installed by default and you need to explicitly enable them when compiling and installing OpenCV to obtain access to them.

Personally, I’m not too crazy about this move.

Yes, I understand that including patented algorithms inside an open source library may raise a few eyebrows. But algorithms such as SIFT and SURF are pervasive across much of computer vision. And more importantly, the OpenCV implementations of SIFT and SURF are used daily by academics and researchers to evaluate new algorithms for image classification, Content-Based Image Retrieval, and the like. By not including these algorithms by default, more harm than good is done (at least in my opinion).

How do I get access to SIFT and SURF in OpenCV 3?

To get access to the original SIFT and SURF implementations found in OpenCV 2.4.X, you’ll need to pull down both the opencv and opencv_contrib repositories from GitHub and then compile and install OpenCV 3 from source.

Luckily, compiling OpenCV from source is easier than it used to be. I have gathered install instructions for Python and OpenCV for many popular operating systems over on the OpenCV 3 Tutorials, Resources, and Guides page — just scroll down to the Install OpenCV 3 and Python section and find the appropriate Python version (either Python 2.7+ or Python 3+) for your operating system.

How do I use SIFT and SURF with OpenCV 3?

So now that you have installed OpenCV 3 with the opencv_contrib package, you should have access to the original SIFT and SURF implementations from OpenCV 2.4.X, only this time they'll be in the xfeatures2d sub-module through the cv2.xfeatures2d.SIFT_create and cv2.xfeatures2d.SURF_create functions.

To confirm this, open up a shell, import OpenCV, and execute the following commands (assuming you have an image named test_image.jpg  in your current directory, of course):
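A reconstruction along the lines the text describes (the original listing did not survive copying; the function names are the real OpenCV 3 API):

>>> import cv2
>>> image = cv2.imread("test_image.jpg")
>>> gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
>>> sift = cv2.xfeatures2d.SIFT_create()
>>> (kps, descs) = sift.detectAndCompute(gray, None)
>>> surf = cv2.xfeatures2d.SURF_create()
>>> (kps, descs) = surf.detectAndCompute(gray, None)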

If all goes well, you should be able to instantiate the SIFT and SURF keypoint detectors and local invariant descriptors without error.

It’s also important to note that by using opencv_contrib  you will not be interfering with any of the other keypoint detectors and local invariant descriptors included in OpenCV 3. You’ll still be able to access KAZE, AKAZE, BRISK, etc. without an issue:
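For instance (again a sketch of what the stripped snippet presumably showed):

>>> kaze = cv2.KAZE_create()
>>> akaze = cv2.AKAZE_create()
>>> brisk = cv2.BRISK_create()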

 

Summary

In this blog post we learned that OpenCV has removed the cv2.FeatureDetector_create and cv2.DescriptorExtractor_create functions from the library. Furthermore, the SIFT and SURF implementations have also been removed from the default OpenCV 3 install.

The reason for the SIFT and SURF removal is what OpenCV calls “non-free” algorithms. Both SIFT and SURF are patented algorithms, meaning that you should technically be getting permission to use them in commercial applications (they are free to use for academic and research purposes, though).

Because of this, OpenCV has made the decision to move patented algorithms (along with experimental implementations) to the opencv_contrib package. This means that to obtain access to SIFT and SURF, you'll need to compile and install OpenCV 3 from source with opencv_contrib support enabled. Luckily, this isn't too challenging with the help of my OpenCV 3 install guides.

Once you have installed OpenCV 3 with opencv_contrib support you'll be able to find your favorite SIFT and SURF implementations in the xfeatures2d package through the cv2.xfeatures2d.SIFT_create() and cv2.xfeatures2d.SURF_create() functions.





http://www.pyimagesearch.com/2015/07/16/where-did-sift-and-surf-go-in-opencv-3/


 

http://blog.csdn.net/garfielder007/article/details/51260087

 

 

Extracting SIFT features from an image with the OpenCV Java API - anexplore

// OpenCV 2.4.x Java API; in 3.x, Highgui.imread moved to Imgcodecs and SIFT to the contrib modules
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.highgui.Highgui;
import org.opencv.features2d.*;

public class ExtractSIFT {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat test_mat = Highgui.imread("pfau.jpg");
        Mat desc = new Mat();
        FeatureDetector fd = FeatureDetector.create(FeatureDetector.SIFT);
        MatOfKeyPoint mkp = new MatOfKeyPoint();
        fd.detect(test_mat, mkp);
        DescriptorExtractor de = DescriptorExtractor.create(DescriptorExtractor.SIFT);
        de.compute(test_mat, mkp, desc);   // extract the SIFT descriptors
        System.out.println(desc.cols());
        System.out.println(desc.rows());
    }
}
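To compile and run a class like this outside Eclipse (a sketch assuming the OpenCV 2.4.6 paths from the Eclipse section above; adjust the jar name and native library folder to your own install):

javac -cp opencv-246.jar ExtractSIFT.java
java -cp "opencv-246.jar;." -Djava.library.path=C:\OpenCV-2.4.6\build\java\x64 ExtractSIFT

On Linux the classpath separator is ":" instead of ";", and java.library.path should point at the directory containing the native .so.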



Learning OpenCV: ways to optimize KeyPoint Matching

Today I read chapter three of Mastering OpenCV with Practical Computer Vision Projects, which covers several ways to optimize feature point matching; I record them here.

Once feature point detection is finished (for detection itself, see: Learning OpenCV: BOW feature extraction functions (feature points)), the matching procedure begins.

 

 

1. OpenCV provides two kinds of matchers

• Brute-force matcher (cv::BFMatcher) 

• Flann-based matcher (cv::FlannBasedMatcher)

The brute-force matcher finds, for each descriptor in set one, the closest descriptor in set two by exhaustive comparison;

the FLANN-based matcher searches with a fast approximate nearest-neighbor algorithm (backed by the fast third-party FLANN library).

Set one is usually called the train set, corresponding to the template image; set two is called the query set, corresponding to the target image in which the template is sought.

To speed things up, you can train a matcher before calling the matching functions. The training step lets cv::FlannBasedMatcher build an index tree for the descriptors, which pays off enormously when matching large amounts of data (for example, looking up a matching image in a dataset of hundreds of images). The brute-force matcher does nothing in this step; it simply keeps the train descriptors in memory.
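A minimal sketch of this train-then-match flow, using the Python bindings for brevity (the random matrices are stand-ins for real descriptor output from a detector such as SIFT):

import cv2
import numpy as np

# Stand-in descriptors; in practice these come from a feature detector
des_train = np.random.rand(500, 128).astype(np.float32)   # template image
des_query = np.random.rand(400, 128).astype(np.float32)   # target image

# FLANN-based matcher: add the train descriptors, build the index, then match
FLANN_INDEX_KDTREE = 1
matcher = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
                                dict(checks=50))
matcher.add([des_train])
matcher.train()                       # the index tree is built here
matches = matcher.match(des_query)    # queries run against the trained set
print(len(matches))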

 

 

2. During matching, the following cv::DescriptorMatcher functions can be used:

 

  • Simple search for the best match: void match(const Mat& queryDescriptors, vector<DMatch>& matches, const vector<Mat>& masks=vector<Mat>());
  • Find the K nearest matches for each descriptor: void knnMatch(const Mat& queryDescriptors, vector<vector<DMatch> >& matches, int k, const vector<Mat>& masks=vector<Mat>(), bool compactResult=false);
  • Find matches whose descriptor distance is below a given threshold: void radiusMatch(const Mat& queryDescriptors, vector<vector<DMatch> >& matches, float maxDistance, const vector<Mat>& masks=vector<Mat>(), bool compactResult=false);
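In the Python bindings the same three calls look roughly like this (continuing the trained matcher from the sketch above; the 0.4 radius is an arbitrary illustrative threshold):

best = matcher.match(des_query)               # one best match per descriptor
knn  = matcher.knnMatch(des_query, k=2)       # the two nearest matches each
near = matcher.radiusMatch(des_query, 0.4)    # all matches within distance 0.4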
 

 

3. The matching result contains many false matches, which come in two kinds:

 

  • False-positive matches: non-corresponding feature points reported as matches (these we can attack and largely eliminate)
  • False-negative matches: genuine correspondences that go unreported (nothing can be done here, since the matching algorithm has already rejected them)
To eliminate false-positive matches, the following two techniques are used:
  • Cross-match filter:
In OpenCV, the cv::BFMatcher class already supports cross-checking; construct the cv::BFMatcher with its second parameter set to true:
cv::Ptr<cv::DescriptorMatcher> matcher(new cv::BFMatcher(cv::NORM_HAMMING, true));
The result after the cross-match filter: (image)
  • Ratio test
Use KNN matching with K=2, so each match yields the two closest descriptors. Then compute the ratio between the closest and the second-closest distance; only when that ratio clears a preset threshold is the match accepted as final.

 

void PatternDetector::getMatches(const cv::Mat& queryDescriptors, std::vector<cv::DMatch>& matches)
{
    matches.clear();
    if (enableRatioTest)
    {
        // To avoid NaNs when the best match has zero distance,
        // we will use the inverse ratio.
        const float minRatio = 1.f / 1.5f;
        // KNN match will return the 2 nearest matches for each query descriptor
        m_matcher->knnMatch(queryDescriptors, m_knnMatches, 2);
        for (size_t i = 0; i < m_knnMatches.size(); i++)
        {
            const cv::DMatch& bestMatch   = m_knnMatches[i][0];
            const cv::DMatch& betterMatch = m_knnMatches[i][1];
            float distanceRatio = bestMatch.distance / betterMatch.distance;
            // Pass only matches where the distance ratio between the
            // nearest matches is greater than 1.5 (distinct criterion)
            if (distanceRatio < minRatio)
            {
                matches.push_back(bestMatch);
            }
        }
    }
    else
    {
        // Perform a regular match
        m_matcher->match(queryDescriptors, matches);
    }
}


 
4. Homography estimation

 

To further improve matching accuracy, random sample consensus (RANSAC) can be applied.
Because we are working with a single image (a planar object), we can treat it as rigid and look for a homography transformation between the feature points of the pattern image and those of the query image. Use cv::findHomography to find this homography, with RANSAC selecting the best homography matrix. (Since the function is fed feature points containing both correct and incorrect matches, the computed homography depends on the accuracy of the reprojection.)
bool PatternDetector::refineMatchesWithHomography(
    const std::vector<cv::KeyPoint>& queryKeypoints,
    const std::vector<cv::KeyPoint>& trainKeypoints,
    float reprojectionThreshold,
    std::vector<cv::DMatch>& matches,
    cv::Mat& homography)
{
    const int minNumberMatchesAllowed = 8;
    if (matches.size() < minNumberMatchesAllowed)
        return false;

    // Prepare data for cv::findHomography
    std::vector<cv::Point2f> srcPoints(matches.size());
    std::vector<cv::Point2f> dstPoints(matches.size());
    for (size_t i = 0; i < matches.size(); i++)
    {
        srcPoints[i] = trainKeypoints[matches[i].trainIdx].pt;
        dstPoints[i] = queryKeypoints[matches[i].queryIdx].pt;
    }

    // Find the homography matrix and get the inliers mask
    // (cv::RANSAC is the constant meant for findHomography; the book's
    // CV_FM_RANSAC belongs to findFundamentalMat, though the values coincide)
    std::vector<unsigned char> inliersMask(srcPoints.size());
    homography = cv::findHomography(srcPoints,
                                    dstPoints,
                                    cv::RANSAC,
                                    reprojectionThreshold,
                                    inliersMask);

    // Keep only the inlier matches
    std::vector<cv::DMatch> inliers;
    for (size_t i = 0; i < inliersMask.size(); i++)
    {
        if (inliersMask[i])
            inliers.push_back(matches[i]);
    }
    matches.swap(inliers);
    return matches.size() > minNumberMatchesAllowed;
}
 
The filtered result after the homography transformation: (image)
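The same refinement is compact in Python; a sketch under the same assumptions (kp_train, kp_query and matches are hypothetical outputs of the earlier matching step):

import cv2
import numpy as np

def refine_matches_with_homography(kp_train, kp_query, matches, reproj_thresh=3.0):
    if len(matches) < 8:
        return [], None
    src = np.float32([kp_train[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_query[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
    return inliers, H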

 


 

