Using OpenCV Java with Eclipse
http://docs.opencv.org/2.4/doc/tutorials/introduction/java_eclipse/java_eclipse.html
Since version 2.4.4, OpenCV supports Java. In this tutorial I will explain how to set up a development environment for using OpenCV Java with Eclipse on Windows, so you can enjoy the benefits of a garbage-collected, highly refactorable (rename variable, extract method, and so on) modern language that lets you write code with less effort and fewer mistakes. Here we go.
Configuring Eclipse
First, obtain a fresh release of OpenCV from the download page and extract it to a simple location like C:\OpenCV-2.4.6\. I am using version 2.4.6, but the steps are more or less the same for other versions.
Now we will define OpenCV as a user library in Eclipse, so we can reuse the configuration for any project. Launch Eclipse and select Window –> Preferences from the menu.

Navigate under Java –> Build Path –> User Libraries and click New....

Enter a name, e.g. OpenCV-2.4.6, for your new library.

Now select your new user library and click Add External JARs....

Browse to C:\OpenCV-2.4.6\build\java\ and select opencv-246.jar. After adding the jar, expand opencv-246.jar, select Native library location, and press Edit....

Select External Folder... and browse to the folder C:\OpenCV-2.4.6\build\java\x64. If you have a 32-bit system, select the x86 folder instead of x64.

Your user library configuration should look like this:

Testing the configuration on a new Java project
Now start creating a new Java project.

On the Java Settings step, under the Libraries tab, click Add Library..., select OpenCV-2.4.6, then click Finish.


Libraries should look like this:

Now that you have created and configured a new Java project, it is time to test it. Create a new Java file. Here is some starter code for your convenience:
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class Hello {
    public static void main( String[] args ) {
        System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
        Mat mat = Mat.eye( 3, 3, CvType.CV_8UC1 );
        System.out.println( "mat = " + mat.dump() );
    }
}
When you run the code you should see a 3x3 identity matrix as output.

That is it. Whenever you start a new project, just add the OpenCV user library that you have defined to your project and you are good to go. Enjoy your powerful, less painful development environment :)
http://www.cnblogs.com/lidabo/p/3501285.html
OpenCV 3.1.0 + opencv_contrib Configuration and a SIFT Test
Because I needed some of the newer tracking algorithms, I spent the last couple of days installing OpenCV 3.1, configuring opencv_contrib, and running the SIFT algorithm to verify that the setup succeeded.
1. Installing and configuring OpenCV 3.1
Not much to say here; if you are unfamiliar with the process, see Qianmo's blog post: http://blog.csdn.net/poem_qianmo/article/details/19809337
2. Installing and configuring opencv_contrib
Since OpenCV 3, many of the newer features have been moved into the opencv_contrib repository. Configuring this library requires recompiling OpenCV; for that part, see this tutorial: http://blog.csdn.net/linshuhe1/article/details/51221015
Two additions to that tutorial: A. The cmake step often fails because the download of ippicv_windows_20151201.zip times out (a network issue inside China); you can download the file directly from http://download.csdn.net/detail/qjj2857/9495013 instead. B. When the tutorial configures the include and library directories at the end, it does not mention adding the environment variables, which are also required here. And once everything is configured, do not forget to restart your computer.
3. Write a program to test whether the configuration succeeded
In OpenCV 3.1 the SIFT implementation lives in the opencv_contrib library, so we will use it for a simple test.
References:
1. cv::xfeatures2d::SIFT Class Reference: http://docs.opencv.org/3.1.0/d5/d3c/classcv_1_1xfeatures2d_1_1SIFT.html#gsc.tab=0
2. OpenCV3.1 xfeatures2d::SIFT usage: http://blog.csdn.net/lijiang1991/article/details/50855279
The program:
#include <iostream>
#include <opencv2/opencv.hpp>       // core headers
#include <opencv2/xfeatures2d.hpp>

using namespace cv;                 // cv namespace
using namespace std;

int main()
{
    // Create a SIFT detector through the Feature2D interface
    Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();

    // Read the input images
    Mat img_1 = imread("1.jpg");
    Mat img_2 = imread("2.jpg");

    // Detect the keypoints
    vector<KeyPoint> keypoints_1, keypoints_2;
    f2d->detect(img_1, keypoints_1);
    f2d->detect(img_2, keypoints_2);

    // Calculate descriptors (feature vectors)
    Mat descriptors_1, descriptors_2;
    f2d->compute(img_1, keypoints_1, descriptors_1);
    f2d->compute(img_2, keypoints_2, descriptors_2);

    // Match descriptor vectors using BFMatcher
    BFMatcher matcher;
    vector<DMatch> matches;
    matcher.match(descriptors_1, descriptors_2, matches);

    // Draw the matched keypoints
    Mat img_matches;
    drawMatches(img_1, keypoints_1, img_2, keypoints_2, matches, img_matches);
    imshow("Matches", img_matches);

    // Wait for any key press
    waitKey(0);
}
Original images:
Matching result:
-------------------- Update 2016/8/12 --------------------
1. For installing and configuring OpenCV 3.1 and opencv_contrib under Ubuntu, see:
The official guide: Installation in Linux
http://www.cnblogs.com/asmer-stone/p/5089764.html
Two points in the above post deserve attention:
A. As described there, in step 3 the build folder should be created inside the ~/opencv/opencv folder, and cmake should be run with the author's example OPENCV_EXTRA_MODULES_PATH=~/opencv/opencv_contrib/modules (note that the "<>" must be removed; the trailing .. means the OpenCV source is in the parent directory). Of course, if you understand the general cmake invocation cmake [optional] <opencv source directory>, you can arrange the directories however you like.
B. As on Windows, cmake may fail because ippicv_linux_20151201.tgz cannot be downloaded. You can fetch it from http://download.csdn.net/download/lx928525166/9479919 and place it in the corresponding folder.
2. Now assume everything is installed and configured. Since on Windows we are used to working in an IDE, under Ubuntu I chose Eclipse as the development environment; here is a quick note on configuring OpenCV in Eclipse.
Start with the official guide: http://docs.opencv.org/3.1.0/d7/d16/tutorial_linux_eclipse.html That should get you most of the way, and there are plenty of similar write-ups online.
A few small problems may come up along the way; I personally ran into two:
A. The error undefined reference to symbol '_ZN2cv6imreadERKNS_6StringEi'; see: http://answers.opencv.org/question/46755/first-example-code-error/
B. The error error while loading shared libraries: libopencv_core.so.3.0: cannot open shared object file: No such file or directory; see: http://stackoverflow.com/questions/27907343/error-while-loading-shared-libraries-libopencv-core-so-3-0
3. Once everything is configured, the Windows code above runs here unchanged. See the screenshot below:
4. Running the other sample programs under Ubuntu: taking the C++ samples as an example, go to opencv/samples/cpp/example_cmake, which already provides a Makefile; running make produces the executable. The other C++ samples work similarly.
Where Did SIFT and SURF Go in OpenCV 3?
If you’ve had a chance to play around with OpenCV 3 (and do a lot of work with keypoint detectors and feature descriptors) you may have noticed that the SIFT and SURF implementations are no longer included in the OpenCV 3 library by default.
Unfortunately, you probably learned this lesson the hard way by opening up a terminal, importing OpenCV, and then trying to instantiate your favorite keypoint detector, perhaps using code like the following:
$ python
>>> import cv2
>>> detector = cv2.FeatureDetector_create("SIFT")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'FeatureDetector_create'
Oh no! There is no longer a cv2.FeatureDetector_create method!
The same is true for our cv2.DescriptorExtractor_create function as well:
>>> extractor = cv2.DescriptorExtractor_create("SIFT")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'DescriptorExtractor_create'
Furthermore, cv2.SIFT_create and cv2.SURF_create will fail as well:
>>> cv2.SIFT_create()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'SIFT_create'
>>> cv2.SURF_create()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'SURF_create'
I’ll be honest — this had me scratching my head at first. How am I supposed to access SIFT, SURF, and my other favorite keypoint detectors and local invariant descriptors if cv2.FeatureDetector_create and cv2.DescriptorExtractor_create have been removed?
The cv2.FeatureDetector_create and cv2.DescriptorExtractor_create were (and still are) methods I used all the time. And personally, I really liked the OpenCV 2.4.X implementation. All you needed to do was pass in a string and the factory method would build the instantiation for you. You could then tune the parameters using the getter and setter methods of the keypoint detector or feature descriptor.
Furthermore, these methods have been part of OpenCV 2.4.X for many years. Why in the world were they removed from the default install? And where were they moved to?
In the remainder of this blog post, I’ll detail why certain keypoint detectors and local invariant descriptors were removed from OpenCV 3.0 by default. And I’ll also show you where you can find SIFT, SURF, and other detectors and descriptors in the new version of OpenCV.
Why were SIFT and SURF removed from the default install of OpenCV 3.0?
SIFT and SURF are examples of algorithms that OpenCV calls “non-free” modules. These algorithms are patented by their respective creators, and while they are free to use in academic and research settings, you should technically be obtaining a license/permission from the creators if you are using them in a commercial (i.e. for-profit) application.
With OpenCV 3 came a big push to move many of these “non-free” modules out of the default OpenCV install and into the opencv_contrib package. The opencv_contrib package contains implementations of algorithms that are either patented or in experimental development.
The algorithms and associated implementations in opencv_contrib are not installed by default and you need to explicitly enable them when compiling and installing OpenCV to obtain access to them.
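Because contrib availability depends entirely on how your OpenCV build was compiled, it can be handy to probe for the module at runtime rather than crash later. Here is a minimal sketch; the helper name is my own, not part of OpenCV:

```python
def has_contrib_sift():
    """Return True when the installed cv2 exposes the contrib SIFT factory.

    hasattr short-circuits, so this is safe on builds without xfeatures2d
    and on machines where opencv-python is not installed at all.
    """
    try:
        import cv2
    except ImportError:
        return False
    return hasattr(cv2, "xfeatures2d") and hasattr(cv2.xfeatures2d, "SIFT_create")

print(has_contrib_sift())
```

If this prints False, the sections below explain how to rebuild with opencv_contrib enabled.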
Personally, I’m not too crazy about this move.
Yes, I understand that including patented algorithms inside an open source library may raise a few eyebrows. But algorithms such as SIFT and SURF are pervasive across much of computer vision. And more importantly, the OpenCV implementations of SIFT and SURF are used daily by academics and researchers to evaluate new algorithms for image classification, Content-Based Image Retrieval, and so on. By not including these algorithms by default, more harm than good is done (at least in my opinion).
How do I get access to SIFT and SURF in OpenCV 3?
To get access to the original SIFT and SURF implementations found in OpenCV 2.4.X, you’ll need to pull down both the opencv and opencv_contrib repositories from GitHub and then compile and install OpenCV 3 from source.
Luckily, compiling OpenCV from source is easier than it used to be. I have gathered install instructions for Python and OpenCV for many popular operating systems over on the OpenCV 3 Tutorials, Resources, and Guides page — just scroll down to the Install OpenCV 3 and Python section and find the appropriate Python version (either Python 2.7+ or Python 3+) for your operating system.
How do I use SIFT and SURF with OpenCV 3?
So now that you have installed OpenCV 3 with the opencv_contrib package, you should have access to the original SIFT and SURF implementations from OpenCV 2.4.X, only this time they’ll be in the xfeatures2d sub-module through the cv2.xfeatures2d.SIFT_create and cv2.xfeatures2d.SURF_create functions.
To confirm this, open up a shell, import OpenCV, and execute the following commands (assuming you have an image named test_image.jpg in your current directory, of course):
$ python
>>> import cv2
>>> image = cv2.imread("test_image.jpg")
>>> gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
>>> sift = cv2.xfeatures2d.SIFT_create()
>>> (kps, descs) = sift.detectAndCompute(gray, None)
>>> print("# kps: {}, descriptors: {}".format(len(kps), descs.shape))
# kps: 274, descriptors: (274, 128)
>>> surf = cv2.xfeatures2d.SURF_create()
>>> (kps, descs) = surf.detectAndCompute(gray, None)
>>> print("# kps: {}, descriptors: {}".format(len(kps), descs.shape))
# kps: 393, descriptors: (393, 64)
If all goes well, you should be able to instantiate the SIFT and SURF keypoint detectors and local invariant descriptors without error.
It’s also important to note that by using opencv_contrib you will not be interfering with any of the other keypoint detectors and local invariant descriptors included in OpenCV 3. You’ll still be able to access KAZE, AKAZE, BRISK, etc. without an issue:
>>> kaze = cv2.KAZE_create()
>>> (kps, descs) = kaze.detectAndCompute(gray, None)
>>> print("# kps: {}, descriptors: {}".format(len(kps), descs.shape))
# kps: 359, descriptors: (359, 64)
>>> akaze = cv2.AKAZE_create()
>>> (kps, descs) = akaze.detectAndCompute(gray, None)
>>> print("# kps: {}, descriptors: {}".format(len(kps), descs.shape))
# kps: 192, descriptors: (192, 61)
>>> brisk = cv2.BRISK_create()
>>> (kps, descs) = brisk.detectAndCompute(gray, None)
>>> print("# kps: {}, descriptors: {}".format(len(kps), descs.shape))
# kps: 361, descriptors: (361, 64)
Summary
In this blog post we learned that OpenCV has removed the cv2.FeatureDetector_create and cv2.DescriptorExtractor_create functions from the library. Furthermore, the SIFT and SURF implementations have also been removed from the default OpenCV 3 install.
The reason for SIFT and SURF removal is due to what OpenCV calls “non-free” algorithms. Both SIFT and SURF are patented algorithms, meaning that you should technically be getting permission to use them in commercial algorithms (they are free to use for academic and research purposes though).
Because of this, OpenCV has made the decision to move patented algorithms (along with experimental implementations) to the opencv_contrib package. This means that to obtain access to SIFT and SURF, you’ll need to compile and install OpenCV 3 from source with opencv_contrib support enabled. Luckily, this isn’t too challenging with the help of my OpenCV 3 install guides.
Once you have installed OpenCV 3 with opencv_contrib support you’ll be able to find your favorite SIFT and SURF implementations in the xfeatures2d package through the cv2.xfeatures2d.SIFT_create() and cv2.xfeatures2d.SURF_create() functions.
http://www.pyimagesearch.com/2015/07/16/where-did-sift-and-surf-go-in-opencv-3/
http://blog.csdn.net/garfielder007/article/details/51260087
Extracting SIFT features from an image with the OpenCV Java API - anexplore
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.highgui.Highgui;
import org.opencv.features2d.*;

public class ExtractSIFT {
    public static void main( String[] args ) {
        System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
        Mat test_mat = Highgui.imread("pfau.jpg");
        Mat desc = new Mat();
        FeatureDetector fd = FeatureDetector.create(FeatureDetector.SIFT);
        MatOfKeyPoint mkp = new MatOfKeyPoint();
        fd.detect(test_mat, mkp);
        DescriptorExtractor de = DescriptorExtractor.create(DescriptorExtractor.SIFT);
        de.compute(test_mat, mkp, desc); // extract SIFT descriptors
        System.out.println(desc.cols());
        System.out.println(desc.rows());
    }
}
Learning OpenCV: Optimizing KeyPoint Matching
Chapter 3 of Mastering OpenCV with Practical Computer Vision Projects covers several ways to optimize feature point matching; I am recording them here.
After keypoint detection is complete (for keypoint detection, see: Learning OpenCV: BOW feature extraction functions (keypoints)), the matching procedure begins.
1. OpenCV provides two matching approaches:
• Brute-force matcher (cv::BFMatcher)
• Flann-based matcher (cv::FlannBasedMatcher)
The brute-force matcher simply finds, for each descriptor in the first set, the closest descriptor in the second set by exhaustive comparison;
the FLANN-based matcher uses a fast approximate nearest-neighbor search (via the third-party FLANN library).
The first set is usually called the train set, corresponding to the template image; the second set is called the query set, corresponding to the target image in which the template is sought.
To improve matching speed, you can train a matcher before calling the matching functions. The training step lets cv::FlannBasedMatcher build an index tree for the descriptors, which pays off when matching against large amounts of data (for example, finding matches in a dataset of hundreds of images). The brute-force matcher does nothing at this stage; it merely stores the train descriptors in memory.
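To make the brute-force behavior concrete, here is a minimal NumPy sketch of what the matcher does conceptually: for every query descriptor, find the train descriptor at the smallest L2 distance. This is only an illustration of the idea; the real cv::BFMatcher is far more optimized and returns DMatch objects.

```python
import numpy as np

def brute_force_match(query, train):
    """For each query descriptor, return (query index, nearest train index,
    distance), by exhaustive L2 comparison against every train descriptor."""
    # pairwise L2 distances, shape (n_query, n_train)
    dists = np.linalg.norm(query[:, None, :] - train[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    return [(i, int(j), float(dists[i, j])) for i, j in enumerate(nearest)]

# toy 2-D "descriptors": two query points matched against three train points
query = np.array([[0.0, 0.0], [5.0, 5.0]])
train = np.array([[0.1, 0.0], [4.0, 4.0], [9.0, 9.0]])
matches = brute_force_match(query, train)
print(matches)  # query 0 -> train 0, query 1 -> train 1
```

Real descriptors are higher-dimensional (128 floats for SIFT), but the nearest-neighbor logic is identical.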
2. During matching you can use the following cv::DescriptorMatcher methods:
- Find the single best match for each descriptor: void match(const Mat& queryDescriptors, vector<DMatch>& matches, const vector<Mat>& masks=vector<Mat>());
- Find the k nearest matches for each descriptor: void knnMatch(const Mat& queryDescriptors, vector<vector<DMatch>>& matches, int k, const vector<Mat>& masks=vector<Mat>(), bool compactResult=false);
- Find all matches whose descriptor distance is below a given threshold: void radiusMatch(const Mat& queryDescriptors, vector<vector<DMatch>>& matches, float maxDistance, const vector<Mat>& masks=vector<Mat>(), bool compactResult=false);
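The semantics of knnMatch and radiusMatch can be sketched in a few lines of NumPy. Again, this is a conceptual illustration of what the two calls return, not OpenCV's implementation:

```python
import numpy as np

def knn_match(query, train, k):
    """The k nearest train descriptors per query descriptor (cf. knnMatch)."""
    d = np.linalg.norm(query[:, None, :] - train[None, :, :], axis=2)
    return [[(int(j), float(d[i, j])) for j in d[i].argsort()[:k]]
            for i in range(len(query))]

def radius_match(query, train, max_distance):
    """All train descriptors closer than max_distance (cf. radiusMatch)."""
    d = np.linalg.norm(query[:, None, :] - train[None, :, :], axis=2)
    return [[(int(j), float(d[i, j])) for j in np.where(d[i] < max_distance)[0]]
            for i in range(len(query))]

query = np.array([[0.0, 0.0]])
train = np.array([[1.0, 0.0], [0.0, 2.0], [5.0, 5.0]])
knn = knn_match(query, train, k=2)        # the two nearest: indices 0 and 1
near = radius_match(query, train, 3.0)    # within distance 3: indices 0 and 1
```

Note that both return one list per query descriptor, which is why the OpenCV signatures take a vector<vector<DMatch>> rather than a flat vector.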
3. Matching results contain many false matches, which fall into two categories:
- False-positive matches: non-corresponding feature points reported as matches (these are the ones we can work on and largely eliminate)
- False-negative matches: genuinely corresponding feature points that go undetected (nothing can be done about these, since the matching algorithm has already rejected them)
Two common filters against false positives are:
- Cross-match filter: keep a match only if matching in both directions (query to train and train to query) agrees on the same pair
- Ratio test: keep a match only if the distance to the best candidate is significantly smaller than the distance to the second-best candidate
To improve matching precision further, you can apply random sample consensus (RANSAC).
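The ratio test above can be sketched as follows. This is a minimal pure-Python illustration of the filtering logic applied to 2-NN results (in OpenCV you would feed it the output of knnMatch with k=2); the 0.75 threshold is a common choice from Lowe's SIFT paper, not a fixed constant:

```python
def ratio_test(knn_matches, ratio=0.75):
    """Lowe's ratio test: keep a match only when the best distance is
    clearly smaller than the second-best distance, i.e. the match is
    unambiguous rather than one of several similar-looking candidates."""
    good = []
    for (best_j, best_d), (_, second_d) in knn_matches:
        if best_d < ratio * second_d:
            good.append((best_j, best_d))
    return good

# each entry: the 2 nearest (train index, distance) pairs for one query point
knn = [[(3, 10.0), (8, 40.0)],   # unambiguous: 10 < 0.75 * 40 -> kept
       [(5, 30.0), (9, 32.0)]]   # ambiguous:   30 > 0.75 * 32 -> rejected
good = ratio_test(knn)
print(good)  # [(3, 10.0)]
```

The surviving matches are then typically passed to a RANSAC-based geometric check (e.g. homography estimation) to discard the remaining outliers.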
