Building OpenCV and opencv_contrib


When working on feature matching and other image-processing projects, feature-extraction algorithms such as SURF and ORB are often needed. SURF in particular requires the xfeatures2d.hpp header and the corresponding library, and since version 3.0 this functionality has been split out of OpenCV into the separate opencv_contrib repository, so it has to be built from source before it can be used. The steps for building OpenCV together with opencv_contrib are described below.

First, open the OpenCV organization page on GitHub: OpenCV · GitHub.

Then open the opencv and opencv_contrib repository pages and download the 5.x source archive from each.

Extract both downloaded archives into a suitable folder, and create a new build folder (opencv-5.x-build in this article) to hold the generated build files.

Open CMake (cmake-gui). In the "Where is the source code" field, enter the opencv-5.x source directory; in the "Where to build the binaries" field, enter the newly created opencv-5.x-build directory. Select the Visual Studio 17 2022 generator and click Configure.

When Configure finishes, the option list appears. Check the BUILD_opencv_world entry, set OPENCV_EXTRA_MODULES_PATH to the modules directory inside opencv_contrib-5.x, and then click Generate.
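
The same Configure/Generate step can also be driven from the command line instead of the GUI. The following is only a sketch: it assumes cmake is on the PATH and that the sources, the contrib modules, and the build folder follow the example layout under E:\opencv-5.x used in this article, so adjust the paths to your own setup.

cmake -S E:\opencv-5.x\opencv-5.x -B E:\opencv-5.x\opencv-5.x-build ^
      -G "Visual Studio 17 2022" -A x64 ^
      -D BUILD_opencv_world=ON ^
      -D OPENCV_EXTRA_MODULES_PATH=E:\opencv-5.x\opencv_contrib-5.x\modules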


When Generate has completed, go to the build directory (opencv-5.x-build) and locate the OpenCV.sln solution file.

Open the solution in Visual Studio 2022, select the Release x64 configuration, and choose Build > Build Solution to start compiling. Wait for the build to finish.

Then, in Solution Explorer, expand the CMakeTargets folder, right-click the INSTALL project, and choose Project Only > Build Only INSTALL. This runs the install step, which copies the headers, import libraries, and DLLs into the install directory.
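
Both build steps (building the solution and then the INSTALL target) can also be run without opening Visual Studio, using CMake's build driver. Again this is only a sketch with the example paths from above:

cmake --build E:\opencv-5.x\opencv-5.x-build --config Release
cmake --build E:\opencv-5.x\opencv-5.x-build --config Release --target INSTALL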


After that, edit the project properties in Visual Studio. Under VC++ Directories > Include Directories, add: E:\opencv-5.x\opencv-5.x-build\install\include and E:\opencv-5.x\opencv-5.x-build\install\include\opencv2

Under VC++ Directories > Library Directories, add: E:\opencv-5.x\opencv-5.x-build\install\x64\vc17\lib

Under Linker > Input > Additional Dependencies, add opencv_world500.lib. This completes the configuration of a project.
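
Because BUILD_opencv_world produces a DLL, the compiled program must also be able to find opencv_world500.dll at run time. One option is to copy the DLL next to the executable; another is to add the install bin directory to PATH before running. The bin path below is an assumption that mirrors the lib path above, so check it against your own install folder:

set PATH=E:\opencv-5.x\opencv-5.x-build\install\x64\vc17\bin;%PATH%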

To verify that the environment is configured correctly, the ORB matching algorithm is used as a test. The code implementing ORB matching is as follows:

#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <opencv2/imgproc/imgproc.hpp>

// Including the contrib headers checks that opencv_contrib was built in
#include <opencv2/xfeatures2d.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>

#include <iostream>

#include <cmath>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    double start = static_cast<double>(getTickCount());

    Mat src1, gray1, src2, gray2;
    src1 = imread("img_0.bmp");
    src2 = imread("img_1.bmp");
    if (src1.empty() || src2.empty())
    {
        cout << "Could not load img_0.bmp / img_1.bmp" << endl;
        return -1;
    }

    cvtColor(src1, gray1, COLOR_BGR2GRAY);
    cvtColor(src2, gray2, COLOR_BGR2GRAY);

    // The morphological gradient keeps edge structure, which is shared between
    // the visible-light and infrared images and helps ORB find matching points
    morphologyEx(gray1, gray1, MORPH_GRADIENT, Mat());
    morphologyEx(gray2, gray2, MORPH_GRADIENT, Mat());

    vector<KeyPoint> keypoints1, keypoints2;
    Mat image1_descriptors, image2_descriptors;

    // Extract ORB keypoints and compute binary descriptors
    Ptr<ORB> orb = ORB::create(500);
    orb->setFastThreshold(0);

    orb->detectAndCompute(gray1, Mat(), keypoints1, image1_descriptors);
    orb->detectAndCompute(gray2, Mat(), keypoints2, image2_descriptors);

    BFMatcher matcher(NORM_HAMMING, true);   // Hamming distance as the similarity measure, with cross-check

    vector<DMatch> matches;
    matcher.match(image1_descriptors, image2_descriptors, matches);

    sort(matches.begin(), matches.end());   // DMatch sorts by distance, so the best matches come first

    Mat match_img;

    // Record the query/train indices of each match
    vector<int> queryIdxs(matches.size()), trainIdxs(matches.size());
    for (size_t i = 0; i < matches.size(); i++)
    {
        queryIdxs[i] = matches[i].queryIdx;
        trainIdxs[i] = matches[i].trainIdx;
    }

    Mat H12;  // homography mapping image 1 points onto image 2

    vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
    vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);

    int ransacReprojThreshold = 5;  // RANSAC reprojection (rejection) threshold in pixels
    H12 = findHomography(Mat(points1), Mat(points2), cv::RANSAC, ransacReprojThreshold);
    Mat points1t; vector<char> matchesMask(matches.size(), 0);
    perspectiveTransform(Mat(points1), points1t, H12);

    int mask_sum = 0;

    for (size_t i1 = 0; i1 < points1.size(); i1++)  // keep only the inliers
    {
        if (norm(points2[i1] - points1t.at<Point2f>((int)i1, 0)) <= ransacReprojThreshold) // mark the inliers
        {
            matchesMask[i1] = 1;
            mask_sum++;
        }
    }

    Mat Mat_img;
    // Draw only the masked (inlier) matches in red
    drawMatches(src1, keypoints1, src2, keypoints2, matches, Mat_img, Scalar(0, 0, 255), Scalar::all(-1), matchesMask);

    imshow("result", Mat_img);

    imwrite("result.png", Mat_img);

    double time = ((double)getTickCount() - start) / getTickFrequency();
    cout << "The running time is: " << time << " seconds" << endl;

    cout << "Feature points found in picture 1: " << keypoints1.size() << endl;
    cout << "Feature points found in picture 2: " << keypoints2.size() << endl;
    cout << "Match results found in total: " << matches.size() << endl;
    cout << "Correct (inlier) match results: " << mask_sum << endl;

    waitKey(0);
    return 0;
}

Given a visible-light image and its corresponding infrared image to be registered, running the code above draws the inlier matches between the two and saves the result as result.png.

As the result shows, the environment is configured correctly.

