Reference: https://blog.csdn.net/zilanpotou182/article/details/66478915 (the differences among the SIFT, SURF and ORB feature points)
1. Besides the ORB features introduced in this book, what other kinds of feature points can you find? Explain the principle of SIFT or SURF, and compare their strengths and weaknesses with ORB.
Feature points: distinctive locations in an image.
Kinds of feature points: SIFT, SURF, ORB, FAST.
SIFT fully accounts for the illumination, scale, and rotation changes that occur during image transformations, but this comes at a very high computational cost.
SURF is much faster than SIFT and very robust; its feature detection rate is higher than SIFT's, and it generally outperforms SIFT under viewpoint, illumination, and scale changes.
ORB is the fastest of the three, but it handles scale changes poorly, because ORB's scale invariance is weak (it relies only on a coarse image pyramid).
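To make the SIFT principle a little more concrete (this summary comes from Lowe's original paper, not from the post above): SIFT detects keypoints as local extrema, over both space and scale, of a difference-of-Gaussians scale space

$$D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y),$$

where $G$ is a Gaussian kernel, $I$ the image, and $k$ the scale step. Each keypoint is then assigned a dominant orientation from local gradient histograms, which is what buys the scale and rotation invariance at the computational cost noted above.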
Quantitative comparison:
Speed: ORB >> SURF >> SIFT (roughly an order of magnitude between each)
Rotation robustness: SURF > ORB ≈ SIFT (≈ means roughly equal)
Blur robustness: SURF > ORB ≈ SIFT
Scale-change robustness: SURF > SIFT > ORB (ORB has little scale invariance)
In summary, the choice of feature point depends on the application: if real-time performance is critical, choose ORB, but try to shoot roughly head-on, since ORB copes poorly with scale change; if the real-time requirement is only moderate, SURF is a good choice; SIFT is rarely necessary.
2. Write a program that calls other kinds of feature points in OpenCV. Measure the running time on your machine when extracting 1000 feature points.
First, note that the SIFT and SURF detectors live in the nonfree module, so we need that module in order to call them.
In OpenCV 3, however, SURF/SIFT and a few other components were moved into a separate repository (opencv_contrib).
So we first need to install opencv_contrib (skip this step if you are using OpenCV 2).
For the installation procedure, see: installing opencv_contrib (Ubuntu 16.04).
The code for the three extraction methods (ORB, SIFT, SURF) follows:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
// #include <opencv2/nonfree/features2d.hpp> // SIFT/SURF location in OpenCV2
#include <opencv2/xfeatures2d/nonfree.hpp>   // SIFT/SURF in OpenCV3 (opencv_contrib)
#include <chrono>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    if (argc != 2)
    {
        cout << "usage: feature_extraction img1" << endl;
        return 1;
    }

    // read the image
    Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR);

    // keypoints: the positions of the features in the image
    vector<KeyPoint> keypoints_1;

    // ORB; the exercise asks for 1000 features, so nfeatures is set to 1000
    // (the remaining arguments are OpenCV's defaults)
    chrono::steady_clock::time_point ORB_t1 = chrono::steady_clock::now();
    Ptr<ORB> orb = ORB::create(1000, 1.2f, 8, 31, 0, 2, ORB::HARRIS_SCORE, 31, 20);
    orb->detect(img_1, keypoints_1); // detection must be inside the timed section
    chrono::steady_clock::time_point ORB_t2 = chrono::steady_clock::now();
    chrono::duration<double> ORB_time_used = chrono::duration_cast<chrono::duration<double>>(ORB_t2 - ORB_t1);
    cout << "extract keypoints of ORB costs: " << ORB_time_used.count() << " seconds." << endl;

    cout << "KP1 = " << keypoints_1.size() << endl; // number of keypoints
    Mat outimg1;
    drawKeypoints(img_1, keypoints_1, outimg1, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    imshow("ORB keypoints of 1.png", outimg1);

    // // SIFT
    // chrono::steady_clock::time_point SIFT_t1 = chrono::steady_clock::now();
    // Ptr<xfeatures2d::SiftFeatureDetector> siftDetector_1 = xfeatures2d::SiftFeatureDetector::create();
    // siftDetector_1->detect(img_1, keypoints_1);
    // chrono::steady_clock::time_point SIFT_t2 = chrono::steady_clock::now();
    // chrono::duration<double> SIFT_time_used = chrono::duration_cast<chrono::duration<double>>(SIFT_t2 - SIFT_t1);
    // cout << "extract keypoints of SIFT costs: " << SIFT_time_used.count() << " seconds." << endl;
    // Mat outImg;
    // drawKeypoints(img_1, keypoints_1, outImg, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    // cout << "KP1 = " << keypoints_1.size() << endl;
    // imshow("SIFT keypoints of 1.png", outImg);

    // // SURF
    // chrono::steady_clock::time_point SURF_t1 = chrono::steady_clock::now();
    // Ptr<xfeatures2d::SurfFeatureDetector> surfDetector_1 = xfeatures2d::SurfFeatureDetector::create();
    // surfDetector_1->detect(img_1, keypoints_1);
    // chrono::steady_clock::time_point SURF_t2 = chrono::steady_clock::now();
    // chrono::duration<double> SURF_time_used = chrono::duration_cast<chrono::duration<double>>(SURF_t2 - SURF_t1);
    // cout << "extract keypoints of SURF costs: " << SURF_time_used.count() << " seconds." << endl;
    // Mat outImg;
    // drawKeypoints(img_1, keypoints_1, outImg, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    // cout << "KP1 = " << keypoints_1.size() << endl;
    // imshow("SURF keypoints of 1.png", outImg);

    waitKey(0);

    return 0;
}
Experimental results (the original image and the three result screenshots are omitted):
1) ORB
2) SIFT
3) SURF
Analysis: as expected, ORB is the fastest, SURF comes second, and SIFT is the slowest.
But why do the three methods, run on the very same image, extract such different numbers of feature points? This follows from their principles: each method uses a different criterion for deciding what counts as a feature point.
SURF's criterion: it first uses the Hessian matrix to determine candidate points, and then applies non-maximum suppression.
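Concretely (summarizing Bay et al.'s SURF paper, not the post above): the Hessian at point $x$ and scale $\sigma$ is

$$\mathcal{H}(x,\sigma)=\begin{bmatrix}L_{xx}(x,\sigma) & L_{xy}(x,\sigma)\\ L_{xy}(x,\sigma) & L_{yy}(x,\sigma)\end{bmatrix},$$

where the $L$ terms are second-order Gaussian derivatives of the image. SURF approximates them with box filters ($D_{xx}$ etc.) and selects candidates as local maxima of $\det\mathcal{H}\approx D_{xx}D_{yy}-(0.9\,D_{xy})^2$. This difference in selection criterion, versus SIFT's DoG extrema and ORB's FAST corner test, is why the feature counts differ so much.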
3. We find that the ORB features provided by OpenCV are not evenly distributed over the image. Can you find or propose a way to make the distribution more uniform?
My idea: divide the image into small cells and set a quota per cell; extract features cell by cell and stop once a cell reaches its quota (this should be fast, but the drawback is obvious: strong features may be discarded in crowded cells while weak ones are kept elsewhere). Alternatively, extract all features first and then keep only the strongest features in each cell, up to the quota, as in the sketch after the next paragraph.
Another idea is to use non-maximum suppression to thin out locally dense features: compute a response for every feature point (for example, the sum of absolute differences between the point and its n neighbours), then, among nearby feature points, keep the one with the largest response and delete the rest.
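A minimal sketch of the grid-quota idea (my own illustration, not an OpenCV API; detect more points than needed first, then keep the strongest few per cell; the cell size and quota are arbitrary):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <map>
#include <vector>

using namespace cv;
using namespace std;

// Keep at most `quota` keypoints per grid cell of `cell` x `cell` pixels,
// preferring the ones with the strongest detector response.
vector<KeyPoint> distributeUniformly(const vector<KeyPoint>& kps, int cell, size_t quota)
{
    // bucket the keypoints by the grid cell they fall into
    map<pair<int, int>, vector<KeyPoint>> buckets;
    for (const KeyPoint& kp : kps)
        buckets[{int(kp.pt.x) / cell, int(kp.pt.y) / cell}].push_back(kp);

    vector<KeyPoint> result;
    for (auto& b : buckets)
    {
        // strongest responses first within each cell
        sort(b.second.begin(), b.second.end(),
             [](const KeyPoint& a, const KeyPoint& c) { return a.response > c.response; });
        for (size_t i = 0; i < b.second.size() && i < quota; ++i)
            result.push_back(b.second[i]);
    }
    return result;
}

For example, after orb->detect(img_1, keypoints_1) one could call keypoints_1 = distributeUniformly(keypoints_1, 40, 5). OpenCV's cv::KeyPointsFilter and the quad-tree distribution in ORB-SLAM2 implement more refined versions of the same idea.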
10. Implementing the PnP and ICP optimizations in Ceres
PnP:
The key to solving an optimization problem is writing out the least-squares form of its cost function.
Following the book's analysis, the least-squares problem of this exercise can be written as the reprojection-error minimization on p.162 (the formula image is not reproduced here).
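A reconstruction of that formula, using the book's notation (my transcription, so treat it as an assumption):

$$\xi^* = \arg\min_{\xi}\; \frac{1}{2}\sum_{i=1}^{n}\left\| u_i - \frac{1}{s_i}\,K \exp(\xi^{\wedge})\,P_i \right\|_2^2$$

where $u_i$ is the observed pixel position of the $i$-th feature, $P_i$ its 3D coordinates, $s_i$ the depth of the projected point, $K$ the camera intrinsics, and $\exp(\xi^{\wedge})$ the pose $(R, t)$ written as a Lie-algebra element.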
This error term compares the observed pixel position of a projection with the position obtained by projecting the 3D point under the currently estimated pose. From it we can read off the following:
Knowns: the 3D points, denoted p_3D in the program below, whose projections under the current estimate are written (u, v);
and the observed pixel coordinates of the projections, denoted p_p.
Variables to estimate: R and t.
When optimizing with Ceres we must also write down the residual explicitly:
residual[0] = u - p_p.x
residual[1] = v - p_p.y
The steps for using Ceres: 1) define the cost function model; 2) call AddResidualBlock to add every error term to the objective function; 3) for automatic differentiation, specify the dimensions of the residual and of each optimization variable; 4) once the problem is defined, call Solve.
With the workflow of the whole problem clear, we can implement the PnP optimization with Ceres, jointly optimizing the camera pose (the extrinsics R and t) and the intrinsics K. The code follows; reference: https://www.jianshu.com/p/34cb21e00264
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <Eigen/SVD>
#include <ceres/ceres.h>
#include <chrono>

#include "common/tools/rotation.h"

using namespace std;
using namespace cv;

// Step 1: define the cost function.
// 3D-2D: given n 3D points and their observed projections, estimate the camera pose.
struct cost_function
{
    cost_function(Point3f p_3D, Point2f p_p) : _p_3D(p_3D), _p_p(p_p) {}

    // compute the residual; templated so Ceres can evaluate it with Jets for auto-diff
    template <typename T>
    bool operator()(const T* const r, const T* const t, // r, t: pose parameters to optimize
                    const T* K,                         // K: intrinsics to optimize
                    T* residual) const
    {
        T p_3d[3];
        T p_Cam[3]; // coordinates of the point in the camera frame
        p_3d[0] = T(_p_3D.x);
        p_3d[1] = T(_p_3D.y);
        p_3d[2] = T(_p_3D.z);
        // AngleAxisRotatePoint() (from rotation.h in the tools folder)
        // applies the rotation only: p_Cam = R * p_3d
        AngleAxisRotatePoint(r, p_3d, p_Cam);

        p_Cam[0] = p_Cam[0] + t[0];
        p_Cam[1] = p_Cam[1] + t[1];
        p_Cam[2] = p_Cam[2] + t[2];

        const T x = p_Cam[0] / p_Cam[2];
        const T y = p_Cam[1] / p_Cam[2];
        // pixel coordinates of the projected 3D point, with K = [fx, cx, fy, cy]
        const T u = x * K[0] + K[1];
        const T v = y * K[2] + K[3];

        // observed pixel coordinates of the projection
        T u1 = T(_p_p.x);
        T v1 = T(_p_p.y);

        // residuals
        residual[0] = u - u1;
        residual[1] = v - v1;
        return true; // tell Ceres the evaluation succeeded
    }

    Point3f _p_3D;
    Point2f _p_p;
};

void find_feature_matches(
    const Mat& img_1, const Mat& img_2,
    std::vector<KeyPoint>& keypoints_1,
    std::vector<KeyPoint>& keypoints_2,
    std::vector<DMatch>& matches);

// pixel coordinates -> normalized camera coordinates
Point2d pixel2cam(const Point2d& p, const Mat& K);

void bundleAdjustment(const vector<Point3f> points_3d,
                      const vector<Point2f> points_2d, Mat K, Mat& r, Mat& t);

int main(int argc, char** argv)
{
    if (argc != 4)
    {
        cout << "usage: pose_estimation_3d2d img1 img2 depth1" << endl;
        return 1;
    }
    //-- read the images
    Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR);
    Mat img_2 = imread(argv[2], CV_LOAD_IMAGE_COLOR);

    vector<KeyPoint> keypoints_1, keypoints_2;
    vector<DMatch> matches;
    find_feature_matches(img_1, img_2, keypoints_1, keypoints_2, matches);
    cout << "found " << matches.size() << " matched point pairs" << endl;

    // build the 3D points (the depth map is a 16-bit unsigned, single-channel image)
    Mat d1 = imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);
    Mat K = (Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);
    vector<Point3f> pts_3d;
    vector<Point2f> pts_2d;
    for (DMatch m : matches)
    {
        ushort d = d1.ptr<unsigned short>(int(keypoints_1[m.queryIdx].pt.y))[int(keypoints_1[m.queryIdx].pt.x)];
        if (d == 0) // bad depth
            continue;
        float dd = d / 5000.0;
        Point2d p1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);
        pts_3d.push_back(Point3f(p1.x * dd, p1.y * dd, dd));
        pts_2d.push_back(keypoints_2[m.trainIdx].pt);
    }

    cout << "3d-2d pairs: " << pts_3d.size() << endl;

    Mat r, t;
    // solvePnP(pts_3d, pts_2d, K, Mat(), r, t, false, cv::SOLVEPNP_EPNP); // OpenCV's PnP solver; EPNP, DLS etc. are also available
    // solvePnP(pts_3d, pts_2d, K, Mat(), r, t, false, CV_ITERATIVE);
    solvePnP(pts_3d, pts_2d, K, Mat(), r, t, false);
    Mat R;
    cv::Rodrigues(r, R); // r is a rotation vector; convert it to a matrix with the Rodrigues formula

    cout << "before optimization:" << endl;
    cout << "R=" << endl << R << endl;
    cout << "t=" << endl << t << endl << endl;

    cout << "calling bundle adjustment" << endl;

    bundleAdjustment(pts_3d, pts_2d, K, r, t);

    return 0;
}

void find_feature_matches(const Mat& img_1, const Mat& img_2,
                          std::vector<KeyPoint>& keypoints_1,
                          std::vector<KeyPoint>& keypoints_2,
                          std::vector<DMatch>& matches)
{
    //-- initialization
    Mat descriptors_1, descriptors_2;
    // used in OpenCV3
    Ptr<FeatureDetector> detector = ORB::create();
    Ptr<DescriptorExtractor> descriptor = ORB::create();
    // use this if you are in OpenCV2
    // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
    // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    //-- step 1: detect Oriented FAST corner positions
    detector->detect(img_1, keypoints_1);
    detector->detect(img_2, keypoints_2);

    //-- step 2: compute BRIEF descriptors at the corner positions
    descriptor->compute(img_1, keypoints_1, descriptors_1);
    descriptor->compute(img_2, keypoints_2, descriptors_2);

    //-- step 3: match the BRIEF descriptors of the two images using the Hamming distance
    vector<DMatch> match;
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->match(descriptors_1, descriptors_2, match);

    //-- step 4: filter the matches
    double min_dist = 10000, max_dist = 0;

    // find the minimum and maximum distance over all matches,
    // i.e. the distances of the most and the least similar pairs
    for (int i = 0; i < descriptors_1.rows; i++)
    {
        double dist = match[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist);
    printf("-- Min dist : %f \n", min_dist);

    // a match is considered wrong when the descriptor distance exceeds twice the
    // minimum distance; since the minimum can be very small, use an empirical
    // value of 30 as a lower bound
    for (int i = 0; i < descriptors_1.rows; i++)
    {
        if (match[i].distance <= max(2 * min_dist, 30.0))
        {
            matches.push_back(match[i]);
        }
    }
}

Point2d pixel2cam(const Point2d& p, const Mat& K)
{
    return Point2d(
        (p.x - K.at<double>(0, 2)) / K.at<double>(0, 0),
        (p.y - K.at<double>(1, 2)) / K.at<double>(1, 1));
}

// build and solve the least-squares problem
void bundleAdjustment(const vector<Point3f> points_3d,
                      const vector<Point2f> points_2d, Mat K, Mat& r, Mat& t)
{
    cout << "start:" << endl;
    double rotation_vector[3], tranf[3]; // rotation vector r and translation t
    double k[4];
    // solvePnP returns r and t as 3x1 matrices
    rotation_vector[0] = r.at<double>(0, 0);
    rotation_vector[1] = r.at<double>(1, 0);
    rotation_vector[2] = r.at<double>(2, 0);

    tranf[0] = t.at<double>(0, 0);
    tranf[1] = t.at<double>(1, 0);
    tranf[2] = t.at<double>(2, 0);

    // intrinsics [fx, cx, fy, cy], initialized from the known calibration
    k[0] = 520.9;
    k[1] = 325.1;
    k[2] = 521.0;
    k[3] = 249.7;

    // Step 2: add each error term to the objective with AddResidualBlock.
    // The AutoDiff template arguments are the residual dimension (2) and the
    // sizes of the parameter blocks: r (3), t (3), K (4).
    ceres::Problem problem;
    for (int i = 0; i < points_3d.size(); ++i)
    {
        ceres::CostFunction* costfunction =
            new ceres::AutoDiffCostFunction<cost_function, 2, 3, 3, 4>(
                new cost_function(points_3d[i], points_2d[i]));
        problem.AddResidualBlock(costfunction, NULL, rotation_vector, tranf, k);
    }

    // Step 3: configure the solver
    ceres::Solver::Options option;
    option.linear_solver_type = ceres::DENSE_QR; // DENSE_SCHUR also works
    option.minimizer_progress_to_stdout = true;  // true: print iteration info to the screen

    ceres::Solver::Summary summary; // optimization report
    // time the solve so it can be compared with g2o
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    // Step 4: solve
    ceres::Solve(option, &problem, &summary);
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cout << "solve time cost = " << time_used.count() << " second." << endl;

    // output the results
    cout << summary.BriefReport() << endl;
    Mat Rotation_vector = (Mat_<double>(3, 1) << rotation_vector[0], rotation_vector[1], rotation_vector[2]);
    Mat Rotation_matrix;
    Rodrigues(Rotation_vector, Rotation_matrix); // rotation vector -> rotation matrix
    cout << "after optimization:" << endl;
    cout << "R = " << endl << Rotation_matrix << endl;
    cout << "t = " << tranf[0] << " " << tranf[1] << " " << tranf[2] << endl;
}
Results (output screenshot omitted):
Compared with the g2o result: the poses are essentially the same, and the optimization takes about the same time.
ICP:
Reference: https://www.jianshu.com/p/1b02e9a36cc4
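For reference, the closed-form SVD solution that pose_estimation_3d3d in the code below implements (the matrix W is the one from eq. (7.56), p.174 of the book, cited in the code comments):

$$\min_{R,t}\; \frac{1}{2}\sum_{i=1}^{n}\left\|p_i-(R\,p_i'+t)\right\|^2,$$

with centroids $\bar p=\frac{1}{n}\sum_i p_i$, $\bar p'=\frac{1}{n}\sum_i p_i'$ and de-centered points $q_i=p_i-\bar p$, $q_i'=p_i'-\bar p'$:

$$W=\sum_{i=1}^{n} q_i\,q_i'^{\top}=U\Sigma V^{\top} \;\Rightarrow\; R=UV^{\top},\qquad t=\bar p-R\,\bar p'.$$

If $\det(R)<0$, the sign of the last column of $U$ is flipped so that $R$ is a proper rotation, exactly as the code does.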
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <Eigen/SVD>
#include <ceres/ceres.h>
#include <chrono>

#include "common/tools/rotation.h"

using namespace std;
using namespace cv;

void find_feature_matches(const Mat& img_1, const Mat& img_2,
                          vector<KeyPoint>& keypoints_1,
                          vector<KeyPoint>& keypoints_2,
                          vector<DMatch>& matches);

// pixel coordinates -> normalized camera coordinates
Point2d pixel2cam(const Point2d& p, const Mat& K);

void pose_estimation_3d3d(const vector<Point3f>& pts1,
                          const vector<Point3f>& pts2,
                          Mat& r, Mat& t_inv);

// note: the parameter names here differ from those in the definition,
// which is perfectly legal in C++
void bundleAdjustment(const vector<Point3f>& points_3f,
                      const vector<Point3f>& points_2f,
                      Mat& R, Mat& t_inv);

struct cost_function_define
{
    cost_function_define(Point3f p1, Point3f p2) : _p1(p1), _p2(p2) {}

    // Note: this residual compares the *projections* of R*p1 + t and of p2 in
    // pixel coordinates (a reprojection-style error), rather than the 3D
    // point-to-point distance of a classic ICP formulation.
    template <typename T>
    bool operator()(const T* const cere_r, const T* const cere_t, T* residual) const
    {
        T p_1[3];
        T p_2[3];
        p_1[0] = T(_p1.x);
        p_1[1] = T(_p1.y);
        p_1[2] = T(_p1.z);
        AngleAxisRotatePoint(cere_r, p_1, p_2); // p_2 = R * p_1
        p_2[0] = p_2[0] + cere_t[0];
        p_2[1] = p_2[1] + cere_t[1];
        p_2[2] = p_2[2] + cere_t[2];
        const T x = p_2[0] / p_2[2];
        const T y = p_2[1] / p_2[2];
        const T u = x * 520.9 + 325.1;
        const T v = y * 521.0 + 249.7;

        T p_3[3];
        p_3[0] = T(_p2.x);
        p_3[1] = T(_p2.y);
        p_3[2] = T(_p2.z);

        const T x1 = p_3[0] / p_3[2];
        const T y1 = p_3[1] / p_3[2];

        const T u1 = x1 * 520.9 + 325.1;
        const T v1 = y1 * 521.0 + 249.7;

        residual[0] = u - u1;
        residual[1] = v - v1;
        return true; // tell Ceres the evaluation succeeded
    }

    Point3f _p1, _p2;
};

int main(int argc, char** argv)
{
    if (argc != 5)
    {
        cout << "usage: pose_estimation_3d3d img1 img2 depth1 depth2" << endl;
        return 1;
    }

    //-- read the images
    Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR);
    Mat img_2 = imread(argv[2], CV_LOAD_IMAGE_COLOR);

    vector<KeyPoint> keypoints_1, keypoints_2;
    vector<DMatch> matches;
    find_feature_matches(img_1, img_2, keypoints_1, keypoints_2, matches);
    cout << "found " << matches.size() << " matched point pairs" << endl;

    // build the 3D points (the depth maps are 16-bit unsigned, single-channel images)
    Mat depth_1 = imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);
    Mat depth_2 = imread(argv[4], CV_LOAD_IMAGE_UNCHANGED);
    Mat K = (Mat_<double>(3, 3) << 520.9, 0, 325.1,
                                   0, 521.0, 249.7,
                                   0, 0, 1);
    vector<Point3f> pts1, pts2;
    for (DMatch m : matches)
    {
        ushort d1 = depth_1.ptr<unsigned short>(int(keypoints_1[m.queryIdx].pt.y))[int(keypoints_1[m.queryIdx].pt.x)];
        ushort d2 = depth_2.ptr<unsigned short>(int(keypoints_2[m.trainIdx].pt.y))[int(keypoints_2[m.trainIdx].pt.x)];
        if (d1 == 0 || d2 == 0) // bad depth
            continue;
        Point2d p1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);
        Point2d p2 = pixel2cam(keypoints_2[m.trainIdx].pt, K);
        float dd1 = float(d1) / 5000.0;
        float dd2 = float(d2) / 5000.0;
        pts1.push_back(Point3f(p1.x * dd1, p1.y * dd1, dd1));
        pts2.push_back(Point3f(p2.x * dd2, p2.y * dd2, dd2));
    }

    cout << "3d-3d pairs: " << pts1.size() << endl;
    Mat R, t;
    pose_estimation_3d3d(pts1, pts2, R, t);
    cout << "ICP via SVD results: " << endl;
    cout << "R =" << endl << R << endl;
    cout << "t =" << endl << t << endl;
    Mat R_inv = R.t(); // R^(-1)
    Mat t_inv = -R.t() * t;
    cout << "R_inv =" << endl << R_inv << endl;
    cout << "t_inv =" << endl << t_inv << endl;

    Mat r;
    Rodrigues(R_inv, r); // R_inv -> rotation vector r
    cout << "r= " << endl << r << endl;

    // verify the pose on a few point pairs: p1 should be close to R*p2 + t
    for (int i = 0; i < 5; ++i)
    {
        cout << "p1= " << pts1[i] << endl;
        cout << "p2= " << pts2[i] << endl;
        cout << "(R*p2+t) = "
             << R * (Mat_<double>(3, 1) << pts2[i].x, pts2[i].y, pts2[i].z) + t
             << endl;
        cout << endl;
    }

    cout << "calling bundle adjustment" << endl;
    bundleAdjustment(pts1, pts2, r, t_inv);

    return 0;
}

// match feature points between the two images
void find_feature_matches(const Mat& img_1, const Mat& img_2,
                          std::vector<KeyPoint>& keypoints_1,
                          std::vector<KeyPoint>& keypoints_2,
                          std::vector<DMatch>& matches)
{
    //-- initialization
    Mat descriptors_1, descriptors_2;
    // used in OpenCV3
    Ptr<FeatureDetector> detector = ORB::create();
    Ptr<DescriptorExtractor> descriptor = ORB::create();
    // use this if you are in OpenCV2
    // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
    // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    //-- step 1: detect Oriented FAST corner positions
    detector->detect(img_1, keypoints_1);
    detector->detect(img_2, keypoints_2);

    //-- step 2: compute BRIEF descriptors at the corner positions
    descriptor->compute(img_1, keypoints_1, descriptors_1);
    descriptor->compute(img_2, keypoints_2, descriptors_2);

    //-- step 3: match the BRIEF descriptors of the two images using the Hamming distance
    vector<DMatch> match;
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->match(descriptors_1, descriptors_2, match);

    //-- step 4: filter the matches
    double min_dist = 10000, max_dist = 0;

    // find the minimum and maximum distance over all matches,
    // i.e. the distances of the most and the least similar pairs
    for (int i = 0; i < descriptors_1.rows; i++)
    {
        double dist = match[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist);
    printf("-- Min dist : %f \n", min_dist);

    // a match is considered wrong when the descriptor distance exceeds twice the
    // minimum distance; since the minimum can be very small, use an empirical
    // value of 30 as a lower bound
    for (int i = 0; i < descriptors_1.rows; i++)
    {
        if (match[i].distance <= max(2 * min_dist, 30.0))
        {
            matches.push_back(match[i]);
        }
    }
}

Point2d pixel2cam(const Point2d& p, const Mat& K)
{
    return Point2d(
        (p.x - K.at<double>(0, 2)) / K.at<double>(0, 0),
        (p.y - K.at<double>(1, 2)) / K.at<double>(1, 1));
}

void pose_estimation_3d3d(const vector<Point3f>& pts1, const vector<Point3f>& pts2, Mat& r, Mat& t_inv)
{
    Point3f p1, p2; // centers of mass
    int N = pts1.size();
    // compute the centroids p1, p2 of the two point sets
    for (int i = 0; i < N; ++i)
    {
        p1 += pts1[i];
        p2 += pts2[i];
    }
    p1 = Point3f(Vec3f(p1) / N);
    cout << "p1 =" << endl << p1 << endl;
    p2 = Point3f(Vec3f(p2) / N);

    // de-centered coordinates q1, q2 of every point
    vector<Point3f> q1(N), q2(N);
    for (int i = 0; i < N; ++i)
    {
        q1[i] = pts1[i] - p1;
        q2[i] = pts2[i] - p2;
    }

    // compute W = sum(q1 * q2^T), the matrix W of eq. (7.56), p.174 of the book
    Eigen::Matrix3d W = Eigen::Matrix3d::Zero();
    for (int i = 0; i < N; ++i)
    {
        W += Eigen::Vector3d(q1[i].x, q1[i].y, q1[i].z) * Eigen::Vector3d(q2[i].x, q2[i].y, q2[i].z).transpose();
    }
    cout << "W =" << endl << W << endl << endl;

    // SVD on W
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(W, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d U = svd.matrixU();
    Eigen::Matrix3d V = svd.matrixV();

    // if det(U)*det(V) < 0, flip the sign of the last column of U
    // so that R is a proper rotation (det(R) = +1)
    if (U.determinant() * V.determinant() < 0)
    {
        for (int x = 0; x < 3; ++x)
        {
            U(x, 2) *= -1;
        }
    }

    cout << "U=" << U << endl;
    cout << "V=" << V << endl;

    Eigen::Matrix3d R_ = U * (V.transpose());
    Eigen::Vector3d t_ = Eigen::Vector3d(p1.x, p1.y, p1.z) - R_ * Eigen::Vector3d(p2.x, p2.y, p2.z);

    // convert to cv::Mat
    r = (Mat_<double>(3, 3) <<
         R_(0, 0), R_(0, 1), R_(0, 2),
         R_(1, 0), R_(1, 1), R_(1, 2),
         R_(2, 0), R_(2, 1), R_(2, 2));
    t_inv = (Mat_<double>(3, 1) << t_(0, 0), t_(1, 0), t_(2, 0));
}

void bundleAdjustment(const vector<Point3f>& pts1, const vector<Point3f>& pts2, Mat& r, Mat& t_inv)
{
    double cere_rot[3], cere_tranf[3];
    // note on the initial values: with an arbitrary initial guess the result is
    // wrong; the problem is non-convex, so we initialize from the SVD solution
    cere_rot[0] = r.at<double>(0, 0);
    cere_rot[1] = r.at<double>(1, 0);
    cere_rot[2] = r.at<double>(2, 0);

    cere_tranf[0] = t_inv.at<double>(0, 0);
    cere_tranf[1] = t_inv.at<double>(1, 0);
    cere_tranf[2] = t_inv.at<double>(2, 0);

    ceres::Problem problem;
    for (int i = 0; i < pts1.size(); ++i)
    {
        ceres::CostFunction* costfunction =
            new ceres::AutoDiffCostFunction<cost_function_define, 2, 3, 3>(
                new cost_function_define(pts1[i], pts2[i]));
        problem.AddResidualBlock(costfunction, NULL, cere_rot, cere_tranf);
    }

    ceres::Solver::Options option;
    option.linear_solver_type = ceres::DENSE_SCHUR;
    option.minimizer_progress_to_stdout = true;
    ceres::Solver::Summary summary;
    ceres::Solve(option, &problem, &summary);
    cout << summary.BriefReport() << endl;

    cout << "after optimization: " << endl;
    Mat cam_3d = (Mat_<double>(3, 1) << cere_rot[0], cere_rot[1], cere_rot[2]);
    Mat cam_9d;
    Rodrigues(cam_3d, cam_9d); // rotation vector -> rotation matrix

    cout << "cam_9d: " << endl << cam_9d << endl;
    cout << "cam_t: " << endl << cere_tranf[0] << " " << cere_tranf[1] << " " << cere_tranf[2] << endl;
    Mat tranf_3d = (Mat_<double>(3, 1) << cere_tranf[0], cere_tranf[1], cere_tranf[2]);

    // verify: the optimized transform maps p1 onto p2
    for (int i = 0; i < 5; ++i)
    {
        cout << "p1= " << pts1[i] << endl;
        cout << "p2= " << pts2[i] << endl;
        cout << "(R*p1+t)= "
             << cam_9d * (Mat_<double>(3, 1) << pts1[i].x, pts1[i].y, pts1[i].z) + tranf_3d << endl;
        cout << endl;
    }
}
Results (console output / screenshots omitted):
The pose obtained from ICP (SVD) alone:
We pick a few matched feature points from the two images to verify the estimated pose: p1 = R*p2 + t
After the Ceres optimization:
Verification: