We have seen that the SIFT algorithm uses a 128-dimensional feature descriptor; since each dimension is stored as a floating-point number, one descriptor occupies 512 bytes. Similarly, the SURF algorithm typically uses a 64-dimensional descriptor, which occupies 256 bytes. If an image contains 1000 keypoints, the SIFT or SURF descriptors will consume a large amount of memory, which is clearly impractical for resource-constrained applications, especially embedded ones. Moreover, the larger the descriptor, the longer the matching time.
In practice, however, not every dimension of a SIFT or SURF descriptor plays a substantial role in matching. We can compress the descriptor with dimensionality-reduction methods such as PCA or LDA. Other algorithms, such as Locality-Sensitive Hashing (LSH), convert the SIFT descriptor into a binary string and then match keypoints by the Hamming distance between those strings. This greatly speeds up matching, because the Hamming distance can be computed with an XOR followed by a count of the set bits, both of which are extremely cheap on modern hardware. Below we look at a descriptor that is binary by construction.
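To make the speed argument concrete, here is a minimal sketch of my own (not from the original post) of comparing binary descriptors with XOR plus popcount, with 256-bit descriptors represented as Python integers:

```python
import numpy as np

def hamming(a: int, b: int) -> int:
    """Hamming distance of two bit strings stored as ints: XOR, then popcount."""
    return bin(a ^ b).count('1')

rng = np.random.default_rng(0)
d1 = int.from_bytes(rng.integers(0, 256, 32, dtype=np.uint8).tobytes(), 'big')  # random 256-bit descriptor
d2 = d1 ^ (1 << 5) ^ (1 << 200)   # a near-duplicate: only 2 bits flipped
d3 = int.from_bytes(rng.integers(0, 256, 32, dtype=np.uint8).tobytes(), 'big')  # unrelated descriptor

print(hamming(d1, d2))   # 2    -> strong match
print(hamming(d1, d3))   # ~128 -> non-match
```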
1. BRIEF Overview
BRIEF (Binary Robust Independent Elementary Features) was proposed to fill this gap: it provides a shortcut to a binary descriptor without first computing a SIFT-like floating-point one. The image is first smoothed; then a patch is selected around each keypoint, and $n_d$ point pairs are picked inside the patch according to a chosen sampling scheme. For each pair $(p,q)$ we compare the intensities of the two points: if $I(p)<I(q)$, the corresponding bit of the binary string is 1, otherwise 0. After all $n_d$ pairs have been compared, we obtain a binary string of length $n_d$.
As for the choice of $n_d$: 128, 256, and 512 are all available in OpenCV (as descriptor lengths of 16, 32, and 64 bytes), with 256 bits as the default. At this setting, extensive experiments show that the Hamming distance between descriptors of non-matching keypoints clusters around 128, while for matching pairs it is far smaller. This is what we would expect: two unrelated 256-bit strings disagree on each bit with probability 1/2, so their expected Hamming distance is 128. Once the length is fixed, we can match the descriptors by Hamming distance.
To summarize, the descriptor is built as follows (a minimal sketch follows the list):
- Detect keypoints with Harris, FAST, or a similar method;
- Select the region in which the descriptor is built (a square neighborhood of the keypoint);
- Convolve the neighborhood with a Gaussian kernel ($\sigma=2$, window size 9) to suppress noise; this matters because the descriptor is built from random point tests and is therefore quite sensitive to noise;
- Generate a point pair $(p,q)$ with a chosen randomized scheme and compare the intensities: output 1 if $I(p)<I(q)$, otherwise 0;
- Repeat the previous step $n_d$ times (e.g. 256 times) to obtain a 256-bit binary string: the descriptor of this keypoint.
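These steps translate almost directly into code. Below is a minimal sketch of my own (not the implementation used later in this post): it assumes a 48-pixel patch and uniform sampling of the point pairs, and packs the 256 test results into 32 bytes:

```python
import cv2
import numpy as np

def brief_sketch(gray, keypoints, n_bits=256, patch=48, seed=42):
    """Toy BRIEF: Gaussian smoothing + random intensity tests (illustration only)."""
    smooth = cv2.GaussianBlur(gray, (9, 9), 2)            # step 3: sigma=2, 9x9 window
    rng = np.random.default_rng(seed)
    # One fixed set of point pairs, uniform over the patch (strategy I below)
    pairs = rng.integers(-patch // 2, patch // 2 + 1, size=(n_bits, 4))
    h, w = gray.shape
    descs, kept = [], []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if not (patch // 2 <= x < w - patch // 2 and patch // 2 <= y < h - patch // 2):
            continue                                      # skip points too close to the border
        bits = [int(smooth[y + dy1, x + dx1] < smooth[y + dy2, x + dx2])
                for dx1, dy1, dx2, dy2 in pairs]          # step 4: I(p) < I(q) tests
        descs.append(np.packbits(bits))                   # 256 bits -> 32 bytes
        kept.append(kp)
    return kept, np.array(descs)
```

Note that a single fixed set of point pairs is drawn once and reused for every keypoint; this is what makes the resulting bit strings comparable across keypoints and images.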
Note that BRIEF is only a feature descriptor: it does not provide a method for detecting keypoints. You therefore have to pair it with a keypoint detector such as FAST, SIFT, or SURF. The OpenCV tutorial uses CenSurE (the STAR detector) to extract keypoints, since for BRIEF the CenSurE keypoints perform slightly better than SURF keypoints; the demo code below nevertheless uses SURF.
Overall, BRIEF is a highly efficient way to build feature descriptors. Its pros and cons can be summarized as follows:
First, it abandons the traditional approach of describing a local neighborhood by grayscale or gradient histograms in favor of random intensity tests, which greatly speeds up descriptor construction. The resulting binary descriptors also support very fast matching (computing the Hamming distance only requires an XOR followed by counting the "1" bits of the result, operations directly supported by the hardware) and are easy to implement in hardware. Second, the descriptor's obvious weakness is its poor rotation invariance, which later methods had to improve on.
In their experiments, the authors compared the results:
- For images with only small rotations, matching with BRIEF descriptors is of very high quality and beats SURF in most of the tested cases; but once the rotation exceeds about 30°, BRIEF's matching rate quickly drops toward 0;
- BRIEF is very cheap to compute: under the same conditions, computing descriptors for 512 keypoints took SURF 335 ms but BRIEF only 8.18 ms, and matching the descriptors took 28.3 ms with SURF versus 2.19 ms with BRIEF. In undemanding settings, BRIEF is therefore much easier to run in real time;
BRIEF's main advantage is speed; its drawbacks are just as clear:
- no rotation invariance;
- sensitivity to noise;
- no scale invariance.
To fix the first two of these drawbacks while balancing speed and performance, the ORB algorithm was proposed; it combines FAST keypoint detection with the BRIEF descriptor.
2. Choosing the Point Pairs
Suppose we select $n_d$ point pairs $(p,q)$ within an $S\times S$ neighborhood of the keypoint. Calonder's experiments tested five sampling strategies:
- sample both points uniformly over the image patch;
- sample both $p$ and $q$ from a Gaussian distribution $(0,\frac{1}{25}S^2)$;
- sample $p$ from a Gaussian $(0,\frac{1}{25}S^2)$ and $q$ from a Gaussian $(p,\frac{1}{100}S^2)$ centered on $p$, so the second point stays close to the first;
- sample both points at random from the discrete locations of a coarse polar grid;
- fix $p$ at the patch center $(0,0)$ and sample $q$ uniformly around it;
Below is an illustration of the five sampling strategies:
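Among these, strategy II (both points drawn from an isotropic Gaussian with variance $\frac{1}{25}S^2$) performed best in Calonder's experiments. As an illustration of my own, here is one way to generate such a test pattern; the parameter names and the clipping to the patch are assumptions for this sketch:

```python
import numpy as np

def gaussian_test_pattern(n_pairs=256, S=48, seed=0):
    """Strategy II: both points ~ N(0, S^2/25), clipped to the S x S patch."""
    rng = np.random.default_rng(seed)
    sigma = S / 5.0                                    # variance S^2/25 -> std S/5
    pts = rng.normal(0.0, sigma, size=(n_pairs, 4))    # (x_p, y_p, x_q, y_q) per pair
    return np.clip(np.round(pts), -S // 2, S // 2).astype(int)

pairs = gaussian_test_pattern()
print(pairs[:3])   # first three point pairs, as offsets from the keypoint
```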
3. OpenCV Implementation
Let us walk through keypoint detection, BRIEF descriptor extraction, and keypoint matching with OpenCV:
```python
# -*- coding: utf-8 -*-
"""
Created on Mon Sep 10 09:59:22 2018

@author: zy
"""

'''
Using the BRIEF feature descriptor
'''

import cv2
import numpy as np

def brief_test():
    # Load the images and convert them to grayscale
    img1 = cv2.imread('./image/match1.jpg')
    img1 = cv2.resize(img1, dsize=(600, 400))
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    img2 = cv2.imread('./image/match2.jpg')
    img2 = cv2.resize(img2, dsize=(600, 400))
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

    image1 = gray1.copy()
    image2 = gray2.copy()
    # BRIEF is sensitive to noise, so pre-filter with a median blur
    image1 = cv2.medianBlur(image1, 5)
    image2 = cv2.medianBlur(image2, 5)

    '''
    1. Detect keypoints with SURF
    '''
    # Create a SURF object. The higher the threshold, the fewer features are
    # detected, so trial and error can be used to find a good value.
    surf = cv2.xfeatures2d.SURF_create(3000)
    keypoints1 = surf.detect(image1)
    keypoints2 = surf.detect(image2)

    # Draw the keypoints on copies, so the images used for descriptor
    # computation stay clean
    show1 = cv2.drawKeypoints(image=image1, keypoints=keypoints1, outImage=None,
                              color=(255, 0, 255),
                              flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    show2 = cv2.drawKeypoints(image=image2, keypoints=keypoints2, outImage=None,
                              color=(255, 0, 255),
                              flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imshow('surf_keypoints1', show1)
    cv2.imshow('surf_keypoints2', show2)
    cv2.waitKey(20)

    '''
    2. Compute the feature descriptors
    '''
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create(32)
    keypoints1, descriptors1 = brief.compute(image1, keypoints1)
    keypoints2, descriptors2 = brief.compute(image2, keypoints2)
    print('descriptors1:', len(descriptors1), 'descriptors2:', len(descriptors2))

    '''
    3. Match the keypoints by Hamming distance
    '''
    matcher = cv2.BFMatcher_create(cv2.NORM_HAMMING)
    matchePoints = matcher.match(descriptors1, descriptors2)
    print(type(matchePoints), len(matchePoints), matchePoints[0])

    # Find the best and worst match distances
    distances = [m.distance for m in matchePoints]
    minMatch = min(distances)
    maxMatch = max(distances)
    print('best match distance:', minMatch)
    print('worst match distance:', maxMatch)

    # Keep only the strongest matches
    goodMatchePoints = [m for m in matchePoints
                        if m.distance < minMatch + (maxMatch - minMatch) / 3]

    # Draw the best matches
    outImg = cv2.drawMatches(img1, keypoints1, img2, keypoints2, goodMatchePoints,
                             None, matchColor=(0, 255, 0),
                             flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT)
    cv2.imshow('matches', outImg)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

if __name__ == '__main__':
    brief_test()
```
Because the BRIEF descriptor is sensitive to noise, we median-filtered the images beforehand. This removes some of the false matches, but as the figure above shows, the matching quality is still not great. In practice we should pick two images that are roughly aligned and contain little noise in order to obtain high-quality matches.
4. Implementing It Ourselves
Next let us implement the BRIEF descriptor ourselves. The code below follows OpenCV's C++ implementation:
```python
# -*- coding: utf-8 -*-
"""
Created on Mon Sep 10 15:38:39 2018

@author: zy
"""

'''
A BRIEF feature descriptor implemented from scratch
Reference: OpenCV 2.4.9 source analysis -- BRIEF
https://blog.csdn.net/zhaocj/article/details/44236863
'''

import cv2
import numpy as np
import functools

class BriefDescriptorExtractor(object):
    '''
    BRIEF descriptor implementation
    '''
    def __init__(self, byte=16):
        '''
        args:
            byte: descriptor length in bytes; only 16 is implemented here
                  (32 and 64 are not)
        '''
        # neighborhood (patch) size
        self.__patch_size = 48
        # side length of the averaging (box) kernel
        self.__kernel_size = 9
        # 16 bytes -> a 16*8 = 128-bit descriptor, i.e. 128 point pairs
        self.__bytes = byte

    def compute(self, image, keypoints):
        '''
        Compute the feature descriptors.
        args:
            image: input image
            keypoints: keypoints of the image
        return:
            (keypoints, descriptors) tuple
        '''
        if len(image.shape) == 3:
            gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        else:
            gray_image = image.copy()

        # integral image, used by smoothed_sum() for fast box filtering
        self.__image_sum = cv2.integral(gray_image, sdepth=cv2.CV_32S)
        print(type(self.__image_sum), self.__image_sum.shape)

        # remove keypoints too close to the image border
        keypoints_res = []
        rows, cols = image.shape[:2]
        border = (self.__patch_size + self.__kernel_size) / 2
        for keypoint in keypoints:
            point = keypoint.pt
            if border < point[0] < cols - border and border < point[1] < rows - border:
                keypoints_res.append(keypoint)

        # compute the descriptor of every remaining keypoint
        return keypoints_res, self.pixelTests16(keypoints_res)

    def pixelTests16(self, keypoints):
        '''
        Build the BRIEF descriptors; the fixed point-pair table below is taken
        from OpenCV's implementation.
        args:
            keypoints: keypoints
        return:
            descriptors: the descriptors, one row per keypoint
        '''
        descriptors = np.zeros((len(keypoints), self.__bytes), dtype=np.uint8)
        for i in range(len(keypoints)):
            # bind the current keypoint, so each test only takes the offsets
            SMOOTHED = functools.partial(self.smoothed_sum, keypoint=keypoints[i])
            descriptors[i][0] = (((SMOOTHED(-2, -1) < SMOOTHED(7, -1)) << 7) + ((SMOOTHED(-14, -1) < SMOOTHED(-3, 3)) << 6) +
                                 ((SMOOTHED(1, -2) < SMOOTHED(11, 2)) << 5) + ((SMOOTHED(1, 6) < SMOOTHED(-10, -7)) << 4) +
                                 ((SMOOTHED(13, 2) < SMOOTHED(-1, 0)) << 3) + ((SMOOTHED(-14, 5) < SMOOTHED(5, -3)) << 2) +
                                 ((SMOOTHED(-2, 8) < SMOOTHED(2, 4)) << 1) + ((SMOOTHED(-11, 8) < SMOOTHED(-15, 5)) << 0))
            descriptors[i][1] = (((SMOOTHED(-6, -23) < SMOOTHED(8, -9)) << 7) + ((SMOOTHED(-12, 6) < SMOOTHED(-10, 8)) << 6) +
                                 ((SMOOTHED(-3, -1) < SMOOTHED(8, 1)) << 5) + ((SMOOTHED(3, 6) < SMOOTHED(5, 6)) << 4) +
                                 ((SMOOTHED(-7, -6) < SMOOTHED(5, -5)) << 3) + ((SMOOTHED(22, -2) < SMOOTHED(-11, -8)) << 2) +
                                 ((SMOOTHED(14, 7) < SMOOTHED(8, 5)) << 1) + ((SMOOTHED(-1, 14) < SMOOTHED(-5, -14)) << 0))
            descriptors[i][2] = (((SMOOTHED(-14, 9) < SMOOTHED(2, 0)) << 7) + ((SMOOTHED(7, -3) < SMOOTHED(22, 6)) << 6) +
                                 ((SMOOTHED(-6, 6) < SMOOTHED(-8, -5)) << 5) + ((SMOOTHED(-5, 9) < SMOOTHED(7, -1)) << 4) +
                                 ((SMOOTHED(-3, -7) < SMOOTHED(-10, -18)) << 3) + ((SMOOTHED(4, -5) < SMOOTHED(0, 11)) << 2) +
                                 ((SMOOTHED(2, 3) < SMOOTHED(9, 10)) << 1) + ((SMOOTHED(-10, 3) < SMOOTHED(4, 9)) << 0))
            descriptors[i][3] = (((SMOOTHED(0, 12) < SMOOTHED(-3, 19)) << 7) + ((SMOOTHED(1, 15) < SMOOTHED(-11, -5)) << 6) +
                                 ((SMOOTHED(14, -1) < SMOOTHED(7, 8)) << 5) + ((SMOOTHED(7, -23) < SMOOTHED(-5, 5)) << 4) +
                                 ((SMOOTHED(0, -6) < SMOOTHED(-10, 17)) << 3) + ((SMOOTHED(13, -4) < SMOOTHED(-3, -4)) << 2) +
                                 ((SMOOTHED(-12, 1) < SMOOTHED(-12, 2)) << 1) + ((SMOOTHED(0, 8) < SMOOTHED(3, 22)) << 0))
            descriptors[i][4] = (((SMOOTHED(-13, 13) < SMOOTHED(3, -1)) << 7) + ((SMOOTHED(-16, 17) < SMOOTHED(6, 10)) << 6) +
                                 ((SMOOTHED(7, 15) < SMOOTHED(-5, 0)) << 5) + ((SMOOTHED(2, -12) < SMOOTHED(19, -2)) << 4) +
                                 ((SMOOTHED(3, -6) < SMOOTHED(-4, -15)) << 3) + ((SMOOTHED(8, 3) < SMOOTHED(0, 14)) << 2) +
                                 ((SMOOTHED(4, -11) < SMOOTHED(5, 5)) << 1) + ((SMOOTHED(11, -7) < SMOOTHED(7, 1)) << 0))
            descriptors[i][5] = (((SMOOTHED(6, 12) < SMOOTHED(21, 3)) << 7) + ((SMOOTHED(-3, 2) < SMOOTHED(14, 1)) << 6) +
                                 ((SMOOTHED(5, 1) < SMOOTHED(-5, 11)) << 5) + ((SMOOTHED(3, -17) < SMOOTHED(-6, 2)) << 4) +
                                 ((SMOOTHED(6, 8) < SMOOTHED(5, -10)) << 3) + ((SMOOTHED(-14, -2) < SMOOTHED(0, 4)) << 2) +
                                 ((SMOOTHED(5, -7) < SMOOTHED(-6, 5)) << 1) + ((SMOOTHED(10, 4) < SMOOTHED(4, -7)) << 0))
            descriptors[i][6] = (((SMOOTHED(22, 0) < SMOOTHED(7, -18)) << 7) + ((SMOOTHED(-1, -3) < SMOOTHED(0, 18)) << 6) +
                                 ((SMOOTHED(-4, 22) < SMOOTHED(-5, 3)) << 5) + ((SMOOTHED(1, -7) < SMOOTHED(2, -3)) << 4) +
                                 ((SMOOTHED(19, -20) < SMOOTHED(17, -2)) << 3) + ((SMOOTHED(3, -10) < SMOOTHED(-8, 24)) << 2) +
                                 ((SMOOTHED(-5, -14) < SMOOTHED(7, 5)) << 1) + ((SMOOTHED(-2, 12) < SMOOTHED(-4, -15)) << 0))
            descriptors[i][7] = (((SMOOTHED(4, 12) < SMOOTHED(0, -19)) << 7) + ((SMOOTHED(20, 13) < SMOOTHED(3, 5)) << 6) +
                                 ((SMOOTHED(-8, -12) < SMOOTHED(5, 0)) << 5) + ((SMOOTHED(-5, 6) < SMOOTHED(-7, -11)) << 4) +
                                 ((SMOOTHED(6, -11) < SMOOTHED(-3, -22)) << 3) + ((SMOOTHED(15, 4) < SMOOTHED(10, 1)) << 2) +
                                 ((SMOOTHED(-7, -4) < SMOOTHED(15, -6)) << 1) + ((SMOOTHED(5, 10) < SMOOTHED(0, 24)) << 0))
            descriptors[i][8] = (((SMOOTHED(3, 6) < SMOOTHED(22, -2)) << 7) + ((SMOOTHED(-13, 14) < SMOOTHED(4, -4)) << 6) +
                                 ((SMOOTHED(-13, 8) < SMOOTHED(-18, -22)) << 5) + ((SMOOTHED(-1, -1) < SMOOTHED(-7, 3)) << 4) +
                                 ((SMOOTHED(-19, -12) < SMOOTHED(4, 3)) << 3) + ((SMOOTHED(8, 10) < SMOOTHED(13, -2)) << 2) +
                                 ((SMOOTHED(-6, -1) < SMOOTHED(-6, -5)) << 1) + ((SMOOTHED(2, -21) < SMOOTHED(-3, 2)) << 0))
            descriptors[i][9] = (((SMOOTHED(4, -7) < SMOOTHED(0, 16)) << 7) + ((SMOOTHED(-6, -5) < SMOOTHED(-12, -1)) << 6) +
                                 ((SMOOTHED(1, -1) < SMOOTHED(9, 18)) << 5) + ((SMOOTHED(-7, 10) < SMOOTHED(-11, 6)) << 4) +
                                 ((SMOOTHED(4, 3) < SMOOTHED(19, -7)) << 3) + ((SMOOTHED(-18, 5) < SMOOTHED(-4, 5)) << 2) +
                                 ((SMOOTHED(4, 0) < SMOOTHED(-20, 4)) << 1) + ((SMOOTHED(7, -11) < SMOOTHED(18, 12)) << 0))
            descriptors[i][10] = (((SMOOTHED(-20, 17) < SMOOTHED(-18, 7)) << 7) + ((SMOOTHED(2, 15) < SMOOTHED(19, -11)) << 6) +
                                  ((SMOOTHED(-18, 6) < SMOOTHED(-7, 3)) << 5) + ((SMOOTHED(-4, 1) < SMOOTHED(-14, 13)) << 4) +
                                  ((SMOOTHED(17, 3) < SMOOTHED(2, -8)) << 3) + ((SMOOTHED(-7, 2) < SMOOTHED(1, 6)) << 2) +
                                  ((SMOOTHED(17, -9) < SMOOTHED(-2, 8)) << 1) + ((SMOOTHED(-8, -6) < SMOOTHED(-1, 12)) << 0))
            descriptors[i][11] = (((SMOOTHED(-2, 4) < SMOOTHED(-1, 6)) << 7) + ((SMOOTHED(-2, 7) < SMOOTHED(6, 8)) << 6) +
                                  ((SMOOTHED(-8, -1) < SMOOTHED(-7, -9)) << 5) + ((SMOOTHED(8, -9) < SMOOTHED(15, 0)) << 4) +
                                  ((SMOOTHED(0, 22) < SMOOTHED(-4, -15)) << 3) + ((SMOOTHED(-14, -1) < SMOOTHED(3, -2)) << 2) +
                                  ((SMOOTHED(-7, -4) < SMOOTHED(17, -7)) << 1) + ((SMOOTHED(-8, -2) < SMOOTHED(9, -4)) << 0))
            descriptors[i][12] = (((SMOOTHED(5, -7) < SMOOTHED(7, 7)) << 7) + ((SMOOTHED(-5, 13) < SMOOTHED(-8, 11)) << 6) +
                                  ((SMOOTHED(11, -4) < SMOOTHED(0, 8)) << 5) + ((SMOOTHED(5, -11) < SMOOTHED(-9, -6)) << 4) +
                                  ((SMOOTHED(2, -6) < SMOOTHED(3, -20)) << 3) + ((SMOOTHED(-6, 2) < SMOOTHED(6, 10)) << 2) +
                                  ((SMOOTHED(-6, -6) < SMOOTHED(-15, 7)) << 1) + ((SMOOTHED(-6, -3) < SMOOTHED(2, 1)) << 0))
            descriptors[i][13] = (((SMOOTHED(11, 0) < SMOOTHED(-3, 2)) << 7) + ((SMOOTHED(7, -12) < SMOOTHED(14, 5)) << 6) +
                                  ((SMOOTHED(0, -7) < SMOOTHED(-1, -1)) << 5) + ((SMOOTHED(-16, 0) < SMOOTHED(6, 8)) << 4) +
                                  ((SMOOTHED(22, 11) < SMOOTHED(0, -3)) << 3) + ((SMOOTHED(19, 0) < SMOOTHED(5, -17)) << 2) +
                                  ((SMOOTHED(-23, -14) < SMOOTHED(-13, -19)) << 1) + ((SMOOTHED(-8, 10) < SMOOTHED(-11, -2)) << 0))
            descriptors[i][14] = (((SMOOTHED(-11, 6) < SMOOTHED(-10, 13)) << 7) + ((SMOOTHED(1, -7) < SMOOTHED(14, 0)) << 6) +
                                  ((SMOOTHED(-12, 1) < SMOOTHED(-5, -5)) << 5) + ((SMOOTHED(4, 7) < SMOOTHED(8, -1)) << 4) +
                                  ((SMOOTHED(-1, -5) < SMOOTHED(15, 2)) << 3) + ((SMOOTHED(-3, -1) < SMOOTHED(7, -10)) << 2) +
                                  ((SMOOTHED(3, -6) < SMOOTHED(10, -18)) << 1) + ((SMOOTHED(-7, -13) < SMOOTHED(-13, 10)) << 0))
            descriptors[i][15] = (((SMOOTHED(1, -1) < SMOOTHED(13, -10)) << 7) + ((SMOOTHED(-19, 14) < SMOOTHED(8, -14)) << 6) +
                                  ((SMOOTHED(-4, -13) < SMOOTHED(7, 1)) << 5) + ((SMOOTHED(1, -2) < SMOOTHED(12, -7)) << 4) +
                                  ((SMOOTHED(3, -5) < SMOOTHED(1, -5)) << 3) + ((SMOOTHED(-2, -2) < SMOOTHED(8, -10)) << 2) +
                                  ((SMOOTHED(2, 14) < SMOOTHED(8, 7)) << 1) + ((SMOOTHED(3, 9) < SMOOTHED(8, 2)) << 0))
        return descriptors

    def smoothed_sum(self, y, x, keypoint):
        '''
        Instead of the Gaussian smoothing used in the paper, we smooth each
        sampled point by summing the intensities over its neighborhood via
        the integral image; this likewise reduces the influence of noise.
        args:
            keypoint: the current keypoint
            y, x: offsets of one point of the pair relative to the keypoint
        return:
            the (unnormalized) box-filtered value at that point
        '''
        half_kernel = self.__kernel_size // 2
        # absolute coordinates of the sampled pixel
        img_y = int(keypoint.pt[1] + 0.5) + y
        img_x = int(keypoint.pt[0] + 0.5) + x
        # sum of the intensities in the square of side kernel_size centered on
        # (img_x, img_y); this is essentially a mean filter
        ret = self.__image_sum[img_y + half_kernel + 1][img_x + half_kernel + 1] \
            - self.__image_sum[img_y + half_kernel + 1][img_x - half_kernel] \
            - self.__image_sum[img_y - half_kernel][img_x + half_kernel + 1] \
            + self.__image_sum[img_y - half_kernel][img_x - half_kernel]
        return ret

def brief_test():
    # Load the images and convert them to grayscale
    img1 = cv2.imread('./image/match1.jpg')
    img1 = cv2.resize(img1, dsize=(600, 400))
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    img2 = cv2.imread('./image/match2.jpg')
    img2 = cv2.resize(img2, dsize=(600, 400))
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

    image1 = gray1.copy()
    image2 = gray2.copy()
    # BRIEF is sensitive to noise, so pre-filter with a median blur
    image1 = cv2.medianBlur(image1, 5)
    image2 = cv2.medianBlur(image2, 5)

    '''
    1. Detect keypoints with SURF
    '''
    # The higher the threshold, the fewer features are detected; trial and
    # error can be used to find a good value.
    surf = cv2.xfeatures2d.SURF_create(3000)
    keypoints1 = surf.detect(image1)
    keypoints2 = surf.detect(image2)
    # print(keypoints1[0].pt)   # (x, y)

    # Optionally draw the keypoints (on copies, so the images used for
    # descriptor computation stay clean):
    # show1 = cv2.drawKeypoints(image=image1, keypoints=keypoints1, outImage=None,
    #                           color=(255, 0, 255),
    #                           flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    # cv2.imshow('surf_keypoints1', show1)
    # cv2.waitKey(20)

    '''
    2. Compute the feature descriptors
    '''
    brief = BriefDescriptorExtractor(16)
    keypoints1, descriptors1 = brief.compute(image1, keypoints1)
    keypoints2, descriptors2 = brief.compute(image2, keypoints2)
    print(descriptors1[:5])
    print(descriptors2[:5])
    print('descriptors1:', len(descriptors1), descriptors1.shape,
          'descriptors2:', len(descriptors2), descriptors2.shape)

    '''
    3. Match the keypoints by Hamming distance
    '''
    matcher = cv2.BFMatcher_create(cv2.NORM_HAMMING)
    matchePoints = matcher.match(descriptors1, descriptors2)
    print('matchePoints', type(matchePoints), len(matchePoints), matchePoints[0])

    # Find the best and worst match distances
    distances = [m.distance for m in matchePoints]
    minMatch = min(distances)
    maxMatch = max(distances)
    print('best match distance:', minMatch)
    print('worst match distance:', maxMatch)

    # Keep only the strongest matches
    goodMatchePoints = [m for m in matchePoints
                        if m.distance < minMatch + (maxMatch - minMatch) / 3]

    # Draw the best matches
    outImg = cv2.drawMatches(img1, keypoints1, img2, keypoints2, goodMatchePoints,
                             None, matchColor=(0, 255, 0),
                             flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT)
    cv2.imshow('matches', outImg)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

if __name__ == '__main__':
    brief_test()
```
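A quick sanity check of the integral-image trick used in `smoothed_sum` may be helpful: the four-corner lookup on `cv2.integral` should reproduce an unnormalized box sum, which we can compare against `cv2.boxFilter` with `normalize=False`. This snippet is my own addition, not part of the original code:

```python
import cv2
import numpy as np

img = np.random.default_rng(1).integers(0, 256, (40, 40), dtype=np.uint8)
integral = cv2.integral(img, sdepth=cv2.CV_32S)

def box_sum(y, x, k=9):
    """Sum of the k x k box centered at (x, y) via four integral-image lookups."""
    h = k // 2
    return (integral[y + h + 1, x + h + 1] - integral[y + h + 1, x - h]
            - integral[y - h, x + h + 1] + integral[y - h, x - h])

# Reference: unnormalized box filter from OpenCV (interior pixel, so borders don't matter)
ref = cv2.boxFilter(img, cv2.CV_32S, (9, 9), normalize=False)
assert box_sum(20, 15) == ref[20, 15]
print('integral-image box sum matches cv2.boxFilter')
```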