In the previous section, edge extraction was done with first-order differences (derivatives). Edges can also be extracted with second-order differences, for example the Laplacian operator, Laplacian of Gaussian (LoG) edge detection, Difference of Gaussians (DoG) edge detection, and Marr-Hildreth edge detection. These edge extraction algorithms are described in detail below:
1. Laplacian operator
The Laplacian operator uses second-order derivatives. Its formula is as follows (take the second derivative in the x direction and in the y direction, then sum them):

∇²f = ∂²f/∂x² + ∂²f/∂y²
The corresponding Laplacian kernel is:

0   1   0
1  -4   1
0   1   0
It is derived from the discrete second-order differences:

∂²f/∂x² ≈ f(x+1, y) + f(x-1, y) - 2f(x, y)
∂²f/∂y² ≈ f(x, y+1) + f(x, y-1) - 2f(x, y)

Summing the two gives ∇²f ≈ f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y), which corresponds exactly to the 3×3 kernel above.
OpenCV provides the Laplacian() function to compute the Laplacian response. Its parameters are as follows:
dst = cv2.Laplacian(src, ddepth, ksize=1, scale=1, delta=0, borderType=cv2.BORDER_DEFAULT)
src: input image matrix, single-channel or multi-channel
ddepth: data depth of the output image; it is best to set this to cv2.CV_32F or cv2.CV_64F so that negative responses are not truncated
ksize: size of the Laplacian kernel; the default is 1, which uses the 3×3 kernel shown above
scale: scaling factor applied to the result
delta: offset added to the result
borderType: border padding type
The usage code and the corresponding results are shown below:

#coding:utf-8
import cv2

img_path = r"C:\Users\silence_cho\Desktop\Messi.jpg"
img = cv2.imread(img_path)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

dst_img = cv2.Laplacian(img, cv2.CV_32F)
laplacian_edge = cv2.convertScaleAbs(dst_img)            # take the absolute value and convert to uint8

dst_img_gray = cv2.Laplacian(img_gray, cv2.CV_32F)
laplacian_edge_gray = cv2.convertScaleAbs(dst_img_gray)  # take the absolute value and convert to uint8

cv2.imshow("img", img)
cv2.imshow("laplacian_edge", laplacian_edge)
cv2.imshow("img_gray", img_gray)
cv2.imshow("laplacian_edge_gray", laplacian_edge_gray)
cv2.waitKey(0)
cv2.destroyAllWindows()
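As a quick cross-check, the default ksize=1 response can also be reproduced by convolving with the 3×3 kernel shown above via cv2.filter2D. This is only a minimal sketch; aside from possible differences in border handling the two results should agree:

#coding:utf-8
import cv2
import numpy as np

img_gray = cv2.imread(r"C:\Users\silence_cho\Desktop\Messi.jpg", 0)

# 3x3 Laplacian kernel corresponding to ksize=1
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=np.float32)

manual = cv2.filter2D(img_gray, cv2.CV_32F, kernel)   # filter with the explicit 3x3 kernel
builtin = cv2.Laplacian(img_gray, cv2.CV_32F)         # OpenCV's built-in Laplacian (default ksize=1)

print(np.abs(manual - builtin).max())                 # should be (close to) 0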
After extracting edges with the Laplacian operator, different post-processing methods can be applied. The code and corresponding results are as follows:

#coding:utf-8
import cv2
import numpy as np

img_path = r"C:\Users\silence_cho\Desktop\Messi.jpg"
img = cv2.imread(img_path)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
dst_img_gray = cv2.Laplacian(img_gray, cv2.CV_32F)

# Post-processing method 1: take the absolute value and convert to uint8
laplacian_edge = cv2.convertScaleAbs(dst_img_gray)
# convertScaleAbs is roughly equivalent to:
# laplacian_edge = np.abs(dst_img_gray)
# laplacian_edge = np.clip(laplacian_edge, 0, 255)
# laplacian_edge = laplacian_edge.astype(np.uint8)

# Post-processing method 2: simple thresholding
laplacian_edge2 = np.copy(laplacian_edge)
# laplacian_edge2[laplacian_edge > 0] = 255
laplacian_edge2[laplacian_edge > 255] = 255
laplacian_edge2[laplacian_edge <= 0] = 0
laplacian_edge2 = laplacian_edge2.astype(np.uint8)

# Post-processing method 3: smooth the Laplacian response first, then convert
gaussian_img_gray = cv2.GaussianBlur(dst_img_gray, (3, 3), 1)
laplacian_edge3 = cv2.convertScaleAbs(gaussian_img_gray)

cv2.imshow("img_gray", img_gray)
cv2.imshow("laplacian_edge", laplacian_edge)
cv2.imshow("laplacian_edge2", laplacian_edge2)
cv2.imshow("laplacian_edge3", laplacian_edge3)
cv2.waitKey(0)
cv2.destroyAllWindows()
2. Laplacian of Gaussian (LoG) edge detection
The Laplacian operator does not smooth the image, so it responds strongly to noise. The image is therefore usually smoothed with a Gaussian filter first and the Laplacian is applied afterwards, but that requires two convolutions. Laplacian of Gaussian (LoG) edge detection combines the two into a single kernel, so only one convolution is needed.
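For reference, the LoG kernel is obtained by applying the Laplacian to a 2-D Gaussian G(x, y) = 1/(2πσ²)·exp(-(x² + y²)/(2σ²)):

∇²G(x, y) = (x² + y² - 2σ²)/(2πσ⁶) · exp(-(x² + y²)/(2σ²))

This is the expression that the createLoGKernel function below evaluates, with the constant factor 1/(2πσ⁴) dropped.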
Below is an example of a 3×3 LoG kernel with standard deviation 1:
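Plugging σ = 1 into the unnormalized expression (x² + y² - 2)·exp(-(x² + y²)/2) used by createLoGKernel below gives approximately:

 0        -0.6065    0
-0.6065   -2        -0.6065
 0        -0.6065    0

(the corner entries are exactly 0 because x² + y² = 2σ² there).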
A Python implementation of the Laplacian of Gaussian (LoG), together with its results, is shown below:

#coding:utf-8
import numpy as np
from scipy import signal
import cv2

def createLoGKernel(sigma, size):
    H, W = size
    r, c = np.mgrid[0:H:1.0, 0:W:1.0]
    r -= (H-1)/2
    c -= (W-1)/2
    sigma2 = np.power(sigma, 2.0)
    norm2 = np.power(r, 2.0) + np.power(c, 2.0)
    LoGKernel = (norm2/sigma2 - 2)*np.exp(-norm2/(2*sigma2))  # the constant factor 1/(2*pi*sigma^4) is omitted
    print(LoGKernel)
    return LoGKernel

def LoG(image, sigma, size, _boundary='symm'):
    LoGKernel = createLoGKernel(sigma, size)
    edge = signal.convolve2d(image, LoGKernel, 'same', boundary=_boundary)
    return edge

if __name__ == "__main__":
    img_path = r"C:\Users\silence_cho\Desktop\Messi.jpg"
    img = cv2.imread(img_path, 0)

    LoG_edge = LoG(img, 1, (11, 11))
    LoG_edge[LoG_edge > 255] = 255
    # LoG_edge[LoG_edge > 255] = 0
    LoG_edge[LoG_edge < 0] = 0
    LoG_edge = LoG_edge.astype(np.uint8)

    LoG_edge1 = LoG(img, 1, (37, 37))
    LoG_edge1[LoG_edge1 > 255] = 255
    LoG_edge1[LoG_edge1 < 0] = 0
    LoG_edge1 = LoG_edge1.astype(np.uint8)

    LoG_edge2 = LoG(img, 2, (11, 11))
    LoG_edge2[LoG_edge2 > 255] = 255
    LoG_edge2[LoG_edge2 < 0] = 0
    LoG_edge2 = LoG_edge2.astype(np.uint8)

    cv2.imshow("img", img)
    cv2.imshow("LoG_edge", LoG_edge)
    cv2.imshow("LoG_edge1", LoG_edge1)
    cv2.imshow("LoG_edge2", LoG_edge2)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
3. Difference of Gaussians (DoG) edge detection
The Difference of Gaussians (DoG) is an approximation of the Laplacian of Gaussian (LoG). The relationship between the two is derived as follows:
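A quick sketch of the standard derivation (stated here for reference; it is consistent with the σ²(k-1) normalization used in the DoG() function below): for the 2-D Gaussian G(x, y, σ) one has ∂G/∂σ = σ·∇²G, and approximating this derivative with a finite difference gives

G(x, y, kσ) - G(x, y, σ) ≈ (kσ - σ)·∂G/∂σ = (k - 1)σ²·∇²G

so dividing the difference of the two Gaussian-smoothed images by σ²(k - 1) approximates the LoG response.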
The steps of the Difference of Gaussians (DoG) edge detection algorithm are as follows:

- Construct an H×W DoG kernel from two Gaussians with standard deviations σ and kσ (H and W are usually odd and equal)
- Convolve the image with the two Gaussian kernels and compute the difference of the two results
- Post-process the resulting edges
The Python implementation of the DoG edge extraction algorithm, with its results, is shown below:

#coding:utf-8
import cv2
import numpy as np
from scipy import signal

# Split the 2-D Gaussian kernel into horizontal and vertical 1-D kernels and convolve with each in turn
def gaussConv(image, size, sigma):
    H, W = size
    # convolve with the horizontal 1-D Gaussian kernel first
    xr, xc = np.mgrid[0:1, 0:W]
    xc = xc.astype(np.float32)
    xc -= (W-1.0)/2.0
    xk = np.exp(-np.power(xc, 2.0)/(2*sigma*sigma))
    image_xk = signal.convolve2d(image, xk, 'same', 'symm')
    # then convolve with the vertical 1-D Gaussian kernel
    yr, yc = np.mgrid[0:H, 0:1]
    yr = yr.astype(np.float32)
    yr -= (H-1.0)/2.0
    yk = np.exp(-np.power(yr, 2.0)/(2*sigma*sigma))
    image_yk = signal.convolve2d(image_xk, yk, 'same', 'symm')
    image_conv = image_yk/(2*np.pi*np.power(sigma, 2.0))
    return image_conv

# Convolve directly with the full 2-D Gaussian kernel
def gaussConv2(image, size, sigma):
    H, W = size
    r, c = np.mgrid[0:H:1.0, 0:W:1.0]
    c -= (W - 1.0) / 2.0
    r -= (H - 1.0) / 2.0
    sigma2 = np.power(sigma, 2.0)
    norm2 = np.power(r, 2.0) + np.power(c, 2.0)
    gaussKernel = (1 / (2*np.pi*sigma2)) * np.exp(-norm2 / (2 * sigma2))
    image_conv = signal.convolve2d(image, gaussKernel, 'same', 'symm')
    return image_conv

def DoG(image, size, sigma, k=1.1):
    Is = gaussConv(image, size, sigma)
    Isk = gaussConv(image, size, sigma*k)
    # Is = gaussConv2(image, size, sigma)
    # Isk = gaussConv2(image, size, sigma * k)
    doG = Isk - Is
    doG /= (np.power(sigma, 2.0)*(k-1))
    return doG

if __name__ == "__main__":
    img_path = r"C:\Users\silence_cho\Desktop\Messi.jpg"
    img = cv2.imread(img_path, 0)
    sigma = 1
    k = 1.1
    size = (7, 7)

    DoG_edge = DoG(img, size, sigma, k)
    DoG_edge[DoG_edge > 255] = 255
    DoG_edge[DoG_edge < 0] = 0
    DoG_edge = DoG_edge / np.max(DoG_edge)
    DoG_edge = DoG_edge * 255
    DoG_edge = DoG_edge.astype(np.uint8)

    cv2.imshow("img", img)
    cv2.imshow("DoG_edge", DoG_edge)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
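The same DoG response can also be sketched with OpenCV's built-in Gaussian smoothing instead of the hand-written convolutions above. This is only a rough equivalent, assuming cv2.GaussianBlur with standard deviations σ and kσ and the same σ²(k-1) normalization:

#coding:utf-8
import cv2
import numpy as np

img = cv2.imread(r"C:\Users\silence_cho\Desktop\Messi.jpg", 0).astype(np.float32)

sigma, k = 1.0, 1.1
blur_small = cv2.GaussianBlur(img, (7, 7), sigma)       # Gaussian smoothing with sigma
blur_large = cv2.GaussianBlur(img, (7, 7), sigma * k)   # Gaussian smoothing with k*sigma

dog = (blur_large - blur_small) / (sigma ** 2 * (k - 1))  # same normalization as DoG() above

cv2.imshow("DoG_opencv", cv2.convertScaleAbs(dog))
cv2.waitKey(0)
cv2.destroyAllWindows()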
4. Marr-Hildreth edge detection algorithm
LoG and DoG edge detection only apply simple thresholding after the edge response is computed. Marr-Hildreth refines these edges further, making them more precise and thinner, much like Canny refines the edges produced by the Sobel operator.
Marr-Hildreth edge detection can be broken down into three steps:
- Construct an H×W Laplacian of Gaussian (LoG) kernel or Difference of Gaussians (DoG) kernel
- Convolve the image matrix with the LoG or DoG kernel
- In the result of the second step, find the zero-crossing positions; these zero crossings are the edge locations
The third step can be understood as follows: the result of convolving with a LoG or DoG kernel is a second derivative; a zero of the second derivative corresponds to an extremum of the first derivative, and an extremum of the first derivative is where the signal changes most sharply. Mapped to edge extraction, a pixel where the second derivative crosses zero is therefore the point where the intensity changes most strongly, i.e. the most likely edge location.
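A minimal 1-D sketch of this idea (the toy step signal here is only for illustration):

#coding:utf-8
import numpy as np

# a 1-D step edge: the intensity jumps from 0 to 10 between index 3 and index 4
signal_1d = np.array([0, 0, 0, 0, 10, 10, 10, 10], dtype=np.float32)

first_diff = np.diff(signal_1d)       # first derivative: a single peak at the edge
second_diff = np.diff(signal_1d, 2)   # second derivative: +10 followed by -10 around the edge

print(first_diff)    # [ 0.  0.  0. 10.  0.  0.  0.]
print(second_diff)   # [ 0.  0. 10. -10.  0.  0.]
# the sign change (zero crossing) in the second derivative marks the edge position,
# which is exactly what Marr-Hildreth looks for in the 2-D LoG/DoG response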
For a continuous function g(x), if g(x1)·g(x2) < 0, i.e. g(x1) and g(x2) have opposite signs, then there must exist an x between x1 and x2 with g(x) = 0; such an x is a zero crossing of g(x). For images, Marr-Hildreth considers the following four cases for each pixel (r, c) and checks whether the corresponding pair of neighbouring pixels have opposite signs (this is what zero_cross_default implements below):

- left and right neighbours: g(r, c-1) and g(r, c+1)
- upper and lower neighbours: g(r-1, c) and g(r+1, c)
- main-diagonal neighbours: g(r-1, c-1) and g(r+1, c+1)
- anti-diagonal neighbours: g(r-1, c+1) and g(r+1, c-1)

If any of the four pairs have opposite signs, the pixel (r, c) is marked as an edge point.
A Python implementation of the Marr-Hildreth edge detection algorithm, with its results, is shown below:

#coding:utf-8
import cv2
import numpy as np
from scipy import signal

# Split the 2-D Gaussian kernel into horizontal and vertical 1-D kernels and convolve with each in turn
def gaussConv(image, size, sigma):
    H, W = size
    # convolve with the horizontal 1-D Gaussian kernel first
    xr, xc = np.mgrid[0:1, 0:W]
    xc = xc.astype(np.float32)
    xc -= (W-1.0)/2.0
    xk = np.exp(-np.power(xc, 2.0)/(2*sigma*sigma))
    image_xk = signal.convolve2d(image, xk, 'same', 'symm')
    # then convolve with the vertical 1-D Gaussian kernel
    yr, yc = np.mgrid[0:H, 0:1]
    yr = yr.astype(np.float32)
    yr -= (H-1.0)/2.0
    yk = np.exp(-np.power(yr, 2.0)/(2*sigma*sigma))
    image_yk = signal.convolve2d(image_xk, yk, 'same', 'symm')
    image_conv = image_yk/(2*np.pi*np.power(sigma, 2.0))
    return image_conv

def DoG(image, size, sigma, k=1.1):
    Is = gaussConv(image, size, sigma)
    Isk = gaussConv(image, size, sigma*k)
    doG = Isk - Is
    doG /= (np.power(sigma, 2.0)*(k-1))
    return doG

# Mark a pixel as an edge point if any of the four neighbour pairs around it changes sign
def zero_cross_default(doG):
    zero_cross = np.zeros(doG.shape, np.uint8)
    rows, cols = doG.shape
    for r in range(1, rows-1):
        for c in range(1, cols-1):
            if doG[r][c-1]*doG[r][c+1] < 0:          # left/right neighbours
                zero_cross[r][c] = 255
                continue
            if doG[r-1][c]*doG[r+1][c] < 0:          # upper/lower neighbours
                zero_cross[r][c] = 255
                continue
            if doG[r-1][c-1]*doG[r+1][c+1] < 0:      # main-diagonal neighbours
                zero_cross[r][c] = 255
                continue
            if doG[r-1][c+1]*doG[r+1][c-1] < 0:      # anti-diagonal neighbours
                zero_cross[r][c] = 255
                continue
    return zero_cross

def Marr_Hildreth(image, size, sigma, k=1.1):
    doG = DoG(image, size, sigma, k)
    zero_cross = zero_cross_default(doG)
    return zero_cross

if __name__ == "__main__":
    img_path = r"C:\Users\silence_cho\Desktop\Messi.jpg"
    img = cv2.imread(img_path, 0)
    k = 1.1
    marri_edge = Marr_Hildreth(img, (11, 11), 1, k)
    marri_edge2 = Marr_Hildreth(img, (11, 11), 2, k)
    marri_edge3 = Marr_Hildreth(img, (7, 7), 1, k)

    cv2.imshow("img", img)
    cv2.imshow("marri_edge", marri_edge)
    cv2.imshow("marri_edge2", marri_edge2)
    cv2.imshow("marri_edge3", marri_edge3)
    cv2.waitKey(0)
    cv2.destroyAllWindows()