QR code recognition based on OpenCV + Python


It took two days, but the QR code recognition finally works, although the results are only average. Later it will be used for assisted localization in ROS. Without further ado, here are the details.

The overall approach follows this blog post: http://blog.csdn.net/qq_25491201/article/details/51065547

Detailed explanation:

Step 1: Extract the QR code region with OpenCV

1. Convert the camera frame to grayscale:

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

2. Filter with OpenCV's built-in Sobel operator:

gradX = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=-1)
gradY = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=-1)

For details on the parameters, see: http://blog.csdn.net/sunny2038/article/details/9170013

3. Subtract the filtered y-direction gradient from the x-direction gradient:

gradient = cv2.subtract(gradX, gradY)

4. Scale the values, take the absolute value, and convert the result to 8-bit:

gradient = cv2.convertScaleAbs(gradient)

5. Apply a mean blur, then threshold to a binary image:

blurred = cv2.blur(gradient, (9, 9))
(_, thresh) = cv2.threshold(blurred, 160, 255, cv2.THRESH_BINARY)

6. Morphological closing, followed by erosion and dilation:

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)

closed = cv2.erode(closed, None, iterations = 4)
closed = cv2.dilate(closed, None, iterations = 4)

7. Find the contours with findContours:

binary, cnts, hierarchy = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
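A note that is not in the original post: the three return values above match OpenCV 3.x, while OpenCV 4.x returns only (contours, hierarchy). As a hedged compatibility sketch, you can grab the contours in a version-independent way, since they are always the second-to-last return value:

# Works on both OpenCV 3.x and 4.x: contours is always the second-to-last return value
result = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = result[-2]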

8. Compute the minimum-area rectangle enclosing the target:

c = sorted(cnts, key = cv2.contourArea, reverse = True)[0]
rect = cv2.minAreaRect(c)
box = np.int0(cv2.boxPoints(rect))
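
Putting steps 1-8 together, here is a minimal sketch of the detect() helper that is called as detect.detect(frame) in step 2 below. It only restates the snippets above with the imports added; the early return when no contour is found is my addition, and the whole thing assumes OpenCV 3.x, so treat it as a sketch rather than the original author's exact implementation.

# detect.py -- sketch assembled from the snippets above (assumes OpenCV 3.x)
import cv2
import numpy as np

def detect(frame):
    # 1. grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # 2. Scharr-style gradients (ksize=-1)
    gradX = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=-1)
    gradY = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=-1)
    # 3-4. subtract, take the absolute value, convert to 8-bit
    gradient = cv2.convertScaleAbs(cv2.subtract(gradX, gradY))
    # 5. mean blur and threshold
    blurred = cv2.blur(gradient, (9, 9))
    _, thresh = cv2.threshold(blurred, 160, 255, cv2.THRESH_BINARY)
    # 6. close, then erode and dilate
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
    closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    closed = cv2.erode(closed, None, iterations=4)
    closed = cv2.dilate(closed, None, iterations=4)
    # 7. external contours (OpenCV 3.x return signature)
    _, cnts, _ = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if len(cnts) == 0:
        return None
    # 8. minimum-area rectangle around the largest contour
    c = sorted(cnts, key=cv2.contourArea, reverse=True)[0]
    rect = cv2.minAreaRect(c)
    box = np.int0(cv2.boxPoints(rect))
    return box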

 

Step 2: Decode the QR code

The decoding part is fairly simple: import the zbar library and call scan.

import cv2
import numpy as np
import zbar
from PIL import Image
import detect  # the detection module from step 1

cap = cv2.VideoCapture(camera_idx)

# create a reader
scanner = zbar.ImageScanner()
# configure the reader
scanner.parse_config('enable')


ret, frame = cap.read()
box = detect.detect(frame)
if box is not None:
    # The next three lines build the scan region, which should be slightly larger than the detected box
    min_xy = np.min(box, axis=0)
    max_xy = np.max(box, axis=0)

    roi = frame[min_xy[1] - 10:max_xy[1] + 10, min_xy[0] - 10:max_xy[0] + 10]
    print(roi.shape)
    # Convert the QR region to RGB and then into a PIL image, because zbar works with PIL images rather than OpenCV images
    roi = cv2.cvtColor(roi, cv2.COLOR_BGR2RGB)
    pil = Image.fromarray(roi).convert('L')
    width, height = pil.size
    raw = pil.tostring()

    # Wrap the raw grayscale bytes in a zbar image
    zarimage = zbar.Image(width, height, 'Y800', raw)

    # Run the scanner
    scanner.scan(zarimage)
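
The post stops at scanner.scan. To actually read the decoded text, the legacy zbar Python binding lets you iterate over the scanned image, each element being a detected symbol. A minimal sketch, not part of the original post:

    # Iterate over the symbols zbar found in the scanned image
    for symbol in zarimage:
        print("decoded %s symbol: %s" % (symbol.type, symbol.data))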


 

