Keypoint Detection with GoodFeaturesToTrack --- 29


Original blog post; please credit the source when reposting: http://www.cnblogs.com/zxouxuewei/

 

Keypoints are regions where the image intensity changes strongly in multiple directions.
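To make that idea concrete: "strong change in multiple directions" corresponds to the minimum eigenvalue of the local gradient matrix being large, which is the corner response that goodFeaturesToTrack() thresholds internally when the Harris detector is not used. The following rough sketch is my own illustration, not part of the original post; "frame.png" is a hypothetical input image.

# Compute the Shi-Tomasi (minimum eigenvalue) corner response directly
import cv2
import numpy as np

grey = cv2.imread("frame.png", 0)            # load the image as greyscale
response = cv2.cornerMinEigenVal(grey, 10)   # min-eigenvalue response over a 10x10 window
strong = response > 0.02 * response.max()    # same idea as qualityLevel = 0.02
print np.count_nonzero(strong), "pixels pass the corner-response threshold"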

OpenCV version: 2.4.

Detector: OpenCV offers a large number of keypoint detectors; in this post we use goodFeaturesToTrack().

The corresponding launch file is good_features.launch.

The parameters we pass to the detector are listed below (a minimal usage sketch follows the list):

    maxCorners : the maximum number of keypoints to return.
    qualityLevel : how strong a pixel's corner response must be, relative to the best corner in the image, before it is accepted as a keypoint.
    minDistance : the minimum distance, in pixels, between keypoints.
    blockSize : the size of the neighborhood around a pixel used when computing whether that pixel is a keypoint.
    useHarrisDetector : whether to use the original Harris corner detector or the minimum-eigenvalue criterion.
    k : a free parameter used by the Harris detector.
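
Here is a rough standalone sketch of what a call with these parameters looks like on a single greyscale image. This is my own illustration, not part of the rbx1 package; "frame.png" and "corners.png" are hypothetical file names.

# Minimal goodFeaturesToTrack() example with the parameters described above
import cv2
import numpy as np

grey = cv2.imread("frame.png", 0)   # load the image as greyscale
corners = cv2.goodFeaturesToTrack(grey,
                                  maxCorners = 200,
                                  qualityLevel = 0.02,
                                  minDistance = 7,
                                  blockSize = 10,
                                  useHarrisDetector = True,
                                  k = 0.04)

# Draw each detected keypoint as a small filled circle
if corners is not None:
    for x, y in np.float32(corners).reshape(-1, 2):
        cv2.circle(grey, (int(x), int(y)), 3, 255, -1)

cv2.imwrite("corners.png", grey)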

First, make sure your Kinect driver or UVC camera driver starts correctly (if you are using a Kinect, run the OpenNI driver):

roslaunch openni_launch openni.launch

  If you have not installed the Kinect depth camera driver, see my earlier posts.

Then run the following launch file:

roslaunch rbx1_vision good_features.launch

When the video window appears, draw a rectangle with the mouse around some object in the image. The rectangle marks the selected region, and you will see small green dots appear inside it: these are the keypoints that the goodFeaturesToTrack() detector found in that region.

Here is the result I got when running it:

Now let's look at the code, mainly the good_features.py script.

#!/usr/bin/env python

""" good_features.py - Version 1.1 2013-12-20
    Locate the Good Features To Track in a video stream.
    
    Created for the Pi Robot Project: http://www.pirobot.org
    Copyright (c) 2011 Patrick Goebel.  All rights reserved.
    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.
    
    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details at:
    
    http://www.gnu.org/licenses/gpl.html
"""

import rospy
import cv2
import cv2.cv as cv
from rbx1_vision.ros2opencv2 import ROS2OpenCV2
import numpy as np

class GoodFeatures(ROS2OpenCV2):
    def __init__(self, node_name): 
        super(GoodFeatures, self).__init__(node_name)
          
        # Do we show text on the display?
        self.show_text = rospy.get_param("~show_text", True)
        
        # How big should the feature points be (in pixels)?
        self.feature_size = rospy.get_param("~feature_size", 1)
        
        # Good features parameters
        self.gf_maxCorners = rospy.get_param("~gf_maxCorners", 200)
        self.gf_qualityLevel = rospy.get_param("~gf_qualityLevel", 0.02)
        self.gf_minDistance = rospy.get_param("~gf_minDistance", 7)
        self.gf_blockSize = rospy.get_param("~gf_blockSize", 10)
        self.gf_useHarrisDetector = rospy.get_param("~gf_useHarrisDetector", True)
        self.gf_k = rospy.get_param("~gf_k", 0.04)
        
        # Store all parameters together for passing to the detector
        self.gf_params = dict(maxCorners = self.gf_maxCorners, 
                       qualityLevel = self.gf_qualityLevel,
                       minDistance = self.gf_minDistance,
                       blockSize = self.gf_blockSize,
                       useHarrisDetector = self.gf_useHarrisDetector,
                       k = self.gf_k)

        # Initialize key variables
        self.keypoints = list()
        self.detect_box = None
        self.mask = None
        
    def process_image(self, cv_image):
        try:
            # If the user has not selected a region, just return the image
            if not self.detect_box:
                return cv_image
    
            # Create a greyscale version of the image
            grey = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)
            
            # Equalize the histogram to reduce lighting effects
            grey = cv2.equalizeHist(grey)
    
            # Get the good feature keypoints in the selected region
            keypoints = self.get_keypoints(grey, self.detect_box)
            
            # If we have points, display them
            if keypoints is not None and len(keypoints) > 0:
                for x, y in keypoints:
                    cv2.circle(self.marker_image, (x, y), self.feature_size, (0, 255, 0, 0), cv.CV_FILLED, 8, 0)
            
            # Process any special keyboard commands
            if self.keystroke != -1:
                try:
                    cc = chr(self.keystroke & 255).lower()
                    if cc == 'c':
                        # Clear the current keypoints
                        keypoints = list()
                        self.detect_box = None
                except:
                    pass
        except:
            pass
                                
        return cv_image

    def get_keypoints(self, input_image, detect_box):
        # Initialize the mask with all black pixels
        self.mask = np.zeros_like(input_image)
 
        # Get the coordinates and dimensions of the detect_box
        try:
            x, y, w, h = detect_box
        except: 
            return None
        
        # Set the selected rectangle within the mask to white
        self.mask[y:y+h, x:x+w] = 255

        # Compute the good feature keypoints within the selected region
        keypoints = list()
        kp = cv2.goodFeaturesToTrack(input_image, mask = self.mask, **self.gf_params)
        if kp is not None and len(kp) > 0:
            for x, y in np.float32(kp).reshape(-1, 2):
                keypoints.append((x, y))
                
        return keypoints

if __name__ == '__main__':
    try:
        node_name = "good_features"
        GoodFeatures(node_name)
        rospy.spin()
    except KeyboardInterrupt:
        print "Shutting down the Good Features node."
        cv.DestroyAllWindows()
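
If you want to experiment with the detector parameters without editing the launch file, they can also be overridden as private ROS parameters when starting the node directly. This is a hedged example: it assumes the script can be started with rosrun from the rbx1_vision package, and the values shown are arbitrary.

rosrun rbx1_vision good_features.py _gf_maxCorners:=100 _gf_qualityLevel:=0.05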

 

