Kinect+OpenNI Study Notes 13 (Designing the Kinect driver class, the OpenCV display class, and the hand pre-segmentation class)


 

  Foreword

  To speed up development in later projects, this experiment packages three pieces of functionality into separate classes so they can be ported and extended later: driving the Kinect at the bottom layer with OpenNI, pre-processing the raw data obtained from OpenNI with OpenCV, and the segmentation step of gesture recognition (the final goal of this system is gesture recognition). Several earlier posts already touched on the design of these three parts, for example: Kinect+OpenNI Study Notes 3 (Designing a class that fetches Kinect data and displays it in Qt), Kinect+OpenNI Study Notes 11 (Designing gesture-related classes for driving the Kinect with OpenNI), and Kinect+OpenNI Study Notes 12 (Recognizing digits represented by simple hand gestures). This post consolidates those earlier designs and optimizes the classes.

  Development environment: QtCreator 2.5.1 + OpenNI 1.5.4.0 + Qt 4.8.2 + OpenCV 2.4.3

 

  Experiment basics

  OpenNI/OpenCV notes:

  Designed in isolation, the Kinect driver class, the OpenCV display class, and the hand pre-segmentation class are quite simple given the earlier posts. But the three classes interact, and a poor design makes the image display very laggy. The following points need attention (in the code below, the Kinect driver class is COpenniHand, the OpenCV display class is CKinectOpenCV, and the hand pre-segmentation class is CKinectHandSegment):

  The Kinect driver class contains the complete Kinect driving procedure (which takes a noticeable amount of time), and the OpenCV display class calls into the driver class, which already amounts to one full driving pass. The hand pre-segmentation stage needs the hand's center point; if that class ran the Kinect driver again just to obtain it, each pass through the system would drive the Kinect twice, wasting resources and making the display stutter. The OpenCV display class should therefore also expose whatever the driver class can return, such as the hand center positions.

  Since CKinectOpenCV has to return the hand center positions, the original plan was a public accessor function for them, but a function whose return type is a map kept failing at run time (in theory it should not). It was therefore changed to return the hand-center variable (of map type) directly. Before this variable is returned, its value must be up to date, so the driver's Update function has to be called first. I designed this as a switch function: if fetching the hand centers is allowed, the switch function's parameter is set to true; see the code for details.
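The switch-function idea can be sketched independently of OpenNI. The names below (`FakeDriver`, `DisplayLayer`, `EnableHandPoints`) are invented for illustration and are not the identifiers used in this project; the point is only that the map is exposed as data refreshed on demand, rather than through a map-returning accessor:

```cpp
#include <cassert>
#include <map>

// Stand-in for the Kinect driver layer: UpdateData() is the expensive call.
struct FakeDriver {
    int update_calls = 0;
    std::map<int, float> hand_points;   // hand id -> (here, just a depth value)
    void UpdateData() { ++update_calls; hand_points[1] = 1.5f; }
};

// Display-layer class: hands out the driver's hand-point map directly,
// refreshed right before it is read when the switch is on.
class DisplayLayer {
public:
    explicit DisplayLayer(FakeDriver &drv) : driver_(drv) {}

    // Switch function: when enabled, the hand points are refreshed from the
    // driver just before being returned, so the caller sees live data.
    void EnableHandPoints(bool on) { refresh_hand_points_ = on; }

    const std::map<int, float> &HandPoints() {
        if (refresh_hand_points_)
            driver_.UpdateData();       // keep the returned variable up to date
        return driver_.hand_points;
    }

private:
    FakeDriver &driver_;
    bool refresh_hand_points_ = false;
};
```

With the switch off, reading the points costs nothing extra; turning it on buys freshness at the price of one driver update per read.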

  C/C++ notes:

  After defining an object of a class, one normally calls the class's initialization function first, which performs the initial setup of the member variables. But if certain variables must be re-initialized before every call to some other member function, their initialization does not belong in the class's initialization function; it should go into a separate private function that is called at the start of each function that needs it.
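A minimal sketch of that split (the names `Counter`, `Initial`, and `ResetPerCall` are made up for the example): state that survives across calls is set up once in the initialization function, while state that must be fresh on every call is reset by a private helper invoked at the top of the function that uses it.

```cpp
#include <cassert>

class Counter {
public:
    // One-time setup: only state that persists across calls goes here.
    void Initial() { total_ = 0; }

    int CountEvens(const int *v, int n) {
        ResetPerCall();                 // per-call state is reset here...
        for (int i = 0; i < n; ++i)
            if (v[i] % 2 == 0) ++matches_;
        total_ += matches_;
        return matches_;
    }

    int total() const { return total_; }

private:
    // ...not in Initial(), which runs only once per object.
    void ResetPerCall() { matches_ = 0; }

    int matches_ = 0;
    int total_ = 0;
};
```

Without the private reset, the second call would keep accumulating `matches_` from the first and return the wrong count.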

  Classes are designed partly for convenience and partly for efficiency, and sometimes convenience alone cannot drive the design. For example, this design needs to hand out the segmented image, the original image, the depth image, and so on. Each of those could indeed be its own function that returns an image, but then every image fetched would update the Kinect driver's data once more, which is very inefficient. In the actual design I put the device-driving code into a single function, and that function must not be called by each image getter (otherwise the device would again be driven multiple times), so it is called directly on the object instead. As a result, each iteration of the main loop calls one extra function on the object before fetching the needed images: one extra line of code, but a large saving in time.
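The trade-off can be counted explicitly. In this sketch (hypothetical names; the `GetAllInformation` call mirrors the role it plays in CKinectOpenCV below), the device is updated exactly once per loop iteration, no matter how many images are fetched afterwards:

```cpp
#include <cassert>
#include <string>

// Stand-in for the device driver: Update() is the per-frame cost paid once.
struct MockDevice {
    int updates = 0;
    void Update() { ++updates; }
};

class FrameSource {
public:
    explicit FrameSource(MockDevice &dev) : dev_(dev) {}

    // Called once per main-loop iteration by the object's owner; every
    // getter afterwards reads the cached results instead of re-driving.
    void GetAllInformation() {
        dev_.Update();
        color_ = "color@" + std::to_string(dev_.updates);
        depth_ = "depth@" + std::to_string(dev_.updates);
    }

    std::string GetColorImage() const { return color_; }
    std::string GetDepthImage() const { return depth_; }

private:
    MockDevice &dev_;
    std::string color_, depth_;
};
```

If each getter drove the device itself, fetching two images per frame would double the driver cost; caching keeps it constant regardless of how many getters are called.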

 

  Experiment results

  As before, this experiment obtains the Kinect depth image, color image, hand segmentation image, hand contour image, and so on. Below are the hand segmentation and contour-processing results:

  [Figure: hand segmentation result and contour-processing result]

 

  Experiment code and comments:

  copennihand.h:

#ifndef COpenniHand_H
#define COpenniHand_H

#include <XnCppWrapper.h>
#include <iostream>
#include <vector>
#include <map>

using namespace xn;
using namespace std;

class COpenniHand
{
public:
    COpenniHand();
    ~COpenniHand();

    /*Internal initialization and property setup for OpenNI*/
    bool Initial();

    /*Start OpenNI and begin reading Kinect data*/
    bool Start();

    /*Update the data read from the Kinect*/
    bool UpdateData();

    /*Get the color image node*/
    ImageGenerator& getImageGenerator();

    /*Get the depth image node*/
    DepthGenerator& getDepthGenerator();

    /*Get the gesture node*/
    GestureGenerator& getGestureGenerator();

    /*Get the hands node*/
    HandsGenerator& getHandGenerator();
    DepthMetaData depth_metadata_;   //depth image data returned to callers
    ImageMetaData image_metadata_;   //color image data returned to callers
    std::map<XnUserID, XnPoint3D> hand_points_;  //stores the current point of each tracked hand
    std::map< XnUserID, vector<XnPoint3D> > hands_track_points_; //stores the trajectory points of each hand, for drawing the hand tracks later

private:
    /*Returns true if an error occurred, false otherwise*/
    bool CheckError(const char* error);

    /*Callback invoked when a gesture has been fully recognized*/
    static void XN_CALLBACK_TYPE  CBGestureRecognized(xn::GestureGenerator &generator, const XnChar *strGesture,
                                                      const XnPoint3D *pIDPosition, const XnPoint3D *pEndPosition,
                                                      void *pCookie);

    /*Callback invoked when the start of a gesture is detected*/
    static void XN_CALLBACK_TYPE CBGestureProgress(xn::GestureGenerator &generator, const XnChar *strGesture,
                                                   const XnPoint3D *pPosition, XnFloat fProgress, void *pCookie);

    /*Callback invoked when hand tracking starts*/
    static void XN_CALLBACK_TYPE HandCreate(HandsGenerator& rHands, XnUserID xUID, const XnPoint3D* pPosition,
                                            XnFloat fTime, void* pCookie);

    /*Callback invoked when a hand position is updated*/
    static void XN_CALLBACK_TYPE HandUpdate(HandsGenerator& rHands, XnUserID xUID, const XnPoint3D* pPosition, XnFloat fTime,
                                            void* pCookie);

    /*Callback invoked when a hand is lost (tracking destroyed)*/
    static void XN_CALLBACK_TYPE HandDestroy(HandsGenerator& rHands, XnUserID xUID, XnFloat fTime, void* pCookie);

    XnStatus status_;
    Context context_;
    XnMapOutputMode xmode_;
    ImageGenerator  image_generator_;
    DepthGenerator  depth_generator_;
    GestureGenerator gesture_generator_;
    HandsGenerator  hand_generator_;
};

#endif // COpenniHand_H

 

  copennihand.cpp:

#include "copennihand.h"
#include <XnCppWrapper.h>
#include <iostream>
#include <map>

using namespace xn;
using namespace std;

COpenniHand::COpenniHand()
{
}

COpenniHand::~COpenniHand()
{
}

bool COpenniHand::Initial()
{
    status_ = context_.Init();
    if(CheckError("Context initial failed!")) {
        return false;
    }

    context_.SetGlobalMirror(true);//enable mirroring
    xmode_.nXRes = 640;
    xmode_.nYRes = 480;
    xmode_.nFPS = 30;

    //create the color image node
    status_ = image_generator_.Create(context_);
    if(CheckError("Create image generator error!")) {
        return false;
    }

    //set the color image output mode
    status_ = image_generator_.SetMapOutputMode(xmode_);
    if(CheckError("SetMapOutputMode error!")) {
        return false;
    }

    //create the depth node
    status_ = depth_generator_.Create(context_);
    if(CheckError("Create depth generator error!")) {
        return false;
    }

    //set the depth image output mode
    status_ = depth_generator_.SetMapOutputMode(xmode_);
    if(CheckError("SetMapOutputMode error!")) {
        return false;
    }

    //create the gesture node
    status_ = gesture_generator_.Create(context_);
    if(CheckError("Create gesture generator error!")) {
        return false;
    }

    /*Register the gesture types to detect*/
    gesture_generator_.AddGesture("Wave", NULL);
    gesture_generator_.AddGesture("click", NULL);
    gesture_generator_.AddGesture("RaiseHand", NULL);
    gesture_generator_.AddGesture("MovingHand", NULL);

    //create the hands node
    status_ = hand_generator_.Create(context_);
    if(CheckError("Create hand generator error!")) {
        return false;
    }

    //viewpoint alignment: align the depth map with the color camera
    status_ = depth_generator_.GetAlternativeViewPointCap().SetViewPoint(image_generator_);
    if(CheckError("Can't set the alternative view point on depth generator!")) {
        return false;
    }

    //register the gesture-related callbacks
    XnCallbackHandle gesture_cb;
    gesture_generator_.RegisterGestureCallbacks(CBGestureRecognized, CBGestureProgress, this, gesture_cb);

    //register the hand-related callbacks
    XnCallbackHandle hands_cb;
    hand_generator_.RegisterHandCallbacks(HandCreate, HandUpdate, HandDestroy, this, hands_cb);

    return true;
}

bool COpenniHand::Start()
{
    status_ = context_.StartGeneratingAll();
    if(CheckError("Start generating error!")) {
        return false;
    }
    return true;
}

bool COpenniHand::UpdateData()
{
    status_ = context_.WaitNoneUpdateAll();
    if(CheckError("Update data error!")) {
        return false;
    }
    //fetch the data
    image_generator_.GetMetaData(image_metadata_);
    depth_generator_.GetMetaData(depth_metadata_);

    return true;
}

ImageGenerator &COpenniHand::getImageGenerator()
{
    return image_generator_;
}

DepthGenerator &COpenniHand::getDepthGenerator()
{
    return depth_generator_;
}

GestureGenerator &COpenniHand::getGestureGenerator()
{
    return gesture_generator_;
}

HandsGenerator &COpenniHand::getHandGenerator()
{
    return hand_generator_;
}

bool COpenniHand::CheckError(const char *error)
{
    if(status_ != XN_STATUS_OK) {
        cerr << error << ": " << xnGetStatusString( status_ ) << endl;
        return true;
    }
    return false;
}

void COpenniHand::CBGestureRecognized(GestureGenerator &generator, const XnChar *strGesture, const XnPoint3D *pIDPosition, const XnPoint3D *pEndPosition, void *pCookie)
{
    COpenniHand *openni = (COpenniHand*)pCookie;
    openni->hand_generator_.StartTracking(*pEndPosition);
}

void COpenniHand::CBGestureProgress(GestureGenerator &generator, const XnChar *strGesture, const XnPoint3D *pPosition, XnFloat fProgress, void *pCookie)
{
}

void COpenniHand::HandCreate(HandsGenerator &rHands, XnUserID xUID, const XnPoint3D *pPosition, XnFloat fTime, void *pCookie)
{
    COpenniHand *openni = (COpenniHand*)pCookie;
    XnPoint3D project_pos;
    openni->depth_generator_.ConvertRealWorldToProjective(1, pPosition, &project_pos);
    pair<XnUserID, XnPoint3D> hand_point_pair(xUID, XnPoint3D());//the second element of the pair can be default-constructed here
    hand_point_pair.second = project_pos;
    openni->hand_points_.insert(hand_point_pair);//store the newly detected hand in the map hand_points_

    pair<XnUserID, vector<XnPoint3D>> hand_track_point(xUID, vector<XnPoint3D>());
    hand_track_point.second.push_back(project_pos);
    openni->hands_track_points_.insert(hand_track_point);
}

void COpenniHand::HandUpdate(HandsGenerator &rHands, XnUserID xUID, const XnPoint3D *pPosition, XnFloat fTime, void *pCookie)
{
    COpenniHand *openni = (COpenniHand*)pCookie;
    XnPoint3D project_pos;
    openni->depth_generator_.ConvertRealWorldToProjective(1, pPosition, &project_pos);
    openni->hand_points_.find(xUID)->second = project_pos;
    openni->hands_track_points_.find(xUID)->second.push_back(project_pos);
}

void COpenniHand::HandDestroy(HandsGenerator &rHands, XnUserID xUID, XnFloat fTime, void *pCookie)
{
    COpenniHand *openni = (COpenniHand*)pCookie;
    openni->hand_points_.erase(openni->hand_points_.find(xUID));
    openni->hands_track_points_.erase(openni->hands_track_points_.find(xUID ));
}

 

  ckinectopencv.h:

#ifndef CKINECTOPENCV_H
#define CKINECTOPENCV_H

#include <opencv2/core/core.hpp>
#include "copennihand.h"

using namespace cv;

class CKinectOpenCV
{
public:
    CKinectOpenCV();
    ~CKinectOpenCV();
    void GetAllInformation();   //call this before fetching any results: OpenNI's data is continuously updated, so all processing is best kept in one function
    Mat GetColorImage() ;
    Mat GetDepthImage() ;
    std::map<XnUserID, XnPoint3D> GetHandPoints();

private:
    COpenniHand openni_hand_;
    std::map<XnUserID, XnPoint3D> hand_points_;  //stores the current point of each tracked hand
    Mat color_image_;    //color image
    Mat depth_image_;    //depth image


};

#endif // CKINECTOPENCV_H

 

  ckinectopencv.cpp:

#include "ckinectopencv.h"
#include <opencv2/imgproc/imgproc.hpp>
#include <map>

using namespace cv;
using namespace std;

#define DEPTH_SCALE_FACTOR 255./4096.

CKinectOpenCV::CKinectOpenCV()
{   
    /*initialize the OpenNI device*/
    CV_Assert(openni_hand_.Initial());

    /*start the OpenNI device*/
    CV_Assert(openni_hand_.Start());
}

CKinectOpenCV::~CKinectOpenCV()
{
}

void CKinectOpenCV::GetAllInformation()
{
    CV_Assert(openni_hand_.UpdateData());
    /*fetch the color image*/
    Mat color_image_src(openni_hand_.image_metadata_.YRes(), openni_hand_.image_metadata_.XRes(),
                        CV_8UC3, (char *)openni_hand_.image_metadata_.Data());
    cvtColor(color_image_src, color_image_, CV_RGB2BGR);

    /*fetch the depth image*/
    Mat depth_image_src(openni_hand_.depth_metadata_.YRes(), openni_hand_.depth_metadata_.XRes(),
                        CV_16UC1, (char *)openni_hand_.depth_metadata_.Data());//the depth image from the Kinect is actually unsigned 16-bit data
    depth_image_src.convertTo(depth_image_, CV_8U, DEPTH_SCALE_FACTOR);

    hand_points_ = openni_hand_.hand_points_;   //cache the hand point positions

    return;
}

Mat CKinectOpenCV::GetColorImage()
{
    return color_image_;
}

Mat CKinectOpenCV::GetDepthImage()
{
    return depth_image_;
}

std::map<XnUserID, XnPoint3D> CKinectOpenCV::GetHandPoints()
{
    return hand_points_;
}

 

  ckinecthandsegment.h:

#ifndef KINECTHAND_H
#define KINECTHAND_H

#include "ckinectopencv.h"

using namespace cv;

#define MAX_HANDS_COLOR 10
#define MAX_HANDS_NUMBER  10

class CKinectHandSegment
{
public:
    CKinectHandSegment();
    ~CKinectHandSegment();
    void Initial();
    void StartKinectHand(); //drive the Kinect hand-tracking pipeline for the current frame
    Mat GetColorImageWithHandsPoint();
    Mat GetHandSegmentImage();
    Mat GetHandHandlingImage();
    Mat GetColorImage();
    Mat GetDepthImage();


private:
    CKinectOpenCV kinect_opencv_;
    vector<Scalar> hand_center_color_array_;//10 default colors for the hand centers
    std::map<XnUserID, XnPoint3D> hand_points_;
    vector<unsigned int> hand_depth_;
    vector<Rect> hands_roi_;
    bool hand_segment_flag_;
    Mat color_image_with_handspoint_;   //color image with the hand center positions drawn on it
    Mat color_image_;   //color image
    Mat depth_image_;
    Mat hand_segment_image_;
    Mat hand_handling_image_;
    Mat hand_segment_mask_;
};

#endif // KINECTHAND_H

 

  ckinecthandsegment.cpp:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "ckinecthandsegment.h"
#include "copennihand.h"
#include "ckinectopencv.h"

using namespace cv;
using namespace std;

#define DEPTH_SCALE_FACTOR 255./4096.
#define ROI_HAND_WIDTH 140
#define ROI_HAND_HEIGHT 140
#define MEDIAN_BLUR_K 5
#define XRES  640
#define YRES  480
#define DEPTH_SEGMENT_THRESH 5
#define HAND_LIKELY_AREA 2000


CKinectHandSegment::CKinectHandSegment()
{
}


CKinectHandSegment::~CKinectHandSegment()
{
}


void CKinectHandSegment::Initial()
{
    color_image_with_handspoint_ = kinect_opencv_.GetColorImage();
    depth_image_ = kinect_opencv_.GetDepthImage();
    {
        hand_center_color_array_.push_back(Scalar(255, 0, 0));
        hand_center_color_array_.push_back(Scalar(0, 255, 0));
        hand_center_color_array_.push_back(Scalar(0, 0, 255));
        hand_center_color_array_.push_back(Scalar(255, 0, 255));
        hand_center_color_array_.push_back(Scalar(255, 255, 0));
        hand_center_color_array_.push_back(Scalar(0, 255, 255));
        hand_center_color_array_.push_back(Scalar(128, 255, 0));
        hand_center_color_array_.push_back(Scalar(0, 128, 255));
        hand_center_color_array_.push_back(Scalar(255, 0, 128));
        hand_center_color_array_.push_back(Scalar(255, 128, 255));
    }
    vector<unsigned int> hand_depth_temp(MAX_HANDS_NUMBER, 0);
    hand_depth_ = hand_depth_temp;
    vector<Rect> hands_roi_temp(MAX_HANDS_NUMBER, Rect(XRES/2, YRES/2, ROI_HAND_WIDTH, ROI_HAND_HEIGHT));
    hands_roi_ = hands_roi_temp;
}


void CKinectHandSegment::StartKinectHand()
{
    kinect_opencv_.GetAllInformation();
}


Mat CKinectHandSegment::GetColorImage()
{
    return kinect_opencv_.GetColorImage();
}


Mat CKinectHandSegment::GetDepthImage()
{
    return kinect_opencv_.GetDepthImage();
}


/*This function only draws the hand center points onto the color image from the Kinect; the rest of the image is unchanged*/
Mat CKinectHandSegment::GetColorImageWithHandsPoint()
{
    color_image_with_handspoint_ = kinect_opencv_.GetColorImage();
    hand_points_ = kinect_opencv_.GetHandPoints();
    for(auto itUser = hand_points_.cbegin(); itUser != hand_points_.cend(); ++itUser) {
        circle(color_image_with_handspoint_, Point(itUser->second.X, itUser->second.Y),
               5, hand_center_color_array_.at(itUser->first % hand_center_color_array_.size()), 3, 8);
    }

    return color_image_with_handspoint_;
}


Mat CKinectHandSegment::GetHandSegmentImage()
{
    hand_segment_flag_ = false;
    color_image_ = kinect_opencv_.GetColorImage();
    depth_image_ = kinect_opencv_.GetDepthImage();
    hand_points_ = kinect_opencv_.GetHandPoints();
    hand_segment_mask_ = Mat::zeros(color_image_.size(), CV_8UC1);  //zeros() is a static function, so it is called through the class rather than through a specific object

    for(auto itUser = hand_points_.cbegin(); itUser != hand_points_.cend(); ++itUser) {

        /*set the depth for each hand*/
        hand_depth_.at(itUser->first % MAX_HANDS_COLOR) = (unsigned int)(itUser->second.Z * DEPTH_SCALE_FACTOR);//indexing with itUser->first directly caused a bug, hence the modulus

        /*set a separate region of interest for each hand, clamped to the image bounds*/
        Rect &roi = hands_roi_.at(itUser->first % MAX_HANDS_NUMBER);
        roi = Rect(itUser->second.X - ROI_HAND_WIDTH/2, itUser->second.Y - ROI_HAND_HEIGHT/2,
                   ROI_HAND_WIDTH, ROI_HAND_HEIGHT);
        if(roi.x <= 0)
            roi.x = 0;
        if(roi.x > XRES)
            roi.x = XRES;
        if(roi.y <= 0)
            roi.y = 0;
        if(roi.y > YRES)
            roi.y = YRES;
    }

    //extract the hand mask; no matter how many channels the source image has, a single-channel mask is enough
    for(auto itUser = hand_points_.cbegin(); itUser != hand_points_.cend(); ++itUser) {
        const Rect &roi = hands_roi_.at(itUser->first % MAX_HANDS_NUMBER);
        unsigned int hand_depth = hand_depth_.at(itUser->first % MAX_HANDS_NUMBER);
        for(int i = roi.x; i < min(roi.x + roi.width, XRES); i++)
            for(int j = roi.y; j < min(roi.y + roi.height, YRES); j++) {
                //keep pixels whose depth lies within DEPTH_SEGMENT_THRESH of the hand depth
                bool in_range = (hand_depth - DEPTH_SEGMENT_THRESH < depth_image_.at<unsigned char>(j, i))
                                && (hand_depth + DEPTH_SEGMENT_THRESH > depth_image_.at<unsigned char>(j, i));
                hand_segment_mask_.at<unsigned char>(j, i) = in_range ? 255 : 0;
            }
    }

    medianBlur(hand_segment_mask_, hand_segment_mask_, MEDIAN_BLUR_K);
    hand_segment_image_.convertTo(hand_segment_image_, CV_8UC3, 0, 0 ); //clear the image before copying the masked pixels in
    color_image_.copyTo(hand_segment_image_, hand_segment_mask_);
    hand_segment_flag_ = true;  //set the segmentation flag before returning, to mark that segmentation has completed

    return hand_segment_image_;
}


Mat CKinectHandSegment::GetHandHandlingImage()
{
    /*extract contours from the mask image and draw them on the gesture-handling image*/
    std::vector< std::vector<Point> > contours;
    CV_Assert(hand_segment_flag_);  //the mask from the segmentation function is needed below, so make sure it has been produced first
    findContours(hand_segment_mask_, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);//find the contours of the mask image
    hand_handling_image_ = Mat::zeros(color_image_.rows, color_image_.cols, CV_8UC3);

    for(int i = 0; i < contours.size(); i++) {  //the polygon, convex hull, and defects are computed only when contours are detected
        /*fit a polygonal curve to the contour*/
        Mat contour_mat = Mat(contours[i]);
        if(contourArea(contour_mat) > HAND_LIKELY_AREA) {   //regions large enough to plausibly be a hand
            std::vector<Point> approx_poly_curve;
            approxPolyDP(contour_mat, approx_poly_curve, 10, true);//polygonal approximation of the contour
            std::vector< std::vector<Point> > approx_poly_curve_debug;
            approx_poly_curve_debug.push_back(approx_poly_curve);

             drawContours(hand_handling_image_, contours, i, Scalar(255, 0, 0), 1, 8); //draw the contour
//            drawContours(hand_handling_image_, approx_poly_curve_debug, 0, Scalar(255, 128, 128), 1, 8); //draw the polygonal approximation

            /*compute the convex hull of the polygonal curve*/
            vector<int> hull;
            convexHull(Mat(approx_poly_curve), hull, true);
            for(int i = 0; i < hull.size(); i++) {
                circle(hand_handling_image_, approx_poly_curve[hull[i]], 2, Scalar(0, 255, 0), 2, 8);
            }

            /*compute the convexity defects of the polygonal curve*/
            std::vector<Vec4i> convexity_defects;
            if(Mat(approx_poly_curve).checkVector(2, CV_32S) > 3)
                convexityDefects(approx_poly_curve, Mat(hull), convexity_defects);
            for(int i = 0; i < convexity_defects.size(); i++) {
                circle(hand_handling_image_, approx_poly_curve[convexity_defects[i][2]] , 2, Scalar(0, 0, 255), 2, 8);

            }
        }
    }

    /**draw the hand center points**/
    for(auto itUser = hand_points_.cbegin(); itUser != hand_points_.cend(); ++itUser) {
        circle(hand_handling_image_, Point(itUser->second.X, itUser->second.Y), 3, Scalar(0, 255, 255), 3, 8);
    }

    return hand_handling_image_;
}

 

  main.cpp:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "ckinectopencv.h"
#include "ckinecthandsegment.h"

using namespace std;
using namespace cv;

int main()
{
    CKinectHandSegment kinect_hand_segment;
    Mat color_image;
    Mat depth_image;
    Mat hand_segment;
    Mat hand_handling_image;

    kinect_hand_segment.Initial();
    while(1)
    {
        kinect_hand_segment.StartKinectHand();
        color_image = kinect_hand_segment.GetColorImageWithHandsPoint();
        hand_segment = kinect_hand_segment.GetHandSegmentImage();
        hand_handling_image = kinect_hand_segment.GetHandHandlingImage();
        depth_image = kinect_hand_segment.GetDepthImage();

        imshow("color_image", color_image);
        imshow("depth_image", depth_image);
        imshow("hand_segment", hand_segment);
        imshow("hand_handling", hand_handling_image);
        waitKey(30);
    }

    return 0;
}

 

 

  Experiment summary: With these basic functionality classes in place, testing my later gesture recognition algorithms will be much more convenient. Keep at it!

 

 

  References:

     Kinect+OpenNI Study Notes 3 (Designing a class that fetches Kinect data and displays it in Qt)

     Kinect+OpenNI Study Notes 11 (Designing gesture-related classes for driving the Kinect with OpenNI)

     Kinect+OpenNI Study Notes 12 (Recognizing digits represented by simple hand gestures)

 

 

 

