Comparing Two Stereo Matching Algorithms in OpenCV 3.4


 Using the Aloe stereo pair that ships with OpenCV as an example:

   

1. The BM (Block Matching) algorithm

Parameter settings:

    int numberOfDisparities = ((imgSize.width / 8) + 15) & -16;
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(16, 9);
    cv::Rect roi1, roi2;
    bm->setROI1(roi1);
    bm->setROI2(roi2);
    bm->setPreFilterCap(31);
    bm->setBlockSize(9);
    bm->setMinDisparity(0);
    bm->setNumDisparities(numberOfDisparities);
    bm->setTextureThreshold(10);
    bm->setUniquenessRatio(15);
    bm->setSpeckleWindowSize(100);
    bm->setSpeckleRange(32);
    bm->setDisp12MaxDiff(1);
    bm->compute(imgL, imgR, disp);

The result:

Disparity map produced by BM (left); disparity map after hole filling (right)

  

2. The SGBM (Semi-Global Block Matching) algorithm

Parameter settings:

enum { STEREO_BM = 0, STEREO_SGBM = 1, STEREO_HH = 2, STEREO_VAR = 3, STEREO_3WAY = 4 };
    int numberOfDisparities = ((imgSize.width / 8) + 15) & -16;
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 16, 3);
    sgbm->setPreFilterCap(63);
    int SADWindowSize = 9;
    int sgbmWinSize = SADWindowSize > 0 ? SADWindowSize : 3;
    sgbm->setBlockSize(sgbmWinSize);
    int cn = imgL.channels();
    sgbm->setP1(8 * cn*sgbmWinSize*sgbmWinSize);
    sgbm->setP2(32 * cn*sgbmWinSize*sgbmWinSize);
    sgbm->setMinDisparity(0);
    sgbm->setNumDisparities(numberOfDisparities);
    sgbm->setUniquenessRatio(10);
    sgbm->setSpeckleWindowSize(100);
    sgbm->setSpeckleRange(32);
    sgbm->setDisp12MaxDiff(1);

    int alg = STEREO_SGBM;
    if (alg == STEREO_HH)
        sgbm->setMode(cv::StereoSGBM::MODE_HH);
    else if (alg == STEREO_SGBM)
        sgbm->setMode(cv::StereoSGBM::MODE_SGBM);
    else if (alg == STEREO_3WAY)
        sgbm->setMode(cv::StereoSGBM::MODE_SGBM_3WAY);
    sgbm->compute(imgL, imgR, disp);

The result:

Disparity map produced by SGBM (left); disparity map after hole filling (right)

   

Compared with BM, the disparity map produced by SGBM contains far fewer mismatched points, especially in regions of depth discontinuity; in terms of speed, however, SGBM is slower than BM. OpenCV 3.0 and later no longer implements the GC (graph-cut) algorithm, probably for speed reasons; I will add a comparison image and a detailed analysis of each algorithm's principles when I find the time.

My own hole-filling results are not very good; if you know a better method, please share it.

 


preFilterCap(): pre-processing the input images

  •  Both stereo matching algorithms pre-process the input images first. The OpenCV source calls the function static void prefilterXSobel(const cv::Mat& src, cv::Mat& dst, int preFilterCap), which is where the preFilterCap parameter set above is used. The function's steps are listed below; it serves two main purposes: in textureless regions it suppresses noise, and at boundaries it sharpens the distinction between regions, which helps the subsequent matching-cost computation:
  1. Apply a horizontal Sobel operator to compute the x-derivative Value of the input image;
  2. If Value < -preFilterCap, set Value = 0;
    if Value > preFilterCap, set Value = 2 * preFilterCap;
    if -preFilterCap <= Value <= preFilterCap, set Value = Value + preFilterCap;
  3. Output the processed image as the input to the matching-cost computation.
static void prefilterXSobel(const cv::Mat& src, cv::Mat& dst, int ftzero)
{
    int x, y;
    const int OFS = 256 * 4, TABSZ = OFS * 2 + 256;
    uchar tab[TABSZ];
    cv::Size size = src.size();

    for (x = 0; x < TABSZ; x++)
        tab[x] = (uchar)(x - OFS < -ftzero ? 0 : x - OFS > ftzero ? ftzero * 2 : x - OFS + ftzero);
    uchar val0 = tab[0 + OFS];

    for (y = 0; y < size.height - 1; y += 2)
    {
        const uchar* srow1 = src.ptr<uchar>(y);
        const uchar* srow0 = y > 0 ? srow1 - src.step : size.height > 1 ? srow1 + src.step : srow1;
        const uchar* srow2 = y < size.height - 1 ? srow1 + src.step : size.height > 1 ? srow1 - src.step : srow1;
        const uchar* srow3 = y < size.height - 2 ? srow1 + src.step * 2 : srow1;
        uchar* dptr0 = dst.ptr<uchar>(y);
        uchar* dptr1 = dptr0 + dst.step;

        dptr0[0] = dptr0[size.width - 1] = dptr1[0] = dptr1[size.width - 1] = val0;
        x = 1;
        for (; x < size.width - 1; x++)
        {
            int d0 = srow0[x + 1] - srow0[x - 1], d1 = srow1[x + 1] - srow1[x - 1],
                d2 = srow2[x + 1] - srow2[x - 1], d3 = srow3[x + 1] - srow3[x - 1];
            int v0 = tab[d0 + d1 * 2 + d2 + OFS];
            int v1 = tab[d1 + d2 * 2 + d3 + OFS];
            dptr0[x] = (uchar)v0;
            dptr1[x] = (uchar)v1;
        }
    }

    for (; y < size.height; y++)
    {
        uchar* dptr = dst.ptr<uchar>(y);
        x = 0;
        for (; x < size.width; x++)
            dptr[x] = val0;
    }
}

A re-implementation of the same logic:

void mySobelX(cv::Mat srcImg, cv::Mat dstImg, int preFilterCap)
{
    assert(srcImg.channels() == 1);
    int radius = 1;
    int width = srcImg.cols;
    int height = srcImg.rows;
    uchar *pSrcData = srcImg.data;
    uchar *pDstData = dstImg.data;
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            int idx = i*width + j;
            if (i >= radius && i < height - radius && j >= radius && j < width - radius)
            {
                int diff0 = pSrcData[(i - 1)*width + j + 1] - pSrcData[(i - 1)*width + j - 1];
                int diff1 = pSrcData[i*width + j + 1] - pSrcData[i*width + j - 1];
                int diff2 = pSrcData[(i + 1)*width + j + 1] - pSrcData[(i + 1)*width + j - 1];

                int value = diff0 + 2 * diff1 + diff2;
                if (value < -preFilterCap)
                {
                    pDstData[idx] = 0;
                }
                else if (value >= -preFilterCap && value <= preFilterCap)
                {
                    pDstData[idx] = uchar(value + preFilterCap);
                }
                else
                {
                    pDstData[idx] = uchar(2 * preFilterCap);
                }

            }
            else
            {
                pDstData[idx] = 0;
            }
        }
    }
}

 The function's input and output:

    

 


 filterSpeckles(): post-processing the disparity map

  •  After computing the initial disparity map, both algorithms post-process it, including median filtering and connected-component detection. The median filter effectively removes isolated noise points, while connected-component detection finds the small blobs caused by noise. Both BM and SGBM expose the parameters speckleWindowSize and speckleRange: speckleWindowSize is the size threshold for a connected component, i.e. the maximum number of pixels a component may contain and still be treated as a speckle, and speckleRange is the threshold that decides whether two neighbouring pixels belong to the same component. The rough procedure is:
  1. For the current pixel, compute the difference diff between it and each of its 4-neighbours; if diff <= speckleRange, that neighbour belongs to the same connected component, so mark it. Then repeat the test around that neighbour, and so on, until no 4-neighbour of any frontier pixel satisfies the condition.
  2. After step 1, count the marked pixels. If count <= speckleWindowSize, the component is a small blob, so set the current pixel to newValue (an invalid disparity, usually negative or 0); otherwise the component is large and is left untouched. At the same time, record in a table rtype[label] whether each label marks a small blob: rtype[label] == 1 means the pixels with that label belong to a small blob, 0 means they do not.
  3. When processing the next pixel, first check whether it has already been labelled:
    if it has, consult rtype[label]; if the label marks a small blob, set the pixel directly to newValue, otherwise do nothing and continue with the next pixel;
    if it has not been labelled, process it as in step 1.
  4. Once all pixels have been processed, the qualifying regions are set to newValue; their disparities can later be re-estimated, e.g. by hole filling.

OpenCV provides the corresponding API: void filterSpeckles(InputOutputArray img, double newVal, int maxSpeckleSize, double maxDiff, InputOutputArray buf = noArray())

Its source is below; instantiate the template with the data type matching your disparity or depth map:

typedef cv::Point_<short> Point2s;
template <typename T> void filterSpecklesImpl(cv::Mat& img, int newVal, int maxSpeckleSize, int maxDiff, cv::Mat& _buf)
{
    using namespace cv;

    int width = img.cols, height = img.rows, npixels = width*height;
    size_t bufSize = npixels*(int)(sizeof(Point2s) + sizeof(int) + sizeof(uchar));
    if (!_buf.isContinuous() || _buf.empty() || _buf.cols*_buf.rows*_buf.elemSize() < bufSize)
        _buf.create(1, (int)bufSize, CV_8U);

    uchar* buf = _buf.ptr();
    int i, j, dstep = (int)(img.step / sizeof(T));
    int* labels = (int*)buf;
    buf += npixels * sizeof(labels[0]);
    Point2s* wbuf = (Point2s*)buf;
    buf += npixels * sizeof(wbuf[0]);
    uchar* rtype = (uchar*)buf;
    int curlabel = 0;

    // clear out label assignments
    memset(labels, 0, npixels * sizeof(labels[0]));

    for (i = 0; i < height; i++)
    {
        T* ds = img.ptr<T>(i);
        int* ls = labels + width*i;

        for (j = 0; j < width; j++)
        {
            if (ds[j] != newVal)   // not a bad disparity
            {
                if (ls[j])     // has a label, check for bad label
                {
                    if (rtype[ls[j]]) // small region, zero out disparity
                        ds[j] = (T)newVal;
                }
                // no label, assign and propagate
                else
                {
                    Point2s* ws = wbuf; // initialize wavefront
                    Point2s p((short)j, (short)i);  // current pixel
                    curlabel++; // next label
                    int count = 0;  // current region size
                    ls[j] = curlabel;

                    // wavefront propagation
                    while (ws >= wbuf) // wavefront not empty
                    {
                        count++;
                        // put neighbors onto wavefront
                        T* dpp = &img.at<T>(p.y, p.x); //current pixel value
                        T dp = *dpp;
                        int* lpp = labels + width*p.y + p.x; //current label value

                        //bot
                        if (p.y < height - 1 && !lpp[+width] && dpp[+dstep] != newVal && std::abs(dp - dpp[+dstep]) <= maxDiff)
                        {
                            lpp[+width] = curlabel;
                            *ws++ = Point2s(p.x, p.y + 1);
                        }
                        //top
                        if (p.y > 0 && !lpp[-width] && dpp[-dstep] != newVal && std::abs(dp - dpp[-dstep]) <= maxDiff)
                        {
                            lpp[-width] = curlabel;
                            *ws++ = Point2s(p.x, p.y - 1);
                        }
                        //right
                        if (p.x < width - 1 && !lpp[+1] && dpp[+1] != newVal && std::abs(dp - dpp[+1]) <= maxDiff)
                        {
                            lpp[+1] = curlabel;
                            *ws++ = Point2s(p.x + 1, p.y);
                        }
                        //left
                        if (p.x > 0 && !lpp[-1] && dpp[-1] != newVal && std::abs(dp - dpp[-1]) <= maxDiff)
                        {
                            lpp[-1] = curlabel;
                            *ws++ = Point2s(p.x - 1, p.y);
                        }
                        

                        // pop most recent and propagate
                        // NB: could try least recent, maybe better convergence
                        p = *--ws;
                    }

                    // assign label type
                    if (count <= maxSpeckleSize)   // speckle region
                    {
                        rtype[ls[j]] = 1;   // small region label
                        ds[j] = (T)newVal;
                    }
                    else
                        rtype[ls[j]] = 0;   // large region label
                }
            }
        }
    }
}
 Or the version I reorganized myself:

typedef cv::Point_<short> Point2s;
template <typename T> void myFilterSpeckles(cv::Mat &img, int newVal, int maxSpeckleSize, int maxDiff)
{
    int width = img.cols;
    int height = img.rows;
    int imgSize = width*height;
    int *pLabelBuf = (int*)malloc(sizeof(int)*imgSize);//label buffer
    Point2s *pPointBuf = (Point2s*)malloc(sizeof(Point2s)*imgSize);//point-coordinate buffer (sizeof(Point2s), i.e. two shorts per point)
    uchar *pTypeBuf = (uchar*)malloc(sizeof(uchar)*imgSize);//small-blob flag buffer
    //initialize the label buffer
    int currentLabel = 0;
    memset(pLabelBuf, 0, sizeof(int)*imgSize);

    for (int i = 0; i < height; i++)
    {
        T *pData = img.ptr<T>(i);
        int *pLabel = pLabelBuf + width*i;
        for (int j = 0; j < width; j++)
        {
            if (pData[j] != newVal)
            {
                if (pLabel[j])
                {
                    if (pTypeBuf[pLabel[j]])
                    {
                        pData[j] = (T)newVal;
                    }
                }
                else
                {
                    Point2s *pWave = pPointBuf;
                    Point2s curPoint((short)j, (short)i);
                    currentLabel++;
                    int count = 0;
                    pLabel[j] = currentLabel;
                    while (pWave >= pPointBuf)
                    {
                        count++;
                        T *pCurPos = &img.at<T>(curPoint.y, curPoint.x);
                        T curValue = *pCurPos;
                        int *pCurLabel = pLabelBuf + width*curPoint.y + curPoint.x;
                        //bot
                        if (curPoint.y < height - 1 && !pCurLabel[+width] && pCurPos[+width] != newVal  && abs(curValue - pCurPos[+width]) <= maxDiff)
                        {
                            pCurLabel[+width] = currentLabel;
                            *pWave++ = Point2s(curPoint.x, curPoint.y + 1);
                        }
                        //top
                        if (curPoint.y > 0 && !pCurLabel[-width] && pCurPos[-width] != newVal && abs(curValue - pCurPos[-width]) <= maxDiff)
                        {
                            pCurLabel[-width] = currentLabel;
                            *pWave++ = Point2s(curPoint.x, curPoint.y - 1);
                        }
                        //right
                        if (curPoint.x < width-1 && !pCurLabel[+1] && pCurPos[+1] != newVal  && abs(curValue - pCurPos[+1]) <= maxDiff)
                        {
                            pCurLabel[+1] = currentLabel;
                            *pWave++ = Point2s(curPoint.x + 1, curPoint.y);
                        }
                        //left
                        if (curPoint.x > 0 && !pCurLabel[-1] && pCurPos[-1] != newVal && abs(curValue - pCurPos[-1]) <= maxDiff)
                        {
                            pCurLabel[-1] = currentLabel;
                            *pWave++ = Point2s(curPoint.x - 1, curPoint.y);
                        }

                        --pWave;
                        curPoint = *pWave;
                    }

                    if (count <= maxSpeckleSize)
                    {
                        pTypeBuf[pLabel[j]] = 1;
                        pData[j] = (T)newVal;
                    }
                    else
                    {
                        pTypeBuf[pLabel[j]] = 0;
                    }
                }
            }
        }
    }

    free(pLabelBuf);
    free(pPointBuf);
    free(pTypeBuf);
}

 

In the disparity map below, the top-left part contains 7 small blobs. With speckleWindowSize and speckleRange set to 50 and 32 respectively, the result of connected-component detection is shown on the right: all the small blobs are detected, making it easy to fill them from the surrounding disparities afterwards. One drawback is that other parts of the image, especially boundary regions, are also detected as small blobs, so the subsequent filling may smooth out the boundaries.

    

 

