Digital Image Processing: Histogram Equalization


Histogram Equalization

Histogram equalization is a technique in image processing that uses the image histogram to adjust contrast.

 

The effect histogram equalization aims to achieve:

 

Basic idea: transform the histogram of the original image into a uniform distribution. This widens the dynamic range of the pixel gray values and thus enhances the overall contrast of the image.

The method used is a gray-level transformation: s = T(r)

Principle:

s = T(r),  0 ≤ r ≤ 1

T(r) must satisfy the following two conditions:

(1) T(r) is single-valued and monotonically increasing on the interval 0 ≤ r ≤ 1;

(2) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.

 

Condition (1) guarantees that the gray levels of the original image keep their ordering from black to white (or from white to black) after the transformation.

Condition (2) guarantees that the dynamic range of the gray values is the same before and after the transformation.

 

Let p_r(r) be the probability density function of r and p_s(s) that of s. With p_r(r) and T(r) known, and T⁻¹(s) satisfying condition (1), we have

p_s(s) = p_r(r) |dr/ds|

An important transformation function is the cumulative distribution function of r:

s = T(r) = ∫₀ʳ p_r(w) dw

The derivative of a definite integral with respect to its upper limit is the integrand evaluated at that limit (Leibniz rule), so

ds/dr = dT(r)/dr = p_r(r)

Substituting back gives p_s(s) = p_r(r) · |dr/ds| = p_r(r) / p_r(r) = 1, i.e. s is uniformly distributed on [0, 1].

For discrete values, probabilities replace densities:

p_r(r_k) = n_k / n

where r_k is the k-th gray level, k = 0, 1, 2, …, L-1;

n_k is the number of pixels in the image with gray level r_k;

n is the total number of pixels in the image.

The discrete form of the transformation function is:

s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n,  k = 0, 1, 2, …, L-1

This mapping is called histogram equalization: each pixel with gray level r_k in the input image is mapped to the corresponding gray level s_k in the output image.

Implementation code:

/******************************************************************************
*    Purpose:        gray-level histogram equalization
*    Parameters:
*        pixel        source pixel array (32 bits per pixel, gray value in each channel)
*        tempPixel    destination pixel array for the equalized image
*        width        source image width
*        height       source image height
******************************************************************************/
void GrayEqualize(BYTE* pixel, BYTE* tempPixel, UINT width, UINT height)
{
    // gray-level mapping table
    BYTE map[256];
    long lCounts[256];

    memset(lCounts, 0, sizeof(long) * 256);

    // count the pixels at each gray level
    for (UINT i = 0; i < width * height; i++)
    {
        int x = pixel[i * 4];
        lCounts[x]++;
    }

    // temporary value used during accumulation
    long lTemp;

    for (int i = 0; i < 256; i++)
    {
        lTemp = 0;
        for (int j = 0; j <= i; j++)
            lTemp += lCounts[j];

        map[i] = (BYTE)(lTemp * 255.0f / width / height);
    }

    // look up the transformed value directly in the mapping table
    for (UINT i = 0; i < width * height; i++)
    {
        int x = pixel[i * 4];

        tempPixel[i*4] = tempPixel[i*4+1] = tempPixel[i*4+2] = map[x];
        tempPixel[i*4+3] = 255;
    }
}

 

 

 

Color image histogram equalization:

 


 

OpenCV code:

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main( int argc, const char** argv )
{
       Mat img = imread("MyPic.JPG", CV_LOAD_IMAGE_COLOR); //open and read the image

       if (img.empty()) //if unsuccessful, exit the program
       {
            cout << "Image cannot be loaded..!!" << endl;
            return -1;
       }

       vector<Mat> channels; 
       Mat img_hist_equalized;

       cvtColor(img, img_hist_equalized, CV_BGR2YCrCb); //change the color image from BGR to YCrCb format

       split(img_hist_equalized,channels); //split the image into channels

       equalizeHist(channels[0], channels[0]); //equalize histogram on the 1st channel (Y)

       merge(channels, img_hist_equalized); //merge 3 channels including the modified 1st channel into one image

       cvtColor(img_hist_equalized, img_hist_equalized, CV_YCrCb2BGR); //change the color image from YCrCb to BGR format (to display image properly)

       //create windows
       namedWindow("Original Image", CV_WINDOW_AUTOSIZE);
       namedWindow("Histogram Equalized", CV_WINDOW_AUTOSIZE);

       //show the image
       imshow("Original Image", img);
       imshow("Histogram Equalized", img_hist_equalized);

       waitKey(0); //wait for key press

       destroyAllWindows(); //destroy all open windows

       return 0;
}

Functions used in the code:

 New OpenCV functions

  • cvtColor(img, img_hist_equalized, CV_BGR2YCrCb)
This line converts 'img' from the BGR color space to the YCrCb color space and stores the result in 'img_hist_equalized'.
 
In the above example, I am going to equalize the histogram of a color image. In this scenario, I have to equalize the histogram of the intensity component only, not the color components. So the BGR format cannot be used, because all three of its planes represent the color components blue, green and red. Instead, I convert the original BGR color space to the YCrCb color space, because its 1st plane represents the intensity of the image whereas the other planes represent the color components.
 
  • void split(const Mat& m, vector<Mat>& mv )
This function splits the multi-channel array 'm' into separate single-channel arrays and stores them in the vector 'mv'.
 
Argument list
  • const Mat& m - Input multi-channel array
  •  vector<Mat>& mv - vector that stores each channel of the input array
 
  • equalizeHist(channels[0], channels[0]);
Here we are only interested in the 1st channel (Y) because it represents the intensity information, whereas the other two channels (Cr and Cb) represent the color components. So we equalize the histogram of the 1st channel using the OpenCV built-in function 'equalizeHist(..)', and the other two channels remain unchanged.
 
  • void merge(const vector<Mat>& mv, OutputArray dst )
This function does the reverse of the split function: it takes a vector of channels and creates a single multi-channel array.
Argument list
  • const vector<Mat>& mv - vector that holds the channels to merge. All channels must have the same size and the same depth
  • OutputArray dst - stores the destination multi-channel array
 
  • cvtColor(img_hist_equalized, img_hist_equalized, CV_YCrCb2BGR)
This line converts the image from the YCrCb color space back to the BGR color space. This conversion is essential because 'imshow(..)' interprets a 3-channel image as BGR, so displaying the YCrCb data directly would show distorted colors.
 
This is the end of the explanation of new OpenCV functions, found in the above sample code. If you are not familiar with other OpenCV functions, please refer to the previous lessons.
 
 

Reference blog: http://opencv-srf.blogspot.jp/2013/08/histogram-equalization.html

