Basic Image Processing: Combining Spatial Enhancement Methods (Java: Laplacian Sharpening, Sobel Edge Detection, Mean Filtering, Gamma Transformation)


      Anyone who has read the third edition of Gonzalez's Digital Image Processing knows that it covers many fundamental image-processing algorithms. Today we borrow one of its case studies, combining spatial enhancement methods, to bring several common algorithms together and see what happens when they are chained.

      First, the goal: we will chain Laplacian sharpening, Sobel edge detection, mean filtering, and a gamma transformation into a single enhancement pipeline.

   We also borrow the book's full-body skeleton image to test these algorithms, so we can judge whether our implementations are correct by comparing the output against the figures in the book. Now let's implement the pipeline step by step.

 

    Step 1: Laplacian Sharpening

          We won't cover the underlying theory in detail here. The Laplacian is a second-order derivative operator; convolving it with an image highlights the regions where the gray level changes abruptly, that is, the boundaries between areas of different intensity. The code and the resulting image follow.
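For reference, the discrete Laplacian that the kernel below encodes is the standard definition (background material, not from the original post):

    \nabla^2 f(x,y) = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4f(x,y)

The kernel { 0, -1, 0, -1, 4, -1, 0, -1, 0 } used in the code is the negative of this mask, which is why its response can later be added directly to the original image to sharpen it.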

public BufferedImage laplaceProcess(BufferedImage src) {

        // Laplacian kernel (negative of the discrete Laplacian; center positive)
        int[] LAPLACE = new int[] { 0, -1, 0, -1, 4, -1, 0, -1, 0 };

        int width = src.getWidth();
        int height = src.getHeight();

        int[] pixels = new int[width * height];
        int[] outPixels = new int[width * height];

        int type = src.getType();
        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            src.getRaster().getDataElements(0, 0, width, height, pixels);
        } else {
            src.getRGB(0, 0, width, height, pixels, 0, width);
        }

        int k0 = LAPLACE[0], k1 = LAPLACE[1], k2 = LAPLACE[2];
        int k3 = LAPLACE[3], k4 = LAPLACE[4], k5 = LAPLACE[5];
        int k6 = LAPLACE[6], k7 = LAPLACE[7], k8 = LAPLACE[8];
        int offset = 0;

        int sr = 0, sg = 0, sb = 0;
        for (int row = 1; row < height - 1; row++) {
            offset = row * width;
            for (int col = 1; col < width - 1; col++) {
                // red
                sr = k0 * ((pixels[offset - width + col - 1] >> 16) & 0xff)
                        + k1 * ((pixels[offset - width + col] >> 16) & 0xff)
                        + k2
                        * ((pixels[offset - width + col + 1] >> 16) & 0xff)
                        + k3 * ((pixels[offset + col - 1] >> 16) & 0xff) + k4
                        * ((pixels[offset + col] >> 16) & 0xff) + k5
                        * ((pixels[offset + col + 1] >> 16) & 0xff) + k6
                        * ((pixels[offset + width + col - 1] >> 16) & 0xff)
                        + k7 * ((pixels[offset + width + col] >> 16) & 0xff)
                        + k8
                        * ((pixels[offset + width + col + 1] >> 16) & 0xff);
                // green
                sg = k0 * ((pixels[offset - width + col - 1] >> 8) & 0xff) + k1
                        * ((pixels[offset - width + col] >> 8) & 0xff) + k2
                        * ((pixels[offset - width + col + 1] >> 8) & 0xff) + k3
                        * ((pixels[offset + col - 1] >> 8) & 0xff) + k4
                        * ((pixels[offset + col] >> 8) & 0xff) + k5
                        * ((pixels[offset + col + 1] >> 8) & 0xff) + k6
                        * ((pixels[offset + width + col - 1] >> 8) & 0xff) + k7
                        * ((pixels[offset + width + col] >> 8) & 0xff) + k8
                        * ((pixels[offset + width + col + 1] >> 8) & 0xff);
                // blue
                sb = k0 * (pixels[offset - width + col - 1] & 0xff) + k1
                        * (pixels[offset - width + col] & 0xff) + k2
                        * (pixels[offset - width + col + 1] & 0xff) + k3
                        * (pixels[offset + col - 1] & 0xff) + k4
                        * (pixels[offset + col] & 0xff) + k5
                        * (pixels[offset + col + 1] & 0xff) + k6
                        * (pixels[offset + width + col - 1] & 0xff) + k7
                        * (pixels[offset + width + col] & 0xff) + k8
                        * (pixels[offset + width + col + 1] & 0xff);
                outPixels[offset + col] = (0xff << 24) | (clamp(sr) << 16)
                        | (clamp(sg) << 8) | clamp(sb);
            }
        }

        BufferedImage dest = new BufferedImage(width, height,
                BufferedImage.TYPE_INT_ARGB);

        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            dest.getRaster().setDataElements(0, 0, width, height, outPixels);
        } else {
            dest.setRGB(0, 0, width, height, outPixels, 0, width);
        }

        return dest;
    }

Note: the original image is on the left and the result on the right. Because the source image is fairly smooth, the Laplacian output shows only scattered bright points, and those points correspond exactly to the locations in the original where the gray level changes abruptly.

 

Next, we add the Laplacian result back to the original image, which gives a visibly sharper picture. Code and result below.
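In formula form (Gonzalez's sharpening equation; since the kernel's center coefficient is positive, the mask response is simply added):

    g(x,y) = f(x,y) - \nabla^2 f(x,y)

which is exactly what the loop below computes, channel by channel.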

/** Add the Laplacian result back to the original image **/
    public BufferedImage laplaceAddProcess(BufferedImage src) {

        // Laplacian kernel (negative of the discrete Laplacian; center positive)
        int[] LAPLACE = new int[] { 0, -1, 0, -1, 4, -1, 0, -1, 0 };

        int width = src.getWidth();
        int height = src.getHeight();

        int[] pixels = new int[width * height];
        int[] outPixels = new int[width * height];

        int type = src.getType();
        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            src.getRaster().getDataElements(0, 0, width, height, pixels);
        } else {
            src.getRGB(0, 0, width, height, pixels, 0, width);
        }

        int k0 = LAPLACE[0], k1 = LAPLACE[1], k2 = LAPLACE[2];
        int k3 = LAPLACE[3], k4 = LAPLACE[4], k5 = LAPLACE[5];
        int k6 = LAPLACE[6], k7 = LAPLACE[7], k8 = LAPLACE[8];
        int offset = 0;

        int sr = 0, sg = 0, sb = 0;
        int r = 0, g = 0, b = 0;
        for (int row = 1; row < height - 1; row++) {
            offset = row * width;
            for (int col = 1; col < width - 1; col++) {

                r = (pixels[offset + col] >> 16) & 0xff;
                g = (pixels[offset + col] >> 8) & 0xff;
                b = (pixels[offset + col]) & 0xff;
                // red
                sr = k0 * ((pixels[offset - width + col - 1] >> 16) & 0xff)
                        + k1 * ((pixels[offset - width + col] >> 16) & 0xff)
                        + k2
                        * ((pixels[offset - width + col + 1] >> 16) & 0xff)
                        + k3 * ((pixels[offset + col - 1] >> 16) & 0xff) + k4
                        * ((pixels[offset + col] >> 16) & 0xff) + k5
                        * ((pixels[offset + col + 1] >> 16) & 0xff) + k6
                        * ((pixels[offset + width + col - 1] >> 16) & 0xff)
                        + k7 * ((pixels[offset + width + col] >> 16) & 0xff)
                        + k8
                        * ((pixels[offset + width + col + 1] >> 16) & 0xff);
                // green
                sg = k0 * ((pixels[offset - width + col - 1] >> 8) & 0xff) + k1
                        * ((pixels[offset - width + col] >> 8) & 0xff) + k2
                        * ((pixels[offset - width + col + 1] >> 8) & 0xff) + k3
                        * ((pixels[offset + col - 1] >> 8) & 0xff) + k4
                        * ((pixels[offset + col] >> 8) & 0xff) + k5
                        * ((pixels[offset + col + 1] >> 8) & 0xff) + k6
                        * ((pixels[offset + width + col - 1] >> 8) & 0xff) + k7
                        * ((pixels[offset + width + col] >> 8) & 0xff) + k8
                        * ((pixels[offset + width + col + 1] >> 8) & 0xff);
                // blue
                sb = k0 * (pixels[offset - width + col - 1] & 0xff) + k1
                        * (pixels[offset - width + col] & 0xff) + k2
                        * (pixels[offset - width + col + 1] & 0xff) + k3
                        * (pixels[offset + col - 1] & 0xff) + k4
                        * (pixels[offset + col] & 0xff) + k5
                        * (pixels[offset + col + 1] & 0xff) + k6
                        * (pixels[offset + width + col - 1] & 0xff) + k7
                        * (pixels[offset + width + col] & 0xff) + k8
                        * (pixels[offset + width + col + 1] & 0xff);
                // add the filter response to the original pixel values
                r += sr;
                g += sg;
                b += sb;
                outPixels[offset + col] = (0xff << 24) | (clamp(r) << 16)
                        | (clamp(g) << 8) | clamp(b);

            }
        }

        BufferedImage dest = new BufferedImage(width, height,
                BufferedImage.TYPE_INT_ARGB);

        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            dest.getRaster().setDataElements(0, 0, width, height, outPixels);
        } else {
            dest.setRGB(0, 0, width, height, outPixels, 0, width);
        }
        return dest;
    }
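
As an aside, the same 3*3 convolution loop is repeated in every method of this article. Here is a minimal sketch of how it could be factored into one reusable helper (this refactoring is my own suggestion, not part of the original code; clamp is the helper defined near the end of the article):

    public int[] convolve3x3(int[] pixels, int width, int height, int[] k) {
        int[] out = new int[width * height];
        for (int row = 1; row < height - 1; row++) {
            for (int col = 1; col < width - 1; col++) {
                int sr = 0, sg = 0, sb = 0;
                // walk the 3*3 neighborhood; ki indexes the kernel row by row
                for (int i = -1, ki = 0; i <= 1; i++) {
                    for (int j = -1; j <= 1; j++, ki++) {
                        int p = pixels[(row + i) * width + (col + j)];
                        sr += k[ki] * ((p >> 16) & 0xff);
                        sg += k[ki] * ((p >> 8) & 0xff);
                        sb += k[ki] * (p & 0xff);
                    }
                }
                out[row * width + col] = (0xff << 24) | (clamp(sr) << 16)
                        | (clamp(sg) << 8) | clamp(sb);
            }
        }
        return out;
    }

With this helper, laplaceProcess reduces to a single call with the LAPLACE kernel, and the sobelProcess method below to two calls whose responses feed the gradient-magnitude formula.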

 

Again, the original is on the left and the original-plus-Laplacian result on the right. It looks a bit brighter and crisper than the original because the regions with abrupt gray-level changes have been reinforced.

 

Step 2: Sobel Edge Extraction

  Next we return to the original image and extract its edges. Sobel is a first-order derivative operator; convolving its two kernels with the image yields the horizontal and vertical gradient components, from which the edge information is obtained. Code and result below.
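The two kernel responses Gx and Gy are combined into the usual gradient magnitude; this is the formula applied near the end of the method below:

    G = \sqrt{G_x^2 + G_y^2}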

public BufferedImage sobelProcess(BufferedImage src) {

        // Sobel kernels: sobel_y approximates the gradient in y, sobel_x in x
        int[] sobel_y = new int[] { -1, -2, -1, 0, 0, 0, 1, 2, 1 };
        int[] sobel_x = new int[] { -1, 0, 1, -2, 0, 2, -1, 0, 1 };

        int width = src.getWidth();
        int height = src.getHeight();

        int[] pixels = new int[width * height];
        int[] outPixels = new int[width * height];

        int type = src.getType();
        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            src.getRaster().getDataElements(0, 0, width, height, pixels);
        } else {
            src.getRGB(0, 0, width, height, pixels, 0, width);
        }

        int offset = 0;
        int x0 = sobel_x[0];
        int x1 = sobel_x[1];
        int x2 = sobel_x[2];
        int x3 = sobel_x[3];
        int x4 = sobel_x[4];
        int x5 = sobel_x[5];
        int x6 = sobel_x[6];
        int x7 = sobel_x[7];
        int x8 = sobel_x[8];

        int k0 = sobel_y[0];
        int k1 = sobel_y[1];
        int k2 = sobel_y[2];
        int k3 = sobel_y[3];
        int k4 = sobel_y[4];
        int k5 = sobel_y[5];
        int k6 = sobel_y[6];
        int k7 = sobel_y[7];
        int k8 = sobel_y[8];

        int yr = 0, yg = 0, yb = 0;
        int xr = 0, xg = 0, xb = 0;
        int r = 0, g = 0, b = 0;

        for (int row = 1; row < height - 1; row++) {
            offset = row * width;
            for (int col = 1; col < width - 1; col++) {

                // red
                yr = k0 * ((pixels[offset - width + col - 1] >> 16) & 0xff)
                        + k1 * ((pixels[offset - width + col] >> 16) & 0xff)
                        + k2
                        * ((pixels[offset - width + col + 1] >> 16) & 0xff)
                        + k3 * ((pixels[offset + col - 1] >> 16) & 0xff) + k4
                        * ((pixels[offset + col] >> 16) & 0xff) + k5
                        * ((pixels[offset + col + 1] >> 16) & 0xff) + k6
                        * ((pixels[offset + width + col - 1] >> 16) & 0xff)
                        + k7 * ((pixels[offset + width + col] >> 16) & 0xff)
                        + k8
                        * ((pixels[offset + width + col + 1] >> 16) & 0xff);

                xr = x0 * ((pixels[offset - width + col - 1] >> 16) & 0xff)
                        + x1 * ((pixels[offset - width + col] >> 16) & 0xff)
                        + x2
                        * ((pixels[offset - width + col + 1] >> 16) & 0xff)
                        + x3 * ((pixels[offset + col - 1] >> 16) & 0xff) + x4
                        * ((pixels[offset + col] >> 16) & 0xff) + x5
                        * ((pixels[offset + col + 1] >> 16) & 0xff) + x6
                        * ((pixels[offset + width + col - 1] >> 16) & 0xff)
                        + x7 * ((pixels[offset + width + col] >> 16) & 0xff)
                        + x8
                        * ((pixels[offset + width + col + 1] >> 16) & 0xff);

                // green
                yg = k0 * ((pixels[offset - width + col - 1] >> 8) & 0xff) + k1
                        * ((pixels[offset - width + col] >> 8) & 0xff) + k2
                        * ((pixels[offset - width + col + 1] >> 8) & 0xff) + k3
                        * ((pixels[offset + col - 1] >> 8) & 0xff) + k4
                        * ((pixels[offset + col] >> 8) & 0xff) + k5
                        * ((pixels[offset + col + 1] >> 8) & 0xff) + k6
                        * ((pixels[offset + width + col - 1] >> 8) & 0xff) + k7
                        * ((pixels[offset + width + col] >> 8) & 0xff) + k8
                        * ((pixels[offset + width + col + 1] >> 8) & 0xff);

                xg = x0 * ((pixels[offset - width + col - 1] >> 8) & 0xff) + x1
                        * ((pixels[offset - width + col] >> 8) & 0xff) + x2
                        * ((pixels[offset - width + col + 1] >> 8) & 0xff) + x3
                        * ((pixels[offset + col - 1] >> 8) & 0xff) + x4
                        * ((pixels[offset + col] >> 8) & 0xff) + x5
                        * ((pixels[offset + col + 1] >> 8) & 0xff) + x6
                        * ((pixels[offset + width + col - 1] >> 8) & 0xff) + x7
                        * ((pixels[offset + width + col] >> 8) & 0xff) + x8
                        * ((pixels[offset + width + col + 1] >> 8) & 0xff);
                // blue
                yb = k0 * (pixels[offset - width + col - 1] & 0xff) + k1
                        * (pixels[offset - width + col] & 0xff) + k2
                        * (pixels[offset - width + col + 1] & 0xff) + k3
                        * (pixels[offset + col - 1] & 0xff) + k4
                        * (pixels[offset + col] & 0xff) + k5
                        * (pixels[offset + col + 1] & 0xff) + k6
                        * (pixels[offset + width + col - 1] & 0xff) + k7
                        * (pixels[offset + width + col] & 0xff) + k8
                        * (pixels[offset + width + col + 1] & 0xff);

                xb = x0 * (pixels[offset - width + col - 1] & 0xff) + x1
                        * (pixels[offset - width + col] & 0xff) + x2
                        * (pixels[offset - width + col + 1] & 0xff) + x3
                        * (pixels[offset + col - 1] & 0xff) + x4
                        * (pixels[offset + col] & 0xff) + x5
                        * (pixels[offset + col + 1] & 0xff) + x6
                        * (pixels[offset + width + col - 1] & 0xff) + x7
                        * (pixels[offset + width + col] & 0xff) + x8
                        * (pixels[offset + width + col + 1] & 0xff);

                // Sobel gradient magnitude: G = sqrt(Gx^2 + Gy^2)
                r = (int) Math.sqrt(yr * yr + xr * xr);
                g = (int) Math.sqrt(yg * yg + xg * xg);
                b = (int) Math.sqrt(yb * yb + xb * xb);

                outPixels[offset + col] = (0xff << 24) | (clamp(r) << 16)
                        | (clamp(g) << 8) | clamp(b);
            }
        }

        BufferedImage dest = new BufferedImage(width, height,
                BufferedImage.TYPE_INT_ARGB);

        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            dest.getRaster().setDataElements(0, 0, width, height, outPixels);
        } else {
            dest.setRGB(0, 0, width, height, outPixels, 0, width);
        }
        return dest;

    }

Note: the original is on the left and the Sobel result on the right; the edge-extraction effect is clearly visible.

 

Step 3: Mean Filtering

This step builds on the image produced by the Sobel transform. Mean filtering replaces each pixel with the average of the pixel values in its neighborhood; common window sizes are 3*3 and 5*5. We use a 5*5 filter here, so each pixel is reassigned the average of the 25 pixels in the 5*5 window centered on it. Code and result below.
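Formally, with radius r = 2 (a 5*5 window) and border pixels replicated, the filter computes:

    g(x,y) = \frac{1}{(2r+1)^2} \sum_{i=-r}^{r} \sum_{j=-r}^{r} f(x+j,\, y+i)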

/** Mean filter (5*5 box average) **/
    public BufferedImage meanValueProcess(BufferedImage src) {

        BufferedImage image = this.sobelProcess(src); // image already processed by Sobel

        int width = image.getWidth();
        int height = image.getHeight();

        int[] pixels = new int[width * height];
        int[] outPixels = new int[width * height];

        int type = image.getType();
        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            image.getRaster().getDataElements(0, 0, width, height, pixels);
        } else {
            image.getRGB(0, 0, width, height, pixels, 0, width);
        }

        // kernel radius for the mean filter: a 5*5 mean needs radius 2
        int radius = 2;
        int total = (2 * radius + 1) * (2 * radius + 1);

        int r = 0, g = 0, b = 0;
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int sum = 0;
                for (int i = -radius; i <= radius; i++) {
                    int roffset = row + i;
                    roffset = (roffset < 0) ? 0
                            : (roffset >= height ? height - 1 : roffset);

                    for (int j = -radius; j <= radius; j++) {

                        int coffset = col + j;
                        coffset = (coffset < 0) ? 0
                                : (coffset >= width ? width - 1 : coffset);

                        int pixel = pixels[roffset * width + coffset];

                        // the Sobel result of a grayscale source has r == g == b,
                        // so sampling the red channel alone is enough here
                        r = (pixel >> 16) & 0XFF;

                        sum += r;
                    }
                }

                // reuse the single gray average for all three channels
                r = sum / total;
                g = sum / total;
                b = sum / total;

                outPixels[row * width + col] = (255 << 24) | (clamp(r) << 16)
                        | (clamp(g) << 8) | clamp(b);
            }
        }

        BufferedImage dest = new BufferedImage(width, height,
                BufferedImage.TYPE_INT_ARGB);

        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            dest.getRaster().setDataElements(0, 0, width, height, outPixels);
        } else {
            dest.setRGB(0, 0, width, height, outPixels, 0, width);
        }

        return dest;
    }

Note: the original is on the left; on the right is the Sobel result after 5*5 mean filtering. As expected, the mean filter blurs the image.

 

Step 4: Mathematical Operations

   Before the gamma transformation, we multiply the Laplacian-sharpened image by the image obtained from Sobel plus mean filtering, and then add the product to the original image. The combined image is what the gamma transformation will operate on. The code and result of these arithmetic steps come first.
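In formula form, writing f for the original image, f_L for the sharpened image from step 1, and f_S for the smoothed Sobel image from step 3, the method below computes per channel (with the product scaled back to the 0~255 range):

    g(x,y) = f(x,y) + \frac{f_L(x,y) \cdot f_S(x,y)}{255}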

/** Arithmetic combination of the intermediate images **/
    public BufferedImage mathProcess(BufferedImage src) {

        // image from step 1: the original plus its Laplacian
        BufferedImage lapsImage = this.laplaceAddProcess(src);

        // image from step 3: Sobel followed by the 5*5 mean filter
        BufferedImage meanImage = this.meanValueProcess(src);

        int type = src.getType();
        int width = src.getWidth();
        int height = src.getHeight();

        // pixels of the original image
        int[] pixels = new int[width * height];
        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            src.getRaster().getDataElements(0, 0, width, height, pixels);
        } else {
            src.getRGB(0, 0, width, height, pixels, 0, width);
        }

        // pixels of the Laplacian-sharpened image
        int[] lapsPixels = new int[width * height];
        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            lapsImage.getRaster().getDataElements(0, 0, width, height,
                    lapsPixels);
        } else {
            lapsImage.getRGB(0, 0, width, height, lapsPixels, 0, width);
        }

        // pixels of the Sobel + mean-filtered image
        int[] meanPixels = new int[width * height];
        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            meanImage.getRaster().getDataElements(0, 0, width, height,
                    meanPixels);
        } else {
            meanImage.getRGB(0, 0, width, height, meanPixels, 0, width);
        }

        int[] outPixels = new int[width * height];

        // multiply the two images pixel by pixel, then add the original
        int lr = 0, lg = 0, lb = 0;
        int mr = 0, mg = 0, mb = 0;
        int or = 0, og = 0, ob = 0;
        int r = 0, g = 0, b = 0;
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int lpixel = lapsPixels[row * width + col];
                int mpixel = meanPixels[row * width + col];

                // original image pixel
                int opixel = pixels[row * width + col];

                lr = (lpixel >> 16) & 0XFF;
                mr = (mpixel >> 16) & 0XFF;
                or = (opixel >> 16) & 0XFF;

                lg = (lpixel >> 8) & 0XFF;
                mg = (mpixel >> 8) & 0XFF;
                og = (opixel >> 8) & 0XFF;

                lb = (lpixel) & 0XFF;
                mb = (mpixel) & 0XFF;
                ob = (opixel) & 0XFF;

                // multiply the images and scale the product back to 0~255
                r = (lr * mr) / 255;
                g = (lg * mg) / 255;
                b = (lb * mb) / 255;

                // add the product to the original image
                r = r + or;
                g = g + og;
                b = b + ob;

                outPixels[row * width + col] = (255 << 24) | (clamp(r) << 16)
                        | (clamp(g) << 8) | (clamp(b));
            }
        }

        BufferedImage dest = new BufferedImage(width, height,
                BufferedImage.TYPE_INT_ARGB);

        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            dest.getRaster().setDataElements(0, 0, width, height, outPixels);
        } else {
            dest.setRGB(0, 0, width, height, outPixels, 0, width);
        }

        return dest;
    }
    
    private int clamp(int value) {
        return value > 255 ? 255 : (value < 0 ? 0 : value);
    }

Note: the original is on the left; on the right, the product of the Laplacian-sharpened image and the Sobel-plus-mean-filtered image has been added back to the original.

 

Step 5: Gamma Transformation

     The gamma transformation, also called the power-law transformation, raises each normalized gray level to a fixed power, which can compress or expand the gray levels. Applied to our combined image it increases the contrast. The code and the final result follow.
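With input level r in 0~255 and exponent gamma (0.5 in the code below), each output level is:

    s = 255 \cdot \left(\frac{r}{255}\right)^{\gamma}

The code precomputes this mapping in a 256-entry lookup table instead of calling Math.pow for every pixel.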

/** Gamma transformation **/
    public BufferedImage gammaProcess(BufferedImage src) {

        BufferedImage image = this.mathProcess(src);

        double gamma = 0.5; // gamma exponent; a value below 1 brightens dark gray levels

        int type = image.getType();
        int width = src.getWidth();
        int height = src.getHeight();

        // pixels of the combined image from step 4
        int[] pixels = new int[width * height];
        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            image.getRaster().getDataElements(0, 0, width, height, pixels);
        } else {
            image.getRGB(0, 0, width, height, pixels, 0, width);
        }

        int[] outPixels = new int[width * height];

        // build a 256-entry lookup table (LUT) so the power function runs once per gray level
        int[] lut = new int[256];
        for (int i = 0; i < 256; i++) {

            float f = (float) (i / 255.0);
            f = (float) Math.pow(f, gamma);

            lut[i] = (int) (f * 255.0);
        }

        int r = 0, g = 0, b = 0;
        int or = 0, og = 0, ob = 0;
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {

                int pixel = pixels[row * width + col];

                r = (pixel >> 16) & 0XFF;
                g = (pixel >> 8) & 0XFF;
                b = (pixel) & 0XFF;

                or = lut[r];
                og = lut[g];
                ob = lut[b];

                outPixels[row * width + col] = (255 << 24) | (clamp(or) << 16)
                        | (clamp(og) << 8) | (clamp(ob));

            }
        }

        BufferedImage dest = new BufferedImage(width, height,
                BufferedImage.TYPE_INT_ARGB);
        if (type == BufferedImage.TYPE_INT_ARGB
                || type == BufferedImage.TYPE_INT_RGB) {
            dest.getRaster().setDataElements(0, 0, width, height, outPixels);
        } else {
            dest.setRGB(0, 0, width, height, outPixels, 0, width);
        }
        
        return dest;
    }

Note: the original is on the left and the final result on the right. The gamma transformation expands the gray levels, revealing the full outline of the body.

 

     With that, the complete example is finished, and every step reproduces the corresponding figure in the book. If you spot any mistakes, corrections are welcome!
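To run the whole pipeline end to end, a minimal driver could look like the sketch below (my own addition: the class name MixEnhanceProcessor and the file names are placeholders, assuming the methods above live in that class):

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

import javax.imageio.ImageIO;

public class MixEnhanceDemo {
    public static void main(String[] args) throws IOException {
        // load the source image (placeholder file name)
        BufferedImage src = ImageIO.read(new File("skeleton.png"));

        // gammaProcess chains mathProcess -> laplaceAddProcess and
        // meanValueProcess -> sobelProcess, so one call runs every step
        MixEnhanceProcessor processor = new MixEnhanceProcessor();
        BufferedImage result = processor.gammaProcess(src);

        ImageIO.write(result, "png", new File("result.png"));
    }
}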

