Scaled Image Display in Qt
There are many ways to scale an image for display. In the example from OpenCV&Qt學習之一——打開圖片文件並顯示, the label control was resized to fit the image with
ui->imagelabel->resize(ui->imagelabel->pixmap()->size());
However, because the window size is fixed, an image smaller than the control ends up squeezed into the top-left corner, while an image larger than the control is not shown in full. So the first thing this example does is scale the image to fit the window.
Since the image processing here is based on both OpenCV and Qt, the scaling could be done on either side. In this project I use OpenCV for the raw image processing and Qt for display, so the scaled display is handled by Qt.
QImage provides basic but quite powerful scaling functions; the details can be looked up in Qt's bundled help.
Function prototype:
QImage QImage::scaled ( const QSize & size, Qt::AspectRatioMode aspectRatioMode = Qt::IgnoreAspectRatio, Qt::TransformationMode transformMode = Qt::FastTransformation ) const
This overload takes a QSize directly; there is also a form that takes the width and height separately:
QImage QImage::scaled ( int width, int height, Qt::AspectRatioMode aspectRatioMode = Qt::IgnoreAspectRatio, Qt::TransformationMode transformMode = Qt::FastTransformation ) const
Returns a copy of the image scaled to a rectangle defined by the given size according to the given aspectRatioMode and transformMode.
- If aspectRatioMode is Qt::IgnoreAspectRatio, the image is scaled to size.
- If aspectRatioMode is Qt::KeepAspectRatio, the image is scaled to a rectangle as large as possible inside size, preserving the aspect ratio.
- If aspectRatioMode is Qt::KeepAspectRatioByExpanding, the image is scaled to a rectangle as small as possible outside size, preserving the aspect ratio.
The official documentation explains this clearly enough, and the implementation is simple:
{
    QImage imgScaled;
    imgScaled = img.scaled(ui->imagelabel->size(), Qt::KeepAspectRatio);
    // imgScaled = img.QImage::scaled(ui->imagelabel->width(), ui->imagelabel->height(), Qt::KeepAspectRatio);
    ui->imagelabel->setPixmap(QPixmap::fromImage(imgScaled));
}
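Incidentally, scaled() also takes a third parameter, the transformation mode. The default Qt::FastTransformation uses nearest-neighbour sampling; if the scaled result looks jagged, Qt::SmoothTransformation (bilinear filtering) can be passed instead. A minimal variant of the call above:

imgScaled = img.scaled(ui->imagelabel->size(), Qt::KeepAspectRatio, Qt::SmoothTransformation);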
The display result of this code is as follows:
Some Questions and Understanding about QImage
While searching for material I referred to the blog post Qt中圖像的顯示與基本操作, but a few points in it puzzled me. The relevant code from that post is:
QImage* imgScaled = new QImage;
*imgScaled = img->scaled(width, height, Qt::KeepAspectRatio);
ui->label->setPixmap(QPixmap::fromImage(*imgScaled));
Comparing this with my earlier code, I noticed a few differences:
- The way the image is declared: here it is QImage* imgScaled = new QImage
- The way scaled() is called: mine is imgScaled = img.scaled(...), while the post uses *imgScaled = img->scaled(...). At first I also wrote . as -> and could not find my mistake for quite a while; the compiler reported base operand of '->' has non-pointer type 'QImage'. (A concrete comparison of the two forms is sketched below.)
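To make the difference concrete, here is a small sketch of my own (the variable names are made up) showing the two ways of creating a QImage and the member-access operator each one requires:

QImage imgObj;                     // object on the stack: members are accessed with '.'
QImage *imgPtr = new QImage;       // pointer to a heap object: members are accessed with '->'

QImage a = imgObj.scaled(100, 100, Qt::KeepAspectRatio);    // '.' on an object
QImage b = imgPtr->scaled(100, 100, Qt::KeepAspectRatio);   // '->' on a pointer, same as (*imgPtr).scaled(...)

delete imgPtr;                     // the heap object has to be released manually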
Digging further into the Qt help manual, it turns out QImage really does have a lot of constructors:
Public Functions
QImage ()
QImage ( const QSize & size, Format format )
QImage ( int width, int height, Format format )
QImage ( uchar * data, int width, int height, Format format )
QImage ( const uchar * data, int width, int height, Format format )
QImage ( uchar * data, int width, int height, int bytesPerLine, Format format )
QImage ( const uchar * data, int width, int height, int bytesPerLine, Format format )
QImage ( const char * const[] xpm )
QImage ( const QString & fileName, const char * format = 0 )
QImage ( const char * fileName, const char * format = 0 )
QImage ( const QImage & image )
~QImage ()
QImage offers constructors for all kinds of situations, and the manual describes where each one applies, but I still could not figure out which constructors QImage image; and QImage* image = new QImage correspond to, or what the difference between them actually is. In the previous post OpenCV&Qt學習之二——QImage的進一步認識 I wrote down my understanding of the image data: that a QImage is a reorganisation of existing data, essentially a format wrapper, with the data still pointing to the original buffer. Judging from the constructor list, that is not entirely correct; it depends on which constructor is used.
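My tentative understanding (based on the constructor descriptions in the manual, so treat it as an assumption rather than a verified conclusion): whether a QImage owns its pixel data or merely wraps existing data depends on which constructor is used, not on whether the object itself lives on the stack or the heap. A small sketch:

QImage imgA;                                          // QImage(): a null image, no pixel data yet
QImage imgB(640, 480, QImage::Format_RGB888);         // allocates and owns its own buffer
QImage *imgC = new QImage(640, 480, QImage::Format_RGB888);   // same thing, just created on the heap

// The uchar* constructors do not copy: the QImage only wraps the external buffer,
// so the buffer must stay valid for as long as the QImage is used. This is what the
// Mat2QImage function further below relies on when it wraps the cv::Mat data.
uchar buffer[640 * 480 * 3];
QImage imgD(buffer, 640, 480, 640 * 3, QImage::Format_RGB888);

delete imgC;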
I rummaged through more material, but what is available online is just a handful of articles copied back and forth, most of them fairly old, and they did not resolve my questions about the underlying data structures. Qt and OpenCV demand a fairly solid grasp of C and pointers, and after a long stretch of writing microcontroller code my C has become rusty. I will set this question aside for now and work it out gradually as I learn more.
Preliminary Image Processing with OpenCV
The following two examples are adapted from the book OpenCV 2 Computer Vision Application Programming Cookbook, a fairly recent and quite accessible introductory text.
Salt-and-pepper noise
For the basics of how image data is organised, see this passage:
Fundamentally, an image is a matrix of numerical values. This is why OpenCV 2 manipulates them using the cv::Mat data structure. Each element of the matrix represents one pixel. For a gray-level image (a "black-and-white" image), pixels are unsigned 8-bit values where 0 corresponds to black and 255 corresponds to white. For a color image, three such values per pixel are required to represent the usual three primary color channels {Red, Green, Blue}. A matrix element is therefore made, in this case, of a triplet of values.
Here, adding salt-and-pepper noise to an image is used as an example of how to access individual elements of the image matrix. Salt-and-pepper noise simply replaces some randomly chosen pixels with black or white ones, so adding it is straightforward: randomly generate row and column indices and overwrite the pixel values at those positions. As the passage above notes, for a color image all three RGB channels have to be changed. The code is as follows:
void Widget::salt(cv::Mat &image, int n)
{
    int i, j;
    for (int k = 0; k < n; k++) {
        i = qrand() % image.cols;
        j = qrand() % image.rows;
        if (image.channels() == 1) {
            // gray-level image
            image.at<uchar>(j, i) = 255;
        } else if (image.channels() == 3) {
            // color image
            image.at<cv::Vec3b>(j, i)[0] = 255;
            image.at<cv::Vec3b>(j, i)[1] = 255;
            image.at<cv::Vec3b>(j, i)[2] = 255;
        }
    }
}
The result of processing the Koala picture that ships with Windows 7 is shown below (the program itself runs on Ubuntu 12.04):
Reducing the Number of Colors
Many kinds of processing need to traverse every pixel in the image, and how to carry out that traversal is worth thinking about. The following passage introduces the problem:
Color images are composed of 3-channel pixels. Each of these channels corresponds to the intensity value of one of the three primary colors (red, green, blue). Since each of these values is an 8-bit unsigned char, the total number of colors is 256x256x256, which is more than 16 million colors. Consequently, to reduce the complexity of an analysis, it is sometimes useful to reduce the number of colors in an image. One simple way to achieve this goal is to simply subdivide the RGB space into cubes of equal sizes. For example, if you reduce the number of colors in each dimension by 8, then you would obtain a total of 32x32x32 colors. Each color in the original image is then assigned a new color value in the color-reduced image that corresponds to the value in the center of the cube to which it belongs.
This example reduces the number of colors by operating on each pixel in turn. The basic idea is covered in the quotation above, and the implementation is fairly direct. In a color image the three channel values of each pixel are stored one after another along each row, and in cv::Mat the channel order is BGR, so an image needs a data block of width x height x 3 uchars. Note, however, that some processors handle rows more efficiently when the row length in bytes is a multiple of 4 or 8, so rows may be padded with a few extra bytes; these extra bytes are neither displayed nor saved, and their values are ignored. (A small helper sketched right after this paragraph shows how to inspect the actual row layout.)
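A small helper of my own (not from the book) to inspect this: cv::Mat::step gives the real number of bytes per row, which may be larger than cols*channels() when padding is present, and isContinuous() reports whether the whole image is stored without padding:

#include <iostream>
#include <opencv2/core/core.hpp>

void printLayout(const cv::Mat &image)
{
    std::cout << "cols*channels: " << image.cols * image.channels()
              << "  step (bytes per row): " << (size_t)image.step
              << "  continuous: " << image.isContinuous() << std::endl;
}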
The code implementing this operation is as follows:
// using .ptr and []
void Widget::colorReduce0(cv::Mat &image, int div)
{
    int nl = image.rows;                      // number of lines
    int nc = image.cols * image.channels();   // total number of elements per line
    for (int j = 0; j < nl; j++) {
        uchar* data = image.ptr<uchar>(j);
        for (int i = 0; i < nc; i++) {
            // process each pixel ---------------------
            data[i] = data[i] / div * div + div / 2;
            // end of pixel processing ----------------
        } // end of line
    }
}
The line data[i] = data[i]/div*div + div/2; reduces the number of color levels through integer division; what I had not figured out at first is why div/2 is added at the end.
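Working the arithmetic through by hand (my own example, with div = 64) suggests the answer: the integer division snaps each value to the lower bound of its interval, and adding div/2 then moves it to the center of the interval, which matches the "center of the cube" description in the quotation above:

uchar v = 100;                         // 100 lies in the interval [64, 128)
uchar reduced = v / 64 * 64 + 64 / 2;  // 100/64*64 = 64, then 64 + 32 = 96, the interval's center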
The effect:
Program source code:
#include "widget.h" #include "ui_widget.h" #include <QDebug> Widget::Widget(QWidget *parent) : QWidget(parent), ui(new Ui::Widget) { ui->setupUi(this); } Widget::~Widget() { delete ui; } void Widget::on_openButton_clicked() { QString fileName = QFileDialog::getOpenFileName(this,tr("Open Image"), ".",tr("Image Files (*.png *.jpg *.bmp)")); qDebug()<<"filenames:"<<fileName; image = cv::imread(fileName.toAscii().data()); ui->imgfilelabel->setText(fileName); //here use 2 ways to make a copy // image.copyTo(originalimg); //make a copy originalimg = image.clone(); //clone the img qimg = Widget::Mat2QImage(image); display(qimg); //display by the label if(image.data) { ui->saltButton->setEnabled(true); ui->originalButton->setEnabled(true); ui->reduceButton->setEnabled(true); } } QImage Widget::Mat2QImage(const cv::Mat &mat) { QImage img; if(mat.channels()==3) { //cvt Mat BGR 2 QImage RGB cvtColor(mat,rgb,CV_BGR2RGB); img =QImage((const unsigned char*)(rgb.data), rgb.cols,rgb.rows, rgb.cols*rgb.channels(), QImage::Format_RGB888); } else { img =QImage((const unsigned char*)(mat.data), mat.cols,mat.rows, mat.cols*mat.channels(), QImage::Format_RGB888); } return img; } void Widget::display(QImage img) { QImage imgScaled; imgScaled = img.scaled(ui->imagelabel->size(),Qt::KeepAspectRatio); // imgScaled = img.QImage::scaled(ui->imagelabel->width(),ui->imagelabel->height(),Qt::KeepAspectRatio); ui->imagelabel->setPixmap(QPixmap::fromImage(imgScaled)); } void Widget::on_originalButton_clicked() { qimg = Widget::Mat2QImage(originalimg); display(qimg); } void Widget::on_saltButton_clicked() { salt(image,3000); qimg = Widget::Mat2QImage(image); display(qimg); } void Widget::on_reduceButton_clicked() { colorReduce0(image,64); qimg = Widget::Mat2QImage(image); display(qimg); } void Widget::salt(cv::Mat &image, int n) { int i,j; for (int k=0; k<n; k++) { i= qrand()%image.cols; j= qrand()%image.rows; if (image.channels() == 1) { // gray-level image image.at<uchar>(j,i)= 255; } else if (image.channels() == 3) { // color image image.at<cv::Vec3b>(j,i)[0]= 255; image.at<cv::Vec3b>(j,i)[1]= 255; image.at<cv::Vec3b>(j,i)[2]= 255; } } } // using .ptr and [] void Widget::colorReduce0(cv::Mat &image, int div) { int nl= image.rows; // number of lines int nc= image.cols * image.channels(); // total number of elements per line for (int j=0; j<nl; j++) { uchar* data= image.ptr<uchar>(j); for (int i=0; i<nc; i++) { // process each pixel --------------------- data[i]= data[i]/div*div+div/2; // end of pixel processing ---------------- } // end of line } }
#ifndef WIDGET_H
#define WIDGET_H

#include <QWidget>
#include <QImage>
#include <QFileDialog>
#include <QTimer>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

namespace Ui {
class Widget;
}

class Widget : public QWidget
{
    Q_OBJECT

public:
    explicit Widget(QWidget *parent = 0);
    ~Widget();

private slots:
    void on_openButton_clicked();
    QImage Mat2QImage(const cv::Mat &mat);
    void display(QImage image);
    void salt(cv::Mat &image, int n);
    void on_saltButton_clicked();
    void on_reduceButton_clicked();
    void colorReduce0(cv::Mat &image, int div);
    void on_originalButton_clicked();

private:
    Ui::Widget *ui;
    cv::Mat image;
    cv::Mat originalimg;   // store the original img
    QImage qimg;
    QImage imgScaled;
    cv::Mat rgb;
};

#endif // WIDGET_H
The book also gives more than ten other ways of implementing this operation:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

// using .ptr and []
void colorReduce0(cv::Mat &image, int div=64) {
    int nl= image.rows; // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line
    for (int j=0; j<nl; j++) {
        uchar* data= image.ptr<uchar>(j);
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            data[i]= data[i]/div*div + div/2;
            // end of pixel processing ----------------
        } // end of line
    }
}

// using .ptr and * ++
void colorReduce1(cv::Mat &image, int div=64) {
    int nl= image.rows; // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line
    for (int j=0; j<nl; j++) {
        uchar* data= image.ptr<uchar>(j);
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            *data++= *data/div*div + div/2;
            // end of pixel processing ----------------
        } // end of line
    }
}

// using .ptr and * ++ and modulo
void colorReduce2(cv::Mat &image, int div=64) {
    int nl= image.rows; // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line
    for (int j=0; j<nl; j++) {
        uchar* data= image.ptr<uchar>(j);
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            int v= *data;
            *data++= v - v%div + div/2;
            // end of pixel processing ----------------
        } // end of line
    }
}

// using .ptr and * ++ and bitwise
void colorReduce3(cv::Mat &image, int div=64) {
    int nl= image.rows; // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0
    for (int j=0; j<nl; j++) {
        uchar* data= image.ptr<uchar>(j);
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            *data++= *data&mask + div/2;
            // end of pixel processing ----------------
        } // end of line
    }
}

// direct pointer arithmetic
void colorReduce4(cv::Mat &image, int div=64) {
    int nl= image.rows; // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    int step= image.step; // effective width
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0
    // get the pointer to the image buffer
    uchar *data= image.data;
    for (int j=0; j<nl; j++) {
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            *(data+i)= *data&mask + div/2;
            // end of pixel processing ----------------
        } // end of line
        data+= step; // next line
    }
}

// using .ptr and * ++ and bitwise with image.cols * image.channels()
void colorReduce5(cv::Mat &image, int div=64) {
    int nl= image.rows; // number of lines
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0
    for (int j=0; j<nl; j++) {
        uchar* data= image.ptr<uchar>(j);
        for (int i=0; i<image.cols * image.channels(); i++) {
            // process each pixel ---------------------
            *data++= *data&mask + div/2;
            // end of pixel processing ----------------
        } // end of line
    }
}

// using .ptr and * ++ and bitwise (continuous)
void colorReduce6(cv::Mat &image, int div=64) {
    int nl= image.rows; // number of lines
    int nc= image.cols * image.channels(); // total number of elements per line
    if (image.isContinuous()) {
        // then no padded pixels
        nc= nc*nl;
        nl= 1; // it is now a 1D array
    }
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0
    for (int j=0; j<nl; j++) {
        uchar* data= image.ptr<uchar>(j);
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            *data++= *data&mask + div/2;
            // end of pixel processing ----------------
        } // end of line
    }
}

// using .ptr and * ++ and bitwise (continuous+channels)
void colorReduce7(cv::Mat &image, int div=64) {
    int nl= image.rows; // number of lines
    int nc= image.cols ; // number of columns
    if (image.isContinuous()) {
        // then no padded pixels
        nc= nc*nl;
        nl= 1; // it is now a 1D array
    }
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0
    for (int j=0; j<nl; j++) {
        uchar* data= image.ptr<uchar>(j);
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            *data++= *data&mask + div/2;
            *data++= *data&mask + div/2;
            *data++= *data&mask + div/2;
            // end of pixel processing ----------------
        } // end of line
    }
}

// using Mat_ iterator
void colorReduce8(cv::Mat &image, int div=64) {
    // get iterators
    cv::Mat_<cv::Vec3b>::iterator it= image.begin<cv::Vec3b>();
    cv::Mat_<cv::Vec3b>::iterator itend= image.end<cv::Vec3b>();
    for ( ; it!= itend; ++it) {
        // process each pixel ---------------------
        (*it)[0]= (*it)[0]/div*div + div/2;
        (*it)[1]= (*it)[1]/div*div + div/2;
        (*it)[2]= (*it)[2]/div*div + div/2;
        // end of pixel processing ----------------
    }
}

// using Mat_ iterator and bitwise
void colorReduce9(cv::Mat &image, int div=64) {
    // div must be a power of 2
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0
    // get iterators
    cv::Mat_<cv::Vec3b>::iterator it= image.begin<cv::Vec3b>();
    cv::Mat_<cv::Vec3b>::iterator itend= image.end<cv::Vec3b>();
    // scan all pixels
    for ( ; it!= itend; ++it) {
        // process each pixel ---------------------
        (*it)[0]= (*it)[0]&mask + div/2;
        (*it)[1]= (*it)[1]&mask + div/2;
        (*it)[2]= (*it)[2]&mask + div/2;
        // end of pixel processing ----------------
    }
}

// using MatIterator_
void colorReduce10(cv::Mat &image, int div=64) {
    // get iterators
    cv::Mat_<cv::Vec3b> cimage= image;
    cv::Mat_<cv::Vec3b>::iterator it=cimage.begin();
    cv::Mat_<cv::Vec3b>::iterator itend=cimage.end();
    for ( ; it!= itend; it++) {
        // process each pixel ---------------------
        (*it)[0]= (*it)[0]/div*div + div/2;
        (*it)[1]= (*it)[1]/div*div + div/2;
        (*it)[2]= (*it)[2]/div*div + div/2;
        // end of pixel processing ----------------
    }
}

void colorReduce11(cv::Mat &image, int div=64) {
    int nl= image.rows; // number of lines
    int nc= image.cols; // number of columns
    for (int j=0; j<nl; j++) {
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            image.at<cv::Vec3b>(j,i)[0]= image.at<cv::Vec3b>(j,i)[0]/div*div + div/2;
            image.at<cv::Vec3b>(j,i)[1]= image.at<cv::Vec3b>(j,i)[1]/div*div + div/2;
            image.at<cv::Vec3b>(j,i)[2]= image.at<cv::Vec3b>(j,i)[2]/div*div + div/2;
            // end of pixel processing ----------------
        } // end of line
    }
}

// with input/ouput images
void colorReduce12(const cv::Mat &image, // input image
                   cv::Mat &result,      // output image
                   int div=64) {
    int nl= image.rows; // number of lines
    int nc= image.cols ; // number of columns
    // allocate output image if necessary
    result.create(image.rows,image.cols,image.type());
    // created images have no padded pixels
    nc= nc*nl;
    nl= 1; // it is now a 1D array
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0
    for (int j=0; j<nl; j++) {
        uchar* data= result.ptr<uchar>(j);
        const uchar* idata= image.ptr<uchar>(j);
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            *data++= (*idata++)&mask + div/2;
            *data++= (*idata++)&mask + div/2;
            *data++= (*idata++)&mask + div/2;
            // end of pixel processing ----------------
        } // end of line
    }
}

// using overloaded operators
void colorReduce13(cv::Mat &image, int div=64) {
    int n= static_cast<int>(log(static_cast<double>(div))/log(2.0));
    // mask used to round the pixel value
    uchar mask= 0xFF<<n; // e.g. for div=16, mask= 0xF0
    // perform color reduction
    image=(image&cv::Scalar(mask,mask,mask))+cv::Scalar(div/2,div/2,div/2);
}
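Since the point of all these variants is efficiency, it seems worth timing them. The snippet below is my own minimal sketch of that idea using OpenCV's tick counter; the file name test.jpg is just a placeholder, and it assumes it is compiled together with the functions above:

int main()
{
    cv::Mat image = cv::imread("test.jpg");   // placeholder file name
    if (image.empty())
        return 1;

    double t = (double)cv::getTickCount();
    colorReduce0(image, 64);                  // time any one of the variants above
    t = ((double)cv::getTickCount() - t) / cv::getTickFrequency();

    std::cout << "colorReduce0: " << t * 1000 << " ms" << std::endl;
    return 0;
}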