The code is the MATLAB version downloaded from the authors' project page (Prof. Xiaoou Tang's group, The Chinese University of Hong Kong): Learning a Deep Convolutional Network for Image Super-Resolution.
http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html
- demo_SR.m is the main script to run.
```matlab
up_scale = 3;
model = 'model\9-5-5(ImageNet)\x3.mat';
```
- The .mat file stores the model data: the weights and biases of the three convolutional layers, i.e., the w and b in y = wx + b.
```matlab
%% work on illuminance only
if size(im,3) > 1
    im = rgb2ycbcr(im);
    im = im(:, :, 1);
end
im_gnd = modcrop(im, up_scale);
im_gnd = single(im_gnd)/255;
```
For a color image, RGB is converted to YCbCr and only the Y (luminance) channel is processed.
A grayscale image is processed directly.
Here, the modcrop function crops the image to a size compatible with the upscaling factor (i.e., divisible by it), discarding the remainder rows and columns.
```matlab
function imgs = modcrop(imgs, modulo)
if size(imgs,3) == 1
    sz = size(imgs);
    sz = sz - mod(sz, modulo);
    imgs = imgs(1:sz(1), 1:sz(2));
else
    tmpsz = size(imgs);
    sz = tmpsz(1:2);
    sz = sz - mod(sz, modulo);
    imgs = imgs(1:sz(1), 1:sz(2), :);
end
```
mod takes the remainder; crop means trimming.
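For readers more comfortable with Python, the same cropping logic can be sketched in NumPy (function name and example sizes are mine, not from the original code):

```python
import numpy as np

def modcrop(img, modulo):
    # Crop so both spatial dimensions are divisible by `modulo`;
    # remainder rows/columns are simply discarded, as in modcrop.m.
    h = img.shape[0] - img.shape[0] % modulo
    w = img.shape[1] - img.shape[1] % modulo
    return img[:h, :w] if img.ndim == 2 else img[:h, :w, :]

print(modcrop(np.zeros((256, 257)), 3).shape)  # -> (255, 255)
```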
A double occupies 8 bytes, while a single occupies 4. The image is converted to single (single-precision float) and normalized to [0, 1], giving im_gnd.
```matlab
%% bicubic interpolation
im_l = imresize(im_gnd, 1/up_scale, 'bicubic');
im_b = imresize(im_l, up_scale, 'bicubic');
```
- im_l: im_gnd downscaled by bicubic interpolation.
- im_b: im_gnd downscaled and then upscaled back by the same factor with bicubic interpolation.
In this example the LR image is 85×85; after 3× bicubic upscaling, im_b is 255×255.
After border trimming with shave.m (3 pixels removed from each of the four sides), the size becomes 249×249.
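The size bookkeeping above can be verified with a few lines of arithmetic (the numbers follow the 255×255 example in the text):

```python
up_scale = 3
hr = 255                          # ground-truth side length after modcrop
lr = hr // up_scale               # 85: LR side after bicubic downscaling
bicubic = lr * up_scale           # 255: side of im_b after upscaling back
shaved = bicubic - 2 * up_scale   # 249: shave removes up_scale pixels per side
print(lr, bicubic, shaved)        # -> 85 255 249
```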
```matlab
function I = shave(I, border)
I = I(1+border(1):end-border(1), ...
      1+border(2):end-border(2), :, :);
```
or
```matlab
function I = shave(I, border)
I = I(1+border(1):end-border(1), 1+border(2):end-border(2));
```
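The same trimming in NumPy, as a minimal sketch (names are mine):

```python
import numpy as np

def shave(img, border):
    # Trim border[0] rows from top and bottom and border[1] columns
    # from left and right, mirroring shave.m above.
    return img[border[0]:img.shape[0] - border[0],
               border[1]:img.shape[1] - border[1]]

print(shave(np.zeros((255, 255)), (3, 3)).shape)  # -> (249, 249)
```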
The result is a grayscale image.
To generate a color image instead, see versions of the code modified by others.
[Reposted from]
SRCNN(一) - 劉一好 - 博客園 https://www.cnblogs.com/howtoloveyou/p/9691233.html
超分辨率重建SRCNN--Matlab 7.0中運行 - juebai123的博客 - CSDN博客 https://blog.csdn.net/juebai123/article/details/80532577
SRCNN.m
```matlab
function im_h = SRCNN(model, im_b)

%% load CNN model parameters
load(model);
[conv1_patchsize2, conv1_filters] = size(weights_conv1);
conv1_patchsize = sqrt(conv1_patchsize2);
[conv2_channels, conv2_patchsize2, conv2_filters] = size(weights_conv2);
conv2_patchsize = sqrt(conv2_patchsize2);
[conv3_channels, conv3_patchsize2] = size(weights_conv3);
conv3_patchsize = sqrt(conv3_patchsize2);
[hei, wid] = size(im_b);

%% conv1
weights_conv1 = reshape(weights_conv1, conv1_patchsize, conv1_patchsize, conv1_filters);
conv1_data = zeros(hei, wid, conv1_filters);
for i = 1 : conv1_filters
    conv1_data(:,:,i) = imfilter(im_b, weights_conv1(:,:,i), 'same', 'replicate');
    conv1_data(:,:,i) = max(conv1_data(:,:,i) + biases_conv1(i), 0);
end

%% conv2
conv2_data = zeros(hei, wid, conv2_filters);
for i = 1 : conv2_filters
    for j = 1 : conv2_channels
        conv2_subfilter = reshape(weights_conv2(j,:,i), conv2_patchsize, conv2_patchsize);
        conv2_data(:,:,i) = conv2_data(:,:,i) + imfilter(conv1_data(:,:,j), conv2_subfilter, 'same', 'replicate');
    end
    conv2_data(:,:,i) = max(conv2_data(:,:,i) + biases_conv2(i), 0);
end

%% conv3
conv3_data = zeros(hei, wid);
for i = 1 : conv3_channels
    conv3_subfilter = reshape(weights_conv3(i,:), conv3_patchsize, conv3_patchsize);
    conv3_data(:,:) = conv3_data(:,:) + imfilter(conv2_data(:,:,i), conv3_subfilter, 'same', 'replicate');
end

%% SRCNN reconstruction
im_h = conv3_data(:,:) + biases_conv3;
```
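The same three-layer forward pass can be sketched in NumPy. All names here are mine, and `filter2d` is a slow educational stand-in for MATLAB's `imfilter(..., 'same', 'replicate')`; this is an illustration of the structure, not a drop-in replacement:

```python
import numpy as np

def filter2d(img, kernel):
    # 2-D correlation with replicate ('edge') padding: a slow stand-in
    # for MATLAB's imfilter(img, kernel, 'same', 'replicate').
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)), mode='edge')
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(p[y:y + kh, x:x + kw] * kernel)
    return out

def srcnn_forward(im_b, w1, b1, w2, b2, w3, b3):
    # Shapes follow the MATLAB code above (after its reshape of conv1):
    # w1: (f1, f1, n1), w2: (n1, f2*f2, n2), w3: (n2, f3*f3).
    hei, wid = im_b.shape
    n1 = w1.shape[2]
    conv1 = np.zeros((hei, wid, n1))
    for i in range(n1):                                   # conv1 + ReLU
        conv1[:, :, i] = np.maximum(filter2d(im_b, w1[:, :, i]) + b1[i], 0)
    n2 = w2.shape[2]
    f2 = int(round(np.sqrt(w2.shape[1])))
    conv2 = np.zeros((hei, wid, n2))
    for i in range(n2):                                   # conv2 + ReLU
        for j in range(w2.shape[0]):
            k = w2[j, :, i].reshape((f2, f2), order='F')  # MATLAB reshape is column-major
            conv2[:, :, i] += filter2d(conv1[:, :, j], k)
        conv2[:, :, i] = np.maximum(conv2[:, :, i] + b2[i], 0)
    f3 = int(round(np.sqrt(w3.shape[1])))
    im_h = np.zeros((hei, wid))
    for i in range(w3.shape[0]):                          # conv3, linear output
        im_h += filter2d(conv2[:, :, i], w3[i].reshape((f3, f3), order='F'))
    return im_h + b3

# Tiny smoke test with random weights (not the trained model).
rng = np.random.default_rng(0)
im = rng.random((8, 8))
w1 = rng.random((3, 3, 2)); b1 = rng.random(2)
w2 = rng.random((2, 1, 2)); b2 = rng.random(2)
w3 = rng.random((2, 9));    b3 = 0.1
print(srcnn_forward(im, w1, b1, w2, b2, w3, b3).shape)  # -> (8, 8)
```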
- Conv1: f1 = 9×9, activation = 'relu'
- Conv2: f2 = 1×1, activation = 'relu'  # non-linear mapping, adds non-linearity
- Conv3: f3 = 5×5, activation = 'linear'
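As a rough sanity check on model size, the parameter count of this 9-1-5 network can be computed, assuming the SRCNN paper's baseline filter counts n1 = 64 and n2 = 32 and a single Y channel (the filter counts are an assumption; the bullets above do not list them):

```python
# Assumed filter counts (from the SRCNN paper's baseline, not stated above):
f1, n1 = 9, 64   # conv1: 9x9 patches, 64 filters
f2, n2 = 1, 32   # conv2: 1x1 non-linear mapping, 32 filters
f3, c = 5, 1     # conv3: 5x5 reconstruction, 1 output (Y) channel
params = (f1*f1*c*n1 + n1) + (f2*f2*n1*n2 + n2) + (f3*f3*n2*c + 1)
print(params)  # -> 8129 (weights + biases)
```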
[Reposted from]
SRCNN流程細節 - Python少年 - 博客園 https://www.cnblogs.com/echoboy/p/10289741.html