The seminal super-resolution paper: ECCV 2014, CUHK, Chao Dong
A three-layer network; the paper also explains the role of each layer
The model is trained with Caffe and inference is done in MATLAB; code at http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html
Reconstructing a 700×700-pixel image takes 30+ seconds, and the result is still fairly blurry
You can run it on your own images to get an intuitive feel:
path = 'E:\Download\超分辨率\test\test';
list = dir(path);
for i1 = 1:size(list, 1)
    name = list(i1).name;
    if strcmp(name, '.') || strcmp(name, '..')  % name == '.' errors on '..' (length mismatch)
        continue;
    end
    fullName = fullfile(path, name);
    demo_SR(fullName);  % the author's demo script must first be turned into a function
end
Evaluation
Compute PSNR and SSIM; see the VDSR MATLAB code for how they are computed
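As a rough sketch of what those metrics compute (assuming 8-bit images; note the standard SSIM averages the formula over local Gaussian windows, as in the VDSR MATLAB code, while this variant applies it once globally):

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-shape images."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Single-window (global) SSIM, illustrating the formula only; the
    benchmark metric averages it over local windows instead."""
    x = x.astype(np.float64); y = y.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give PSNR = ∞ and SSIM = 1; a uniform offset of 10 gray levels gives MSE = 100, i.e. PSNR = 10·log10(255²/100) ≈ 28.13 dB.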
Common benchmark datasets: Set5, Set14, B100, Urban100; more recent papers also use Manga109, parts of DIV2K, etc.
Plotting performance on a two-dimensional chart is also quite intuitive
Drawbacks:
1. Works only for a single scale
2. The convolutional receptive field is too small
3. The 1e-5 learning rate makes training too slow and needs to be raised; inference is also slow
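Drawback 2 can be quantified: for a stack of stride-1 convolutions the receptive field is 1 + Σ(kᵢ − 1), so SRCNN's 9-1-5 architecture sees only a 13×13 input patch per output pixel, versus 41×41 for twenty 3×3 layers as in VDSR (discussed later). A small sketch of the general formula:

```python
def receptive_field(kernels, strides=None):
    """Receptive field of stacked conv layers (no dilation).
    Each layer adds (k - 1) * jump; the jump scales with stride."""
    strides = strides or [1] * len(kernels)
    rf, jump = 1, 1
    for k, s in zip(kernels, strides):
        rf += (k - 1) * jump
        jump *= s
    return rf
```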
How to train the model
Notes on the training data:
For a fair comparison with traditional example-based methods, we use the same training set, test sets, and protocols as in [20]. Specifically, the training set consists of 91 images. The Set5 [2] (5 images) is used to evaluate the performance of upscaling factors 2, 3, and 4, and Set14 [28] (14 images) is used to evaluate the upscaling factor 3. In addition to the 91-image training set, we also investigate a larger training set in Section 5.2.
Anchored Neighborhood Regression for Fast Example-Based Super-Resolution
http://www.vision.ee.ethz.ch/~timofter/ICCV2013_ID1774_SUPPLEMENTARY/index.html
The CVPR 2018 SR papers basically use larger training sets; in fact, any training set will do as long as it does not overlap with the test data. DIV2K consists of 800 training images, 100 validation images, and 100 test images. We train all of our models with 800 training images and use 5 validation images in the training process. For testing, we use five standard benchmark datasets: Set5 [1], Set14 [33], B100 [18], Urban100 [8], and Manga109 [19]. The SR results are evaluated with PSNR and SSIM [32] on Y channel (i.e., luminance) of transformed YCbCr space.
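The Y-channel extraction mentioned above can be sketched as follows, assuming the ITU-R BT.601 "studio swing" coefficients for uint8 RGB input (the convention most SR benchmark code uses, with Y in [16, 235]; full-range variants also exist, so check each codebase):

```python
import numpy as np

def rgb_to_y(rgb):
    """Luminance (Y) channel of BT.601 YCbCr from a uint8 RGB array,
    the channel on which SR papers report PSNR/SSIM."""
    rgb = rgb.astype(np.float64)
    return (65.481 * rgb[..., 0] + 128.553 * rgb[..., 1]
            + 24.966 * rgb[..., 2]) / 255.0 + 16.0
```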
I have packaged the SRCNN training data (the 91 images) and the test sets Set5, Set14, B100, and Urban100:
Link: https://pan.baidu.com/s/1f5CrntYV2RgsAVoDx3hvUg  Extraction code: jv2f
A Keras version of the model-training code, for reference:
https://github.com/DeNA/SRCNNKit/tree/master/script
It uses misc to rescale and process the images into the network's training data
It builds the network and trains it on batches produced by a generator
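The two steps above can be sketched in pure numpy (the patch size, batch size, and the box-average/nearest-neighbor degradation here are simplified stand-ins; the actual script resizes with misc and feeds the generator to Keras training):

```python
import numpy as np

def degrade(img, scale=2):
    """Crude LR simulation: box-average downscale, then nearest-neighbor
    upscale back to the original size. SRCNN-style pipelines use bicubic
    resizing (e.g. via misc/PIL); this numpy-only stand-in just
    illustrates the LR/HR pairing."""
    h, w = img.shape
    small = img.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)

def patch_generator(images, patch=32, batch=16, scale=2, rng=None):
    """Yield (lr_batch, hr_batch) pairs of random grayscale sub-patches,
    the kind of generator passed to Keras training. patch=32 keeps the
    toy degradation divisible by scale (SRCNN itself uses 33x33)."""
    rng = rng or np.random.default_rng(0)
    while True:
        lr, hr = [], []
        for _ in range(batch):
            img = images[rng.integers(len(images))]
            y = rng.integers(img.shape[0] - patch + 1)
            x = rng.integers(img.shape[1] - patch + 1)
            hr_p = img[y:y + patch, x:x + patch]
            hr.append(hr_p)
            lr.append(degrade(hr_p, scale))
        # add a trailing channel axis, as a conv net expects
        yield np.stack(lr)[..., None], np.stack(hr)[..., None]
```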
VDSR
CVPR 2016, Seoul National University
https://cv.snu.ac.kr/research/VDSR/