ESPCN Single-Image Super-Resolution Reconstruction


Problem 3 of the 2019 ASC competition:

(Background introduction omitted.) In this competition, the participant should design an algorithm using SOTA strategies like deep learning to do the 4x SR upscaling for images which were down-sampled with a bicubic kernel. For instance, the resolution of a 400x600 image after 4x upscaling is 1600x2400. The evaluation will be done in a perceptual-quality aware manner. The perceptual index (PI) defined in pirm2018 [4] will be used to calculate the quality of the reconstructed high-resolution images. Lower PI means higher quality of the reconstructed image. Ma and NIQE are two no-reference image quality measures [5-6].
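For reference, the perceptual index combines the two no-reference measures; assuming the standard PIRM-2018 definition, it is:

```latex
\mathrm{PI} = \frac{1}{2}\big((10 - \mathrm{Ma}) + \mathrm{NIQE}\big)
```

so a higher Ma score and a lower NIQE score both lower the PI (better perceptual quality).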

P.S.: adapted from a software engineering practice assignment.


Contents: Background / Implementation / Code Analysis / Optimization / Results / Source

Background:

It was a dim stretch of time. Winter break was half over, I had just finished my lab work, and with zero machine-learning experience I recklessly claimed this task. My English being poor, I didn't read the problem statement carefully; after hastily finishing 'The Master Algorithm' I picked up 'Machine Learning in Action', opened an online course I had bought, and dove headfirst into the TensorFlow rabbit hole. Later, by lucky coincidence, I discovered that PyTorch fit my laptop's hardware and environment much better, so I started teaching myself PyTorch; I recommend the '莫烦python' (MorvanZhou) course here: simple, clear, and quick to get through. After a quick pass over the course I finally read the problem properly. 'Only PyTorch may be used; no other machine-learning libraries.' Scary in hindsight; luckily I hadn't picked the wrong library.

Implementation:

It started with a lot of searching through papers, blog posts, and source code. After comparing and trying several architectures, I finally chose ESPCN, the sub-pixel convolutional neural network: a simple network that looked effective. At first I set the initial learning rate to 0.1 and training went terribly; the output was either all black or all white. My first thought was 'dead neurons', so, thinking myself clever, I added normalization against the paper's design. The results were actually decent: after a few epochs I could get 4x-resolution images with a reasonable PSNR. With this not-quite-ESPCN network I trained several models on my gaming laptop's CUDA GPU; the CPU kept triggering high-temperature alarms, and in the dead of winter, with a fan and a laptop cooling pad propped up, it finished training in tears.

Code Analysis:

Finding the source code was painful, and analyzing it while searching was even more torturous. Looking back now, the code is not that complex. Given our actual needs, we only analyze the single-image super-resolution part.

Source: github

Analysis approach

  1. Scan the directory tree and read the README.
  2. Conclude that the project has two main modules, image super-resolution and video super-resolution; whether one depends on the other is unknown.
  3. Fix the plan: trace back from test_image.py and train.py, understand the network model, the data loading, and the loss function; skim the training procedure; then try to reimplement it without third-party libraries.

Detailed analysis

  • test_image.py
import argparse
import os
from os import listdir

import numpy as np
import torch
from PIL import Image
from torch.autograd import Variable
from torchvision.transforms import ToTensor
from tqdm import tqdm

from data_utils import is_image_file
from model import Net

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Test Super Resolution')
    parser.add_argument('--upscale_factor', default=3, type=int, help='super resolution upscale factor')
    parser.add_argument('--model_name', default='epoch_3_100.pt', type=str, help='super resolution model name')
    opt = parser.parse_args()

    UPSCALE_FACTOR = opt.upscale_factor
    MODEL_NAME = opt.model_name

    path = 'data/test/SRF_' + str(UPSCALE_FACTOR) + '/data/'
    images_name = [x for x in listdir(path) if is_image_file(x)]
    model = Net(upscale_factor=UPSCALE_FACTOR)
    if torch.cuda.is_available():
        model = model.cuda()
    model.load_state_dict(torch.load('epochs/' + MODEL_NAME))

    out_path = 'results/SRF_' + str(UPSCALE_FACTOR) + '/'
    if not os.path.exists(out_path):
        os.makedirs(out_path)
    for image_name in tqdm(images_name, desc='convert LR images to HR images'):

        img = Image.open(path + image_name).convert('YCbCr')
        y, cb, cr = img.split()
        image = Variable(ToTensor()(y)).view(1, -1, y.size[1], y.size[0])
        if torch.cuda.is_available():
            image = image.cuda()

        out = model(image)
        out = out.cpu()
        out_img_y = out.data[0].numpy()
        out_img_y *= 255.0
        out_img_y = out_img_y.clip(0, 255)
        out_img_y = Image.fromarray(np.uint8(out_img_y[0]), mode='L')
        out_img_cb = cb.resize(out_img_y.size, Image.BICUBIC)
        out_img_cr = cr.resize(out_img_y.size, Image.BICUBIC)
        out_img = Image.merge('YCbCr', [out_img_y, out_img_cb, out_img_cr]).convert('RGB')
        out_img.save(out_path + image_name)

The first part of the script is mostly module imports and argument parsing; skip it for now.

The img.convert('YCbCr') line shows that the module reduces the image to its Y (luma) channel, effectively a grayscale pass, before super-resolving. I don't know the exact reason, so I instead processed all three channels directly, while keeping the overall idea.

Tracing the 'from model import Net' import leads to the network model.

The data-loading code here is a bit messy and the overall approach hard for me to digest; I'll look for it in train.py instead. First, the net.

  • model.py
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self, upscale_factor):
        super(Net, self).__init__()

        self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))
        self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))
        self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))
        self.conv4 = nn.Conv2d(32, 1 * (upscale_factor ** 2), (3, 3), (1, 1), (1, 1))
        self.pixel_shuffle = nn.PixelShuffle(upscale_factor)

    def forward(self, x):
        x = F.tanh(self.conv1(x))
        x = F.tanh(self.conv2(x))
        x = F.tanh(self.conv3(x))
        x = F.sigmoid(self.pixel_shuffle(self.conv4(x)))
        return x

A direct sub-pixel convolutional network: easy to understand, and easy to adapt to three channels.
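The sub-pixel layer can be checked in isolation: nn.PixelShuffle(r) rearranges an (N, C·r², H, W) tensor into (N, C, r·H, r·W), which is exactly how conv4's r² channels become an r-times-larger image. A small sketch:

```python
import torch
import torch.nn as nn

r = 3                               # upscale factor
shuffle = nn.PixelShuffle(r)
x = torch.randn(1, r * r, 10, 10)   # r^2 channels of sub-pixel values
y = shuffle(x)                      # channels folded into spatial dims
print(tuple(y.shape))               # -> (1, 1, 30, 30)
```

No parameters are involved; it is a pure rearrangement, so the element count is preserved.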

Notably, there is no normalization between the hidden layers, so the learning rate must not be too large. At first I trained with a large rate and got unrecognizable images, so I quietly added normalization, going against the original paper. Looking back now, though, inserting normalization between some of the conv layers might actually give better training results.
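For the record, the normalized variant I experimented with looked roughly like this (a sketch of my own deviation, with BatchNorm after the hidden convs; this is not the paper's or the repo's model):

```python
import torch
import torch.nn as nn

class NetBN(nn.Module):
    """ESPCN trunk with BatchNorm between hidden convs (my own deviation)."""
    def __init__(self, upscale_factor):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 5, 1, 2), nn.BatchNorm2d(64), nn.Tanh(),
            nn.Conv2d(64, 64, 3, 1, 1), nn.BatchNorm2d(64), nn.Tanh(),
            nn.Conv2d(64, 32, 3, 1, 1), nn.BatchNorm2d(32), nn.Tanh(),
            nn.Conv2d(32, upscale_factor ** 2, 3, 1, 1),
            nn.PixelShuffle(upscale_factor),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)
```

The output shape matches the original Net, so it drops into the same training script unchanged.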

Sigmoid is usually used to compute class probabilities in classification; using it here to squash outputs into the [0, 1] pixel range is a bit unusual and worth remembering.

  • train.py
import argparse

import torch
import torch.nn as nn
import torch.optim as optim
import torchnet as tnt
import torchvision.transforms as transforms
from torch.autograd import Variable
from torch.optim.lr_scheduler import MultiStepLR
from torch.utils.data import DataLoader
from torchnet.engine import Engine
from torchnet.logger import VisdomPlotLogger
from tqdm import tqdm

from data_utils import DatasetFromFolder
from model import Net
from psnrmeter import PSNRMeter


def processor(sample):
    data, target, training = sample
    data = Variable(data)
    target = Variable(target)
    if torch.cuda.is_available():
        data = data.cuda()
        target = target.cuda()

    output = model(data)
    loss = criterion(output, target)

    return loss, output


def on_sample(state):
    state['sample'].append(state['train'])


def reset_meters():
    meter_psnr.reset()
    meter_loss.reset()


def on_forward(state):
    meter_psnr.add(state['output'].data, state['sample'][1])
    meter_loss.add(state['loss'].data[0])


def on_start_epoch(state):
    reset_meters()
    scheduler.step()
    state['iterator'] = tqdm(state['iterator'])


def on_end_epoch(state):
    print('[Epoch %d] Train Loss: %.4f (PSNR: %.2f db)' % (
        state['epoch'], meter_loss.value()[0], meter_psnr.value()))

    train_loss_logger.log(state['epoch'], meter_loss.value()[0])
    train_psnr_logger.log(state['epoch'], meter_psnr.value())

    reset_meters()

    engine.test(processor, val_loader)
    val_loss_logger.log(state['epoch'], meter_loss.value()[0])
    val_psnr_logger.log(state['epoch'], meter_psnr.value())

    print('[Epoch %d] Val Loss: %.4f (PSNR: %.2f db)' % (
        state['epoch'], meter_loss.value()[0], meter_psnr.value()))

    torch.save(model.state_dict(), 'epochs/epoch_%d_%d.pt' % (UPSCALE_FACTOR, state['epoch']))


if __name__ == "__main__":

    parser = argparse.ArgumentParser(description='Train Super Resolution')
    parser.add_argument('--upscale_factor', default=3, type=int, help='super resolution upscale factor')
    parser.add_argument('--num_epochs', default=100, type=int, help='super resolution epochs number')
    opt = parser.parse_args()

    UPSCALE_FACTOR = opt.upscale_factor
    NUM_EPOCHS = opt.num_epochs

    train_set = DatasetFromFolder('data/train', upscale_factor=UPSCALE_FACTOR, input_transform=transforms.ToTensor(),
                                  target_transform=transforms.ToTensor())
    val_set = DatasetFromFolder('data/val', upscale_factor=UPSCALE_FACTOR, input_transform=transforms.ToTensor(),
                                target_transform=transforms.ToTensor())
    train_loader = DataLoader(dataset=train_set, num_workers=4, batch_size=64, shuffle=True)
    val_loader = DataLoader(dataset=val_set, num_workers=4, batch_size=64, shuffle=False)

    model = Net(upscale_factor=UPSCALE_FACTOR)
    criterion = nn.MSELoss()
    if torch.cuda.is_available():
        model = model.cuda()
        criterion = criterion.cuda()

    print('# parameters:', sum(param.numel() for param in model.parameters()))

    optimizer = optim.Adam(model.parameters(), lr=1e-2)
    scheduler = MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)

    engine = Engine()
    meter_loss = tnt.meter.AverageValueMeter()
    meter_psnr = PSNRMeter()

    train_loss_logger = VisdomPlotLogger('line', opts={'title': 'Train Loss'})
    train_psnr_logger = VisdomPlotLogger('line', opts={'title': 'Train PSNR'})
    val_loss_logger = VisdomPlotLogger('line', opts={'title': 'Val Loss'})
    val_psnr_logger = VisdomPlotLogger('line', opts={'title': 'Val PSNR'})

    engine.hooks['on_sample'] = on_sample
    engine.hooks['on_forward'] = on_forward
    engine.hooks['on_start_epoch'] = on_start_epoch
    engine.hooks['on_end_epoch'] = on_end_epoch

    engine.train(processor, train_loader, maxepoch=NUM_EPOCHS, optimizer=optimizer)

Most of what train.py does was already seen in the test script; here the focus is the loss function and the preparation of training data.

The data-loading class is DatasetFromFolder, and the loss function is mean squared error (the training script pulls in a library the competition does not allow, so I'll just write an MSE myself).
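Writing the MSE by hand needs nothing beyond core tensor ops; a minimal replacement for nn.MSELoss (my own sketch):

```python
import torch

def mse_loss(output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean squared error: mean of elementwise squared differences."""
    return ((output - target) ** 2).mean()
```

Because it is built from differentiable tensor operations, autograd backpropagates through it just like the library version.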

  • DatasetFromFolder
class DatasetFromFolder(Dataset):
    def __init__(self, dataset_dir, upscale_factor, input_transform=None, target_transform=None):
        super(DatasetFromFolder, self).__init__()
        self.image_dir = dataset_dir + '/SRF_' + str(upscale_factor) + '/data'
        self.target_dir = dataset_dir + '/SRF_' + str(upscale_factor) + '/target'
        self.image_filenames = [join(self.image_dir, x) for x in listdir(self.image_dir) if is_image_file(x)]
        self.target_filenames = [join(self.target_dir, x) for x in listdir(self.target_dir) if is_image_file(x)]
        self.input_transform = input_transform
        self.target_transform = target_transform

    def __getitem__(self, index):
        image, _, _ = Image.open(self.image_filenames[index]).convert('YCbCr').split()
        target, _, _ = Image.open(self.target_filenames[index]).convert('YCbCr').split()
        if self.input_transform:
            image = self.input_transform(image)
        if self.target_transform:
            target = self.target_transform(target)

        return image, target

    def __len__(self):
        return len(self.image_filenames)

Subclass the official Dataset and implement __getitem__ and __len__, and the result can be batched directly!!! Noted. But again I skipped the grayscale step and trained on all three channels directly.
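The pattern is worth isolating: any Dataset subclass with __getitem__ and __len__ can be handed to DataLoader, which does the batching. A toy sketch (PairDataset is a hypothetical stand-in, not from the repo):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PairDataset(Dataset):
    """Toy (input, target) pairs standing in for LR/HR image tensors."""
    def __init__(self, n):
        self.inputs = [torch.full((1, 4, 4), float(i)) for i in range(n)]
        self.targets = [torch.full((1, 8, 8), float(i)) for i in range(n)]

    def __getitem__(self, index):
        return self.inputs[index], self.targets[index]

    def __len__(self):
        return len(self.inputs)

loader = DataLoader(PairDataset(10), batch_size=4, shuffle=False)
x, y = next(iter(loader))
print(tuple(x.shape), tuple(y.shape))  # -> (4, 1, 4, 4) (4, 1, 8, 8)
```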

Optimization:

As training progressed and my understanding of machine learning deepened, I realized I had truly been 'too young, too simple'. During my studies I suddenly had the idea of removing the normalization again. With some trepidation I actually did it, and set the initial learning rate to 1e-4. To me it felt like a miracle: training was not only much faster, the results were also better than before. My guess is that the earlier rate was too large: the parameter updates were so big that generated pixel values saturated at 0 or 1 (per channel), small adjustments left them stuck at 0/1, the mean squared error barely changed, and the parameters kept moving within a wrong region, so the network could learn nothing useful, i.e. 'dead neurons'. So shrinking the initial learning rate is a good choice; even without stacking normalization layers it can speed up training considerably.
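The saturation story can be checked numerically: sigmoid's gradient is y·(1−y), so once a pre-activation is pushed far from zero the gradient collapses and the unit stops updating. A small check (my own illustration, not a rigorous diagnosis of that training run):

```python
import torch

x = torch.tensor([0.0, 20.0], requires_grad=True)
y = torch.sigmoid(x)
y.sum().backward()

# At 0 the sigmoid gradient is 0.25; at 20 the output saturates near 1.0
# and the gradient is on the order of 1e-9, so weight updates effectively stop.
print(y.tolist())
print(x.grad.tolist())
```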

Results:

The upscaling results on images outside the training set are shown below: output generated by the code on the left, the enlarged original on the right.

Source:
espcn

P.S.: the code has been uploaded; a GAN variant may be added later.





 