Image Style Transfer (PyTorch)


Image Style Transfer

It is hard to say in advance what the generated image should look like, so a naive supervised learning approach is unlikely to work. Instead, we start from the content image and optimize its pixels directly, minimizing a loss built from the feature maps of a pretrained network.

Content Loss

The content loss is computed from the difference between the content image's and the target image's feature representations (\(C_c\) and \(T_c\)) at a chosen layer:

\(l_{content} = \frac{1}{2}\sum (C_c-T_c)^2\)
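A minimal PyTorch sketch of this formula (the feature tensors here are random stand-ins; in the real pipeline they come from VGG, as shown later). Note that the training loop below uses torch.mean rather than the \(\frac{1}{2}\sum\), which only changes a constant scale absorbed by the loss weights:

import torch

# hypothetical stand-ins for conv4_2 feature maps of the content and target images
C_c = torch.randn(1, 512, 28, 28)
T_c = torch.randn(1, 512, 28, 28)

l_content = 0.5 * torch.sum((C_c - T_c) ** 2)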

Style Loss

The style loss is measured over feature maps extracted from several intermediate layers.

The Gram matrix can measure the correlations that characterize style: for a real matrix \(X\), the matrix \(XX^T\) is the Gram matrix of the row vectors of \(X\).
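For example, each entry \((XX^T)_{ij}\) is the dot product of rows \(i\) and \(j\):

\(X=\begin{pmatrix}1&2\\3&4\end{pmatrix},\qquad XX^T=\begin{pmatrix}5&11\\11&25\end{pmatrix}\)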

\(l_{style}=\sum_i w_i (T_{s,i}-S_{s,i})^2\), where \(T_{s,i}\) and \(S_{s,i}\) are the Gram matrices of the target and style features at layer \(i\), and \(w_i\) is that layer's weight.

The total loss function:

\(L_{total}(S,C,T)=\alpha\, l_{content}(C,T)+\beta\, l_{style}(S,T)\)


Code
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np

import torch
import torch.optim as optim
from torchvision import transforms, models

vgg = models.vgg19(pretrained=True).features    # pretrained VGG19; .features keeps only the convolutional part (no fully connected layers)

for param in vgg.parameters():
    param.requires_grad_(False)    # freeze VGG's parameters; only the target image is optimized

Define a function to load and preprocess an image:

def load_img(path, max_size=400, shape=None):
    img = Image.open(path).convert('RGB')

    if max(img.size) > max_size:    # cap the longer side of the image
        size = max_size
    else:
        size = max(img.size)

    if shape is not None:
        size = shape
    transform = transforms.Compose([
        transforms.Resize(size),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406),
                             (0.229, 0.224, 0.225))
    ])
    '''keep only the 3 RGB channels (drops any alpha channel) and add a batch dimension'''
    img = transform(img)[:3, :, :].unsqueeze(0)
    return img

Load the images:

content = load_img('./images/turtle.jpg')
style = load_img('./images/wave.jpg', shape=content.shape[-2:])    # give both images the same size

'''convert a tensor back into an image that plt can display'''
def im_convert(tensor):
    img = tensor.clone().detach()
    img = img.numpy().squeeze()
    img = img.transpose(1,2,0)
    img = img * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))
    img = img.clip(0,1)
    return img

The images used are shown below (left: the Content Image, right: the Style Image):
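The original post displayed the two images here; a minimal sketch to reproduce that side-by-side view with the functions defined above:

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(im_convert(content))
ax1.set_title('Content Image')
ax2.imshow(im_convert(style))
ax2.set_title('Style Image')
plt.show()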

定義幾個待會要用到的函數

def get_features(img, model, layers=None):
    '''run img through model and collect the feature maps at the chosen layers'''
    if layers is None:
        layers = {
            '0': 'conv1_1',
            '5': 'conv2_1',
            '10': 'conv3_1',
            '19': 'conv4_1',
            '21': 'conv4_2',    # content layer
            '28': 'conv5_1'
        }
    
    features = {}
    x = img
    for name, layer in model._modules.items():
        x = layer(x)
        if name in layers:
            features[layers[name]] = x
    
    return features

def gram_matrix(tensor):
    '''compute the Gram matrix of a feature map'''
    _, d, h, w = tensor.size()  # first dimension is the batch size

    tensor = tensor.view(d, h*w)    # flatten each of the d feature maps into a row vector

    gram = torch.mm(tensor, tensor.t())

    return gram
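A quick illustrative check (the tensor is a random stand-in): a feature map of shape \((1, d, h, w)\) yields a \(d \times d\) Gram matrix:

feat = torch.randn(1, 64, 32, 32)
print(gram_matrix(feat).shape)    # torch.Size([64, 64])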

content_features = get_features(content, vgg)
style_features = get_features(style, vgg)

# precompute the style image's Gram matrices; they stay fixed during training
style_grams = {layer: gram_matrix(style_features[layer]) for layer in style_features}

# start the target from a copy of the content image and optimize its pixels directly
target = content.clone().requires_grad_(True)

'''weights for the individual style layers'''
style_weights = {
    'conv1_1': 1,
    'conv2_1': 0.8,
    'conv3_1': 0.5,
    'conv4_1': 0.3,
    'conv5_1': 0.1,
}
'''weights for the two loss terms (alpha and beta in the total loss)'''
content_weight = 1
style_weight = 1e6

The training loop

show_every = 400
optimizer = optim.Adam([target], lr=0.003)
steps = 2000

for ii in range(steps):
    target_features = get_features(target, vgg)
    
    content_loss = torch.mean((target_features['conv4_2'] - content_features['conv4_2'])**2)   # content loss at conv4_2
    style_loss = 0
    '''accumulate the Gram-matrix loss from every style layer'''
    for layer in style_weights:
        target_feature = target_features[layer]
        target_gram = gram_matrix(target_feature)
        _, d, h, w = target_feature.shape
        style_gram = style_grams[layer]
        layer_style_loss = style_weights[layer] * torch.mean((target_gram - style_gram)**2)
        style_loss += layer_style_loss/(d*h*w)     # normalize by the feature map size before adding to the total
        
    total_loss = content_weight * content_loss + style_weight * style_loss
    
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    
    if ii % show_every == 0:
        print('Total Loss:',total_loss.item())
        plt.imshow(im_convert(target))
        plt.show()

Comparing the input image with the final blended result:
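The original post showed the comparison as an image; a minimal sketch to reproduce it:

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(im_convert(content))
ax1.set_title('Content Image')
ax2.imshow(im_convert(target))
ax2.set_title('Target (stylized) Image')
plt.show()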

The result is not optimal yet; there is still room for improvement.

References:
  1. Image Style Transfer Using Convolutional Neural Networks (paper)
  2. Udacity: PyTorch Scholarship Challenge

