A Walkthrough of PyTorch's Official VGG Implementation


PyTorch already ships implementations of several classic network models in torchvision, and VGG is one of them.

Where is the VGG code?

You can find the file at a path like this one:

D:\Python\Anaconda3\envs\torch\lib\site-packages\torchvision\models\vgg.py

The part of the path before `envs` depends on where you installed Anaconda.

It is called like this:

import torchvision.models as models
vgg16 = models.vgg16(pretrained=True)  # VGG16 with pretrained weights

You can also hover over the `vgg16` symbol and Ctrl+click it to jump straight to that file.
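As a quick usage sketch (assuming a torchvision version that still exposes the pretrained argument shown above), you can switch the model to evaluation mode and run it on a dummy batch:

import torch
import torchvision.models as models

vgg16 = models.vgg16(pretrained=True)   # downloads the ImageNet weights on first use
vgg16.eval()

x = torch.randn(1, 3, 224, 224)         # dummy batch: 1 image, 3 channels, 224×224
with torch.no_grad():
    logits = vgg16(x)
print(logits.shape)                      # torch.Size([1, 1000])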

Complete code

I have annotated the file; the complete code is as follows:

import torch
import torch.nn as nn
from .utils import load_state_dict_from_url

# ------------------------------------------------------------------------------
# Public interface
__all__ = [
    'VGG', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn',
    'vgg19_bn', 'vgg19',
]

# ------------------------------------------------------------------------------
# Download URLs for the pretrained weights
model_urls = {
    'vgg11': 'https://download.pytorch.org/models/vgg11-bbd30ac9.pth',
    'vgg13': 'https://download.pytorch.org/models/vgg13-c768596a.pth',
    'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
    'vgg19': 'https://download.pytorch.org/models/vgg19-dcbb9e9d.pth',
    'vgg11_bn': 'https://download.pytorch.org/models/vgg11_bn-6002323d.pth',
    'vgg13_bn': 'https://download.pytorch.org/models/vgg13_bn-abd245e5.pth',
    'vgg16_bn': 'https://download.pytorch.org/models/vgg16_bn-6c64b313.pth',
    'vgg19_bn': 'https://download.pytorch.org/models/vgg19_bn-c79401a0.pth',
}

# ------------------------------------------------------------------------------

class VGG(nn.Module):
    '''
    Generic VGG network.
    `features` is the feature-extraction part of the network (a container of layers).
    The default number of classes is 1000.
    '''
    def __init__(self, features, num_classes=1000, init_weights=True):
        super(VGG, self).__init__()

        # Feature-extraction part
        self.features = features

        # Adaptive average pooling: pool the feature map to 7×7
        self.avgpool = nn.AdaptiveAvgPool2d((7, 7))

        # Classifier part
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),   # 512*7*7 --> 4096
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),          # 4096 --> 4096
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, num_classes),   # 4096 --> 1000
        )

        # Weight initialization
        if init_weights:
            self._initialize_weights()

    def forward(self, x):

        # Feature extraction
        x = self.features(x)
        # Adaptive average pooling
        x = self.avgpool(x)
        # Flatten the feature map into a vector
        x = torch.flatten(x, 1)
        # Classifier output
        x = self.classifier(x)
        return x

    def _initialize_weights(self):
        '''
        Weight initialization
        '''
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # Conv layers use Kaiming initialization
                nn.init.kaiming_normal_(
                    m.weight, mode='fan_out', nonlinearity='relu')
                # Biases are initialized to 0
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            # BatchNorm weights are initialized to 1, biases to 0
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            # Fully connected layers use normal(0, 0.01) weights and zero biases
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)


# ------------------------------------------------------------------------------
def make_layers(cfg, batch_norm=False):
    '''
    Build the list of layers from a configuration table.
    '''
    layers = []  # initialize the layer list

    in_channels = 3  # the input is a 3-channel image

    # Walk through the configuration list
    for v in cfg:
        if v == 'M':  # add a pooling layer
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:  # add a convolutional layer

            # 3×3 convolution
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)

            # conv --> batch norm (optional) --> ReLU
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]

            # the next layer's input channels equal this layer's output channels
            in_channels = v

    # return the layer list wrapped in an nn.Sequential
    return nn.Sequential(*layers)


# Network configuration table
'''
Numbers are output channel counts, e.g. 64 means the layer outputs a 64-channel feature map, corresponding to Conv3-64 in the paper;
'M' stands for max pooling, corresponding to maxpool in the paper.
Configuration A-LRN uses local response normalization and configuration C contains 1×1 convolutions;
these two are special cases and are therefore left out of this table.
'''
cfgs = {
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}

# ------------------------------------------------------------------------------

def _vgg(arch, cfg, batch_norm, pretrained, progress, **kwargs):
    '''
    Generic constructor: builds the model and optionally loads pretrained weights.
    '''
    if pretrained:
        kwargs['init_weights'] = False
    model = VGG(make_layers(cfgs[cfg], batch_norm=batch_norm), **kwargs)

    if pretrained:
        state_dict = load_state_dict_from_url(model_urls[arch],
                                              progress=progress)
        model.load_state_dict(state_dict)
    return model

# ------------------------------------------------------------------------------

def vgg11(pretrained=False, progress=True, **kwargs):
    r"""VGG 11-layer model (configuration "A") from
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg11', 'A', False, pretrained, progress, **kwargs)


def vgg11_bn(pretrained=False, progress=True, **kwargs):
    r"""VGG 11-layer model (configuration "A") with batch normalization
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg11_bn', 'A', True, pretrained, progress, **kwargs)


def vgg13(pretrained=False, progress=True, **kwargs):
    r"""VGG 13-layer model (configuration "B")
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg13', 'B', False, pretrained, progress, **kwargs)


def vgg13_bn(pretrained=False, progress=True, **kwargs):
    r"""VGG 13-layer model (configuration "B") with batch normalization
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg13_bn', 'B', True, pretrained, progress, **kwargs)


def vgg16(pretrained=False, progress=True, **kwargs):
    r"""VGG 16-layer model (configuration "D")
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg16', 'D', False, pretrained, progress, **kwargs)


def vgg16_bn(pretrained=False, progress=True, **kwargs):
    r"""VGG 16-layer model (configuration "D") with batch normalization
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg16_bn', 'D', True, pretrained, progress, **kwargs)


def vgg19(pretrained=False, progress=True, **kwargs):
    r"""VGG 19-layer model (configuration "E")
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg19', 'E', False, pretrained, progress, **kwargs)


def vgg19_bn(pretrained=False, progress=True, **kwargs):
    r"""VGG 19-layer model (configuration 'E') with batch normalization
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg19_bn', 'E', True, pretrained, progress, **kwargs)

How it works

Below is a brief analysis of how this file implements the VGG11 through VGG19 variants described in the VGG paper.

Overall structure

The overall logic of the code is as follows: the cfgs table records the structure of each variant; make_layers turns a configuration into the feature-extraction layers; the VGG class wraps those layers with adaptive pooling, the classifier and weight initialization; _vgg ties everything together and loads pretrained weights; and vgg11 through vgg19_bn are thin per-variant wrappers around _vgg.

A quick look back at the paper

Looking at the per-variant parameters in the paper, configurations A-LRN and C stand out: the former uses local response normalization and the latter contains 1×1 convolutions. The remaining configurations are built entirely from common components such as convolution, pooling and fully connected layers, so the PyTorch maintainers chose A, B, D and E as the full set of networks exposed through the VGG entry points.

Walking through the implementation

Based on the network structures and their parameters, a cfgs dictionary is built to hold these configuration tables:

cfgs = {
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}

Here, the numbers are the output channel counts of the convolutional layers, and 'M' marks a max-pooling layer.
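As a small sanity check of the naming: configuration 'D' contains 13 convolution entries, and together with the 3 fully connected layers of the classifier that gives the 16 weight layers of VGG16. The sketch below simply counts them (the list is copied from the cfgs dict above):

cfg_d = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
         512, 512, 512, 'M', 512, 512, 512, 'M']

num_conv = sum(1 for v in cfg_d if v != 'M')    # 13 convolutional layers
num_fc = 3                                      # three fully connected layers
print(num_conv + num_fc)                        # 16 --> hence "VGG16"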

The make_layers function then turns one entry of cfgs into a Sequential container of layers automatically.

def make_layers(cfg, batch_norm=False):
    '''
    Build the list of layers from a configuration table.
    '''
    layers = []  # initialize the layer list

    in_channels = 3  # the input is a 3-channel image

    # Walk through the configuration list
    for v in cfg:
        if v == 'M':  # add a pooling layer
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:  # add a convolutional layer

            # 3×3 convolution
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)

            # conv --> batch norm (optional) --> ReLU
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]

            # the next layer's input channels equal this layer's output channels
            in_channels = v

    # return the layer list wrapped in an nn.Sequential
    return nn.Sequential(*layers)

All convolutions here use 3×3 kernels, and batch normalization is an optional extra.
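As a quick sanity check, the sketch below builds the configuration-'A' feature extractor and pushes a dummy 224×224 image through it. Importing make_layers and cfgs directly from the file is an assumption that depends on your torchvision version; copying them into your own script works just as well.

import torch
# make_layers and cfgs are module-level names in the file discussed above;
# importing them like this assumes a torchvision version that still exposes them.
from torchvision.models.vgg import make_layers, cfgs

features = make_layers(cfgs['A'], batch_norm=False)
x = torch.randn(1, 3, 224, 224)   # dummy 3-channel image
out = features(x)                 # five 'M' stages halve the spatial size: 224 -> 7
print(out.shape)                  # torch.Size([1, 512, 7, 7])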

The Sequential returned by make_layers is exactly the feature-extraction part of the network. The VGG class then builds a complete, generic VGG model around it.

class VGG(nn.Module):
    '''
    Generic VGG network.
    `features` is the feature-extraction part of the network (a container of layers).
    The default number of classes is 1000.
    '''
    def __init__(self, features, num_classes=1000, init_weights=True):
        super(VGG, self).__init__()

        # Feature-extraction part
        self.features = features

        # Adaptive average pooling: pool the feature map to 7×7
        self.avgpool = nn.AdaptiveAvgPool2d((7, 7))

        # Classifier part
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),   # 512*7*7 --> 4096
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),          # 4096 --> 4096
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, num_classes),   # 4096 --> 1000
        )

        # Weight initialization
        if init_weights:
            self._initialize_weights()

    def forward(self, x):

        # Feature extraction
        x = self.features(x)
        # Adaptive average pooling
        x = self.avgpool(x)
        # Flatten the feature map into a vector
        x = torch.flatten(x, 1)
        # Classifier output
        x = self.classifier(x)
        return x

    def _initialize_weights(self):
        '''
        Weight initialization
        '''
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # Conv layers use Kaiming initialization
                nn.init.kaiming_normal_(
                    m.weight, mode='fan_out', nonlinearity='relu')
                # Biases are initialized to 0
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            # BatchNorm weights are initialized to 1, biases to 0
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            # Fully connected layers use normal(0, 0.01) weights and zero biases
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)

Here the PyTorch maintainers made one small change: after the feature-extraction stage, an adaptive average-pooling layer pools the feature map down to a fixed 7×7 spatial size. This guarantees that the following flatten step always produces a vector of the same length, so the network can accept input images of arbitrary size (the paper uses 224×224 inputs). The flattened vector then goes through a classifier made of three fully connected layers, with dropout applied after the first two. Weight initialization is also defined in this class.
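A minimal sketch illustrating this point: thanks to the adaptive pooling, two different input resolutions yield the same output shape (only the feature-map size before the pooling differs).

import torch
import torchvision.models as models

model = models.vgg16(pretrained=False)
model.eval()

with torch.no_grad():
    out_224 = model(torch.randn(1, 3, 224, 224))
    out_320 = model(torch.randn(1, 3, 320, 320))

print(out_224.shape, out_320.shape)   # both torch.Size([1, 1000])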

To cover all of VGG11 through VGG19, a single _vgg helper is defined; it builds the model and, when requested, loads the pretrained weights.

def _vgg(arch, cfg, batch_norm, pretrained, progress, **kwargs):
    '''
    Generic constructor: builds the model and optionally loads pretrained weights.
    '''
    if pretrained:
        kwargs['init_weights'] = False
    model = VGG(make_layers(cfgs[cfg], batch_norm=batch_norm), **kwargs)

    if pretrained:
        state_dict = load_state_dict_from_url(model_urls[arch],
                                              progress=progress)
        model.load_state_dict(state_dict)
    return model
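A hedged sketch of how the **kwargs plumbing can be used: extra keyword arguments such as num_classes are passed straight through _vgg into the VGG class. Changing num_classes only makes sense with pretrained=False, because the pretrained classifier weights are shaped for 1000 classes.

import torchvision.models as models

model = models.vgg16(pretrained=False, num_classes=10)
print(model.classifier[6])   # the last Linear layer: in_features=4096, out_features=10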

The download URLs for the pretrained weights are defined in model_urls. If your network connection is poor, the download triggered on first use will be slow. If you have already downloaded the corresponding weight file, you can also point the code at the local copy (or load it yourself, as in the sketch after the dict below) instead of downloading it again.

model_urls = {
    'vgg11': 'https://download.pytorch.org/models/vgg11-bbd30ac9.pth',
    'vgg13': 'https://download.pytorch.org/models/vgg13-c768596a.pth',
    'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
    'vgg19': 'https://download.pytorch.org/models/vgg19-dcbb9e9d.pth',
    'vgg11_bn': 'https://download.pytorch.org/models/vgg11_bn-6002323d.pth',
    'vgg13_bn': 'https://download.pytorch.org/models/vgg13_bn-abd245e5.pth',
    'vgg16_bn': 'https://download.pytorch.org/models/vgg16_bn-6c64b313.pth',
    'vgg19_bn': 'https://download.pytorch.org/models/vgg19_bn-c79401a0.pth',
}
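If you already have the weight file on disk, a minimal sketch of loading it yourself looks like this (the local path below is hypothetical; replace it with wherever you saved the .pth file):

import torch
import torchvision.models as models

model = models.vgg16(pretrained=False)
# hypothetical local path to the downloaded checkpoint
state_dict = torch.load('./weights/vgg16-397923af.pth', map_location='cpu')
model.load_state_dict(state_dict)
model.eval()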

Finally, each variant gets its own small entry function that builds the network. For example, vgg16 comes in two flavours, without and with batch-normalization layers:

def vgg16(pretrained=False, progress=True, **kwargs):
    r"""VGG 16-layer model (configuration "D")
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg16', 'D', False, pretrained, progress, **kwargs)


def vgg16_bn(pretrained=False, progress=True, **kwargs):
    r"""VGG 16-layer model (configuration "D") with batch normalization
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    return _vgg('vgg16_bn', 'D', True, pretrained, progress, **kwargs)

Takeaways

  • nn.Sequential makes it quick and convenient to define a custom network, but only for architectures that are a plain linear stack of layers
  • Defining the skeleton of a network through a configuration table is a modelling pattern well worth borrowing
  • Learning to spot the similarities between related network structures and then distilling a shared constructor saves a lot of development time, especially when you need to design many comparison networks; a small sketch of this pattern follows the list.
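A hypothetical mini example of the same configuration-table idea, applied to simple MLPs rather than VGG (all names here are made up for illustration):

import torch.nn as nn

mlp_cfgs = {
    'small': [128, 64],
    'large': [512, 256, 128],
}

def make_mlp(cfg, in_features=784, num_classes=10):
    layers = []
    for width in cfg:
        layers += [nn.Linear(in_features, width), nn.ReLU(inplace=True)]
        in_features = width
    layers.append(nn.Linear(in_features, num_classes))
    return nn.Sequential(*layers)

model = make_mlp(mlp_cfgs['large'])   # 784 --> 512 --> 256 --> 128 --> 10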

