pytorch: a CIFAR-10 classification network architecture


CIFAR-10 consists of 32x32 three-channel color images across 10 classes. Here we build the classifier using a residual network (ResNet).

Network structure:

 

Layer 1: a convolution, batch normalization, and ReLU activation: 32x32x3 -> 32x32x16

Layer 2: a stack of residual blocks

How a residual stage is constructed:

           if stride != 1 or in_channel != out_channel, build a downsample branch that reduces the spatial size and matches the channel count

           run the first residual block of the stage with this downsample branch passed in

           then stack several more residual blocks in a row (the resulting shape flow is traced below)
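For concreteness, here is the shape flow through the whole network, assuming layers=[2, 2, 2] (the stack depth is an assumption; the original does not fix it):

# input                                    3 x 32 x 32
# conv + bn + relu                        16 x 32 x 32
# stage 1: 16 channels, stride 1          16 x 32 x 32
# stage 2: 32 channels, first stride 2    32 x 16 x 16   (downsample branch)
# stage 3: 64 channels, first stride 2    64 x 8 x 8     (downsample branch)
# avg_pool(8)                             64 x 1 x 1
# fc                                      10 class scores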

from torchvision import transforms
from torch import nn
# First, define the data transforms applied to the images

train_transform = transforms.Compose([
    transforms.Resize(40), # resize to 40x40 (Scale is deprecated; Resize is the current name)
    transforms.RandomHorizontalFlip(), # random left-right flip
    transforms.RandomCrop(32), # random 32x32 crop out of the 40x40 image
    transforms.ToTensor(), # convert to a tensor with values in [0, 1]
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]) # (x - mean) / std, mapping values into [-1, 1]
])

test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
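A minimal sketch of plugging these transforms into CIFAR-10 loaders (the data root './data' and batch size 128 are assumptions, not values from the original):

import torch
from torchvision import datasets

# download CIFAR-10 and apply the transforms defined above
train_set = datasets.CIFAR10(root='./data', train=True, transform=train_transform, download=True)
test_set = datasets.CIFAR10(root='./data', train=False, transform=test_transform, download=True)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False)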

# 3x3 convolution that preserves spatial size (padding=1); bias omitted since BatchNorm follows
def conv3x3(in_channels, out_channels, stride=1):
    return nn.Conv2d(in_channels,
                     out_channels,
                     kernel_size=3,
                     stride=stride,
                     padding=1,
                     bias=False)

class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = conv3x3(in_channels, out_channels, stride=stride) # the first conv carries the stride
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(True)
        self.conv2 = conv3x3(out_channels, out_channels, stride=1)
        self.bn2 = nn.BatchNorm2d(out_channels) # a separate BN for the second conv
        self.downsample = downsample

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            residual = self.downsample(x) # project the identity so shapes match
        out += residual # the skip connection
        return self.relu(out)
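A quick sanity check of the block's two paths (a minimal sketch; the downsample branch mirrors what make_block constructs below):

import torch

# identity path: same channels, stride 1, no downsample needed
block = ResidualBlock(16, 16)
print(block(torch.randn(1, 16, 32, 32)).shape) # torch.Size([1, 16, 32, 32])

# projection path: channels change and stride is 2, so a downsample branch is required
down = nn.Sequential(conv3x3(16, 32, stride=2), nn.BatchNorm2d(32))
block = ResidualBlock(16, 32, stride=2, downsample=down)
print(block(torch.randn(1, 16, 32, 32)).shape) # torch.Size([1, 32, 16, 16])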


class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=10):
        super(ResNet, self).__init__()
        self.in_channels = 16
        self.conv = conv3x3(3, 16)
        self.bn = nn.BatchNorm2d(self.in_channels)
        self.relu = nn.ReLU(True)
        self.layers1 = self.make_block(block, 16, layers[0])
        self.layers2 = self.make_block(block, 32, layers[1], stride=2) # 32x32 -> 16x16
        self.layers3 = self.make_block(block, 64, layers[2], stride=2) # 16x16 -> 8x8
        self.avg_pool = nn.AvgPool2d(8)
        self.fc = nn.Linear(64, num_classes)

    def make_block(self, block, out_channels, blocks, stride=1):
        downsample = None
        # if the spatial size or channel count changes, project the identity branch to match
        if stride != 1 or out_channels != self.in_channels:
            downsample = nn.Sequential(conv3x3(self.in_channels, out_channels, stride=stride),
                                       nn.BatchNorm2d(out_channels))
        layers = []
        # only the first block of a stage strides and downsamples
        layers.append(block(self.in_channels, out_channels, stride=stride, downsample=downsample))
        self.in_channels = out_channels # track the running channel count
        for _ in range(1, blocks):
            layers.append(block(out_channels, out_channels))

        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv(x)
        out = self.bn(out)
        out = self.relu(out)
        out = self.layers1(out)
        out = self.layers2(out)
        out = self.layers3(out)
        out = self.avg_pool(out)
        out = out.view(out.size(0), -1) # flatten 64x1x1 into a 64-d vector before the linear layer
        out = self.fc(out)

        return out
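A minimal usage sketch (the depth [2, 2, 2], the Adam optimizer, and the learning rate are assumptions; train_loader comes from the data-loading sketch above):

import torch

model = ResNet(ResidualBlock, [2, 2, 2])

# sanity check: a dummy batch of four 32x32 RGB images -> ten class scores each
print(model(torch.randn(4, 3, 32, 32)).shape) # torch.Size([4, 10])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for images, labels in train_loader: # one epoch over the training set
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()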

 

