[Cat & Dog Dataset] Using a Pretrained ResNet18 Model


Dataset download:

Link: https://pan.baidu.com/s/1l1AnBgkAAEhh0vI5_loWKw
Extraction code: 2xq4

Creating the dataset: https://www.cnblogs.com/xiximayou/p/12398285.html

Reading the dataset: https://www.cnblogs.com/xiximayou/p/12422827.html

Training: https://www.cnblogs.com/xiximayou/p/12448300.html

Saving the model and resuming training: https://www.cnblogs.com/xiximayou/p/12452624.html

Loading the saved model and testing: https://www.cnblogs.com/xiximayou/p/12459499.html

Splitting off a validation set and validating while training: https://www.cnblogs.com/xiximayou/p/12464738.html

Using a learning-rate decay schedule and testing while training: https://www.cnblogs.com/xiximayou/p/12468010.html

Visualizing training and testing with TensorBoard: https://www.cnblogs.com/xiximayou/p/12482573.html

Receiving arguments from the command line: https://www.cnblogs.com/xiximayou/p/12488662.html

Measuring the model with top-1 and top-5 accuracy: https://www.cnblogs.com/xiximayou/p/12489069.html

The relationship between epoch, batch size, and step: https://www.cnblogs.com/xiximayou/p/12405485.html

 

Up to now we have always trained the model from scratch; in this section we use a pretrained model for training instead.

All that is needed is to add the following to train.py:

  if baseline:
    model = torchvision.models.resnet18(pretrained=False)
    model.fc = nn.Linear(model.fc.in_features, 2, bias=False)
  else:
    print("Using the pretrained resnet18 model")
    model = torchvision.models.resnet18(pretrained=True)
    for i in model.state_dict():
      print(i)
    model.fc = nn.Linear(model.fc.in_features, 2, bias=False)
    print(model)
Using the pretrained resnet18 model
conv1.weight
bn1.weight
bn1.bias
bn1.running_mean
bn1.running_var
bn1.num_batches_tracked
layer1.0.conv1.weight
layer1.0.bn1.weight
layer1.0.bn1.bias
layer1.0.bn1.running_mean
layer1.0.bn1.running_var
layer1.0.bn1.num_batches_tracked
layer1.0.conv2.weight
layer1.0.bn2.weight
layer1.0.bn2.bias
layer1.0.bn2.running_mean
layer1.0.bn2.running_var
layer1.0.bn2.num_batches_tracked
layer1.1.conv1.weight
layer1.1.bn1.weight
layer1.1.bn1.bias
layer1.1.bn1.running_mean
layer1.1.bn1.running_var
layer1.1.bn1.num_batches_tracked
layer1.1.conv2.weight
layer1.1.bn2.weight
layer1.1.bn2.bias
layer1.1.bn2.running_mean
layer1.1.bn2.running_var
layer1.1.bn2.num_batches_tracked
layer2.0.conv1.weight
layer2.0.bn1.weight
layer2.0.bn1.bias
layer2.0.bn1.running_mean
layer2.0.bn1.running_var
layer2.0.bn1.num_batches_tracked
layer2.0.conv2.weight
layer2.0.bn2.weight
layer2.0.bn2.bias
layer2.0.bn2.running_mean
layer2.0.bn2.running_var
layer2.0.bn2.num_batches_tracked
layer2.0.downsample.0.weight
layer2.0.downsample.1.weight
layer2.0.downsample.1.bias
layer2.0.downsample.1.running_mean
layer2.0.downsample.1.running_var
layer2.0.downsample.1.num_batches_tracked
layer2.1.conv1.weight
layer2.1.bn1.weight
layer2.1.bn1.bias
layer2.1.bn1.running_mean
layer2.1.bn1.running_var
layer2.1.bn1.num_batches_tracked
layer2.1.conv2.weight
layer2.1.bn2.weight
layer2.1.bn2.bias
layer2.1.bn2.running_mean
layer2.1.bn2.running_var
layer2.1.bn2.num_batches_tracked
layer3.0.conv1.weight
layer3.0.bn1.weight
layer3.0.bn1.bias
layer3.0.bn1.running_mean
layer3.0.bn1.running_var
layer3.0.bn1.num_batches_tracked
layer3.0.conv2.weight
layer3.0.bn2.weight
layer3.0.bn2.bias
layer3.0.bn2.running_mean
layer3.0.bn2.running_var
layer3.0.bn2.num_batches_tracked
layer3.0.downsample.0.weight
layer3.0.downsample.1.weight
layer3.0.downsample.1.bias
layer3.0.downsample.1.running_mean
layer3.0.downsample.1.running_var
layer3.0.downsample.1.num_batches_tracked
layer3.1.conv1.weight
layer3.1.bn1.weight
layer3.1.bn1.bias
layer3.1.bn1.running_mean
layer3.1.bn1.running_var
layer3.1.bn1.num_batches_tracked
layer3.1.conv2.weight
layer3.1.bn2.weight
layer3.1.bn2.bias
layer3.1.bn2.running_mean
layer3.1.bn2.running_var
layer3.1.bn2.num_batches_tracked
layer4.0.conv1.weight
layer4.0.bn1.weight
layer4.0.bn1.bias
layer4.0.bn1.running_mean
layer4.0.bn1.running_var
layer4.0.bn1.num_batches_tracked
layer4.0.conv2.weight
layer4.0.bn2.weight
layer4.0.bn2.bias
layer4.0.bn2.running_mean
layer4.0.bn2.running_var
layer4.0.bn2.num_batches_tracked
layer4.0.downsample.0.weight
layer4.0.downsample.1.weight
layer4.0.downsample.1.bias
layer4.0.downsample.1.running_mean
layer4.0.downsample.1.running_var
layer4.0.downsample.1.num_batches_tracked
layer4.1.conv1.weight
layer4.1.bn1.weight
layer4.1.bn1.bias
layer4.1.bn1.running_mean
layer4.1.bn1.running_var
layer4.1.bn1.num_batches_tracked
layer4.1.conv2.weight
layer4.1.bn2.weight
layer4.1.bn2.bias
layer4.1.bn2.running_mean
layer4.1.bn2.running_var
layer4.1.bn2.num_batches_tracked
fc.weight
fc.bias
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=2, bias=False)
)
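
As a quick sanity check on the new two-class head, here is a minimal sketch of my own (not part of the original train.py), assuming the standard 224×224 ImageNet input size:

import torch

# Hypothetical sanity check: push one dummy 224x224 RGB image through the
# modified network and confirm the output has two logits (cat vs. dog).
dummy = torch.randn(1, 3, 224, 224)
model.eval()
with torch.no_grad():
    out = model(dummy)
print(out.shape)  # expected: torch.Size([1, 2])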

Next, let's look at how to freeze certain layers so that their gradients are not updated during training.

First, print out some information to inspect the structure:

i = 0
for child in model.children():
  i += 1
  print("child {}".format(str(i)))
  print(child)
child 1
Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
child 2
BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
child 3
ReLU(inplace=True)
child 4
MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
child 5
Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (1): BasicBlock(
    (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)
child 6
Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)
child 7
Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)
child 8
Sequential(
  (0): BasicBlock(
    (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (downsample): Sequential(
      (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (1): BasicBlock(
    (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
    (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)
child 9
AdaptiveAvgPool2d(output_size=(1, 1))
child 10
Linear(in_features=512, out_features=2, bias=False)

We freeze the first 7 children and only update the parameters of the 8th, 9th, and 10th. This can be defined as follows:

    print("使用預訓練的resnet18模型")
    model=torchvision.models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features,2,bias=False)
    i=0
    for child in model.children():
      i+=1
      #print("第{}個child".format(str(i)))
      #print(child)
      if i<=7:
        for param in child.parameters():
          param.requires_grad=False
    #我們打印下是否是設置成功
    for name, param in model.named_parameters():
      if param.requires_grad:
        print("需要梯度:", name)
      else:
        print("不需要梯度:", name)

Next, we also need to filter the parameters that should not be updated out of the optimizer:

  optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.1, momentum=0.9,
                            weight_decay=1*1e-4)

The result:

Using the pretrained resnet18 model
no grad: conv1.weight
no grad: bn1.weight
no grad: bn1.bias
no grad: layer1.0.conv1.weight
no grad: layer1.0.bn1.weight
no grad: layer1.0.bn1.bias
no grad: layer1.0.conv2.weight
no grad: layer1.0.bn2.weight
no grad: layer1.0.bn2.bias
no grad: layer1.1.conv1.weight
no grad: layer1.1.bn1.weight
no grad: layer1.1.bn1.bias
no grad: layer1.1.conv2.weight
no grad: layer1.1.bn2.weight
no grad: layer1.1.bn2.bias
no grad: layer2.0.conv1.weight
no grad: layer2.0.bn1.weight
no grad: layer2.0.bn1.bias
no grad: layer2.0.conv2.weight
no grad: layer2.0.bn2.weight
no grad: layer2.0.bn2.bias
no grad: layer2.0.downsample.0.weight
no grad: layer2.0.downsample.1.weight
no grad: layer2.0.downsample.1.bias
no grad: layer2.1.conv1.weight
no grad: layer2.1.bn1.weight
no grad: layer2.1.bn1.bias
no grad: layer2.1.conv2.weight
no grad: layer2.1.bn2.weight
no grad: layer2.1.bn2.bias
no grad: layer3.0.conv1.weight
no grad: layer3.0.bn1.weight
no grad: layer3.0.bn1.bias
no grad: layer3.0.conv2.weight
no grad: layer3.0.bn2.weight
no grad: layer3.0.bn2.bias
no grad: layer3.0.downsample.0.weight
no grad: layer3.0.downsample.1.weight
no grad: layer3.0.downsample.1.bias
no grad: layer3.1.conv1.weight
no grad: layer3.1.bn1.weight
no grad: layer3.1.bn1.bias
no grad: layer3.1.conv2.weight
no grad: layer3.1.bn2.weight
no grad: layer3.1.bn2.bias
requires grad: layer4.0.conv1.weight
requires grad: layer4.0.bn1.weight
requires grad: layer4.0.bn1.bias
requires grad: layer4.0.conv2.weight
requires grad: layer4.0.bn2.weight
requires grad: layer4.0.bn2.bias
requires grad: layer4.0.downsample.0.weight
requires grad: layer4.0.downsample.1.weight
requires grad: layer4.0.downsample.1.bias
requires grad: layer4.1.conv1.weight
requires grad: layer4.1.bn1.weight
requires grad: layer4.1.bn1.bias
requires grad: layer4.1.conv2.weight
requires grad: layer4.1.bn2.weight
requires grad: layer4.1.bn2.bias
requires grad: fc.weight
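
To double-check that the freezing worked as intended, a small sketch of my own that counts trainable versus frozen parameters:

# Count trainable vs. frozen parameters after freezing the first 7 children
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print("trainable parameters:", trainable)
print("frozen parameters:", frozen)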

Extension: what if the model we define ourselves does not match the pretrained model — how should the parameters be loaded then?

Taking resnet50 as an example, we define a new convolutional neural network:

# coding=UTF-8
import torchvision.models as models
import torch
import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo
from torchvision.models.resnet import Bottleneck  # needed to build the CNN below
 
class CNN(nn.Module):
 
    def __init__(self, block, layers, num_classes=2):
        self.inplanes = 64
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        # Add a new transposed-convolution layer
        self.convtranspose1 = nn.ConvTranspose2d(2048, 2048, kernel_size=3, stride=1, padding=1,
                                                 output_padding=0, groups=1, bias=False, dilation=1)
        # Add a new max-pooling layer
        self.maxpool2 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
        # Drop the original fc layer and add a new fclass layer
        self.fclass = nn.Linear(2048, num_classes)
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
 
    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * block.expansion),
            )
 
        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))
 
        return nn.Sequential(*layers)
 
    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
 
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
 
        x = self.avgpool(x)
        # forward pass through the newly added layers
        x = self.convtranspose1(x)
        x = self.maxpool2(x)
        x = x.view(x.size(0), -1)
        x = self.fclass(x)
        return x
 
# Load the models
resnet50 = models.resnet50(pretrained=True)
cnn = CNN(Bottleneck, [3, 4, 6, 3])
# Read the parameters
# Take the pretrained model's parameters
pretrained_dict = resnet50.state_dict()
# Take this model's parameters
model_dict = cnn.state_dict()
# Remove the keys in pretrained_dict that are not in model_dict
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# Update the existing model_dict
model_dict.update(pretrained_dict)
# Load the state_dict we actually need
cnn.load_state_dict(model_dict)
# print(resnet50)
print(cnn)
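
As an aside (my own note, not from the referenced posts), recent PyTorch versions can do a similar partial load in one call with strict=False; size mismatches still raise an error, so the dictionary-filtering approach above remains the safer choice when layer shapes differ:

# Alternative sketch, assuming a recent PyTorch: strict=False skips keys that
# exist on only one side instead of raising.
result = cnn.load_state_dict(resnet50.state_dict(), strict=False)
print("missing keys:", result.missing_keys)        # layers that are new in CNN
print("unexpected keys:", result.unexpected_keys)  # resnet50 layers CNN does not have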

Below are some excerpted approaches for initializing a network with part of a pretrained model:

Method 1: for the layers whose structure matches between your own network and the pretrained one, initialize them in bulk from the corresponding pretrained layers

model_dict = model.state_dict()                                      # take your own network's parameter dict
pretrained_dict = torch.load("I:/迅雷下載/alexnet-owt-4df8aa71.pth")  # load the pretrained network's parameter dict
# Collect the pretrained network's parameter keys
keys = []
for k, v in pretrained_dict.items():
    keys.append(k)
i = 0
 
# For layers with the same structure in both networks, initialize them with the corresponding pretrained parameters
for k, v in model_dict.items():
    if v.size() == pretrained_dict[keys[i]].size():
        model_dict[k] = pretrained_dict[keys[i]]
        #print(model_dict[k])
        i = i + 1
model.load_state_dict(model_dict)
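
Note that the loop above relies on the two state dicts listing their layers in the same order. A variant of my own (a hypothetical sketch, not from the referenced posts) that matches entries by name and shape instead is often more robust:

# Match entries by name *and* shape rather than by position
compatible = {k: v for k, v in pretrained_dict.items()
              if k in model_dict and v.size() == model_dict[k].size()}
model_dict.update(compatible)
model.load_state_dict(model_dict)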

Method 2: for the layers whose structure matches, initialize them layer by layer

# Define your own network, here called CNN
model = CNN()
model_dict = model.state_dict()                                      # take your own network's parameters
 
for k, v in model_dict.items():                                      # check what each layer of your own network is called
    print(k)
 
pretrained_dict = torch.load("I:/迅雷下載/alexnet-owt-4df8aa71.pth")  # load the pretrained network's parameters
for k, v in pretrained_dict.items():                                 # check what each layer of the pretrained network is called
    print(k)
 
 
# Initialize the corresponding layers by assignment
model_dict['conv1.0.weight'] = pretrained_dict['features.0.weight']  # initialize your conv1.0 weights with the pretrained features.0 weights
model_dict['conv1.0.bias'] = pretrained_dict['features.0.bias']      # initialize your conv1.0 bias with the pretrained features.0 bias
 
model_dict['conv2.1.weight'] = pretrained_dict['features.3.weight']
model_dict['conv2.1.bias'] = pretrained_dict['features.3.bias']
 
model_dict['conv3.1.weight'] = pretrained_dict['features.6.weight']
model_dict['conv3.1.bias'] = pretrained_dict['features.6.bias']
 
... ...

 

The next post will cover computing the dataset's standard deviation and variance, which are used when standardizing the data during data augmentation.

 

References:

https://blog.csdn.net/feizai1208917009/article/details/103598233

https://blog.csdn.net/Arthur_Holmes/article/details/103493886

https://blog.csdn.net/whut_ldz/article/details/78845947

