DEX-6: How to convert the Caffe model to a PyTorch model


Working environment: Python 2.7.

File download location: https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/

1. Visualizing the prototxt model file

1) Online visualization

URL: https://ethereon.github.io/netscope/#/editor

Paste the contents of the prototxt file into the left-hand panel, then press Shift+Enter to render the graph (screenshot omitted).

2) Local visualization

First install Graphviz:

(deeplearning2) userdeMacBook-Pro:~ user$ brew install graphviz

Then install pydot:

(deeplearning2) userdeMacBook-Pro:~ user$ pip install pydot

With Caffe already installed, run:

(deeplearning2) userdeMacBook-Pro:face_data user$ python /Users/user/caffe/python/draw_net.py /Users/user/pytorch/face_data/age_train.prototxt age_train.png --rankdir=BT
Drawing net to age_train.png

The arguments mean:

  • Argument 1: the network definition (prototxt file)
  • Argument 2: the path and filename of the image to save
  • Argument 3: --rankdir=x, where x is one of LR, RL, TB, or BT, giving the drawing direction of the network: left-to-right, right-to-left, top-to-bottom, or bottom-to-top. The default is LR.

This is the graph after modifying the input, loss, and accuracy layers (image omitted).

The unmodified version:

(deeplearning2) userdeMacBook-Pro:face_data user$ python /Users/user/caffe/python/draw_net.py /Users/user/pytorch/face_data/age_train.prototxt.txt age_train_2.png --rankdir=BT
Drawing net to age_train_2.png

The resulting graph (image omitted).

2. Notes on the network structure

name: "VGG_ILSVRC_16_layers" # 聲明該網絡結構的名稱
# 下面先是指明使用的數據集
layer {                                         #指明用於訓練的數據,訓練集
  top: "data"
  type: "ImageData"
  top: "label"
  name: "data"
  transform_param {
    mirror: true
    crop_size: 224
    mean_file: "imagenet_mean.binaryproto"
  }
  image_data_param {
    source: "train.txt"                       #指明存放位置
    batch_size: 10
    new_height: 256
    new_width: 256 
  }
  include: { phase: TRAIN }                       # used only during training
}
layer {                                         # data used for validation (the validation set)
  top: "data"
  top: "label"
  name: "data"
  type: "ImageData"
  image_data_param {
    new_height: 256
    new_width: 256
    source: "train.txt"
    batch_size: 10
  }
  transform_param {
    crop_size: 224
    mirror: false
    mean_file: "imagenet_mean.binaryproto"
  }
  include: { 
    phase: TEST
    stage: "test-on-train"
 }
}

layer {                                         # data used for testing (the test set)
  top: "data"
  top: "label"
  name: "data"
  type: "ImageData"
  image_data_param {
    new_height: 256
    new_width: 256
    source: "test.txt"
    batch_size: 10
  }
  transform_param {
    crop_size: 224
    mirror: false
    mean_file: "imagenet_mean.binaryproto"
  }
  include: { 
    phase: TEST
    stage: "test-on-test"
 }
}
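
As an aside, the ImageData and transform_param settings above correspond roughly to the following torchvision transforms. This is only a sketch: Caffe subtracts a pixel-wise mean image (imagenet_mean.binaryproto), which has no one-line torchvision equivalent, so mean subtraction is omitted here.

# coding:utf-8
# Rough PyTorch equivalent of the Caffe data preprocessing above (a sketch;
# the mean-image subtraction from imagenet_mean.binaryproto is omitted).
import torchvision.transforms as T

train_transform = T.Compose([
    T.Resize((256, 256)),        # new_height / new_width: 256
    T.RandomCrop(224),           # crop_size: 224 (random crop in the TRAIN phase)
    T.RandomHorizontalFlip(),    # mirror: true
    T.ToTensor(),                # HWC uint8 -> CHW float in [0, 1]
])

test_transform = T.Compose([
    T.Resize((256, 256)),
    T.CenterCrop(224),           # TEST phases use a center crop, no mirroring
    T.ToTensor(),
])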

# Next comes the network structure itself; the first convolutional layer is annotated as an example

layer {
  bottom: "data"            # the previous layer (this layer's input): the data layer
  top: "conv1_1"            # this layer's output
  name: "conv1_1"           # this layer's name, conv1_1
  param {                   # parameters for the weights
    lr_mult: 1              # learning-rate multiplier for the weights
    decay_mult: 1           # weight-decay multiplier for the weights
  }
  param {                   # parameters for the bias
    lr_mult: 2              # learning-rate multiplier for the bias
    decay_mult: 0           # weight-decay multiplier for the bias
  }
  type: "Convolution"       # layer type: convolution
  convolution_param {       # convolution parameters
    num_output: 64          # number of kernels, i.e. output channels
    pad: 1                  # padding size
    kernel_size: 3          # kernel size
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "relu1_1"
  type: "ReLU"              #該層的類型,為激活函數,無參數
}
layer {
  bottom: "conv1_1"
  top: "conv1_2"
  name: "conv1_2"           #該層的名稱conv1_1
 param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  type: "Convolution"
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv1_2"
  top: "conv1_2"
  name: "relu1_2"
  type: "ReLU"              #該層的類型,為激活函數,無參數
}
layer {
  bottom: "conv1_2"
  top: "pool1"
  name: "pool1"
  type: "Pooling"           #該層的類型,為池化層
  pooling_param {
    pool: MAX               #為最大池化層
    kernel_size: 2          #卷積核的大小
    stride: 2               #步長大小
  }
}

# The other convolutional layers in between are omitted
...

# Fully connected layers
layer {
  bottom: "pool5"
  top: "fc6"
  name: "fc6"
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }
  type: "InnerProduct"       #該層的類型,為全連接層
  inner_product_param {      #輸出個數為4096
    num_output: 4096
  }
}
layer {
  bottom: "fc6"
  top: "fc6"
  name: "relu6"
  type: "ReLU"
}
layer {
  bottom: "fc6"
  top: "fc6"
  name: "drop6"
  type: "Dropout"             #該層的類型,為dropout層
  dropout_param {
    dropout_ratio: 0.5        #dropout比率為0.5
  }
}
layer {
  bottom: "fc6"
  top: "fc7"
  name: "fc7"
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }
  type: "InnerProduct"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "relu7"
  type: "ReLU"
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "drop7"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  bottom: "fc7"
  top: "fc8-101"
  name: "fc8-101"
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }
  type: "InnerProduct"
  inner_product_param {
    num_output: 101
    weight_filler {           # weight initialization parameters
      type: "gaussian"        # initialize from a Gaussian distribution
      std: 0.01               # standard deviation 0.01
    }
    bias_filler {             # bias initialization parameters
      type: "constant"        # initialize to a constant
      value: 0                # value 0
    }
  }
}

layer {
  bottom: "fc8-101"
  bottom: "label"
  name: "loss"
  type: "SoftmaxWithLoss"      #最后對結果求損失,使用的方法為SoftmaxWithLoss
  include: { phase: TRAIN }
}

# At test time, only Softmax is applied, to produce probabilities

layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8-101"
  top: "prob"
  include {
    phase: TEST
  }
}


# During validation, compute top-1, top-5, and top-10 accuracy
layer {
  name: "accuracy_train_top01"
  type: "Accuracy"
  bottom: "fc8-101"
  bottom: "label"
  top: "accuracy_train_top01"
  include {
    phase: TEST
    stage: "test-on-train"
  }
}

layer {
  name: "accuracy_train_top05"
  type: "Accuracy"
  bottom: "fc8-101"
  bottom: "label"
  top: "accuracy_train_top05"
  accuracy_param {
    top_k: 5
  }
  include {
    phase: TEST
    stage: "test-on-train"
  }
}

layer {
  name: "accuracy_train_top10"
  type: "Accuracy"
  bottom: "fc8-101"
  bottom: "label"
  top: "accuracy_train_top10"
  accuracy_param {
    top_k: 10
  }
  include {
    phase: TEST
    stage: "test-on-train"
  }
}

# The same at test time: compute top-1, top-5, and top-10 accuracy
layer {
  name: "accuracy_test_top01"
  type: "Accuracy"
  bottom: "fc8-101"
  bottom: "label"
  top: "accuracy_test_top01"
  include {
    phase: TEST
    stage: "test-on-test"
  }
}

layer {
  name: "accuracy_test_top05"
  type: "Accuracy"
  bottom: "fc8-101"
  bottom: "label"
  top: "accuracy_test_top05"
  accuracy_param {
    top_k: 5
  }
  include {
    phase: TEST
    stage: "test-on-test"
  }
}

layer {
  name: "accuracy_test_top10"
  type: "Accuracy"
  bottom: "fc8-101"
  bottom: "label"
  top: "accuracy_test_top10"
  accuracy_param {
    top_k: 10
  }
  include {
    phase: TEST
    stage: "test-on-test"
  }
}
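
For reference, the SoftmaxWithLoss and Accuracy layers above have direct PyTorch counterparts; below is a sketch (the helper name topk_accuracy is mine, not a library function):

# coding:utf-8
# Rough PyTorch counterparts of the loss and accuracy layers above (a sketch).
import torch
import torch.nn as nn

# SoftmaxWithLoss: CrossEntropyLoss fuses log-softmax and negative
# log-likelihood, so it takes the raw fc8-101 logits, not probabilities.
criterion = nn.CrossEntropyLoss()

def topk_accuracy(logits, labels, k=1):
    # equivalent of an Accuracy layer with accuracy_param { top_k: k }
    _, pred = logits.topk(k, dim=1)        # (N, k) indices of the top-k classes
    correct = pred.eq(labels.view(-1, 1))  # true where the label is among the top k
    return correct.float().sum() / labels.size(0)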

 

3. My own VGG16 model definition - models.py

My PyTorch model definition:

# coding:utf-8
import torchvision.models as models
import torch.nn as nn
import torch
from torch.autograd import Variable
import torch.nn.functional as F

# ages 0 to 100
class AgeModel(nn.Module):
    def __init__(self, classes=101):
        super(AgeModel,self).__init__()
        model = models.vgg16()
        layers = list(model.children())
        # vgg16 has only 3 top-level children: the conv features, the
        # pooling layer, and the classifier
        # print(len(layers)) #3
        self.model = nn.Sequential(*layers[:-1])
        # the last (-1) child is the fully connected classifier
        # print(layers[1]) # the pooling layer, AdaptiveAvgPool2d
        # print(layers[-1]) # the fully connected classifier
        self.fullCon = nn.Sequential(nn.Linear(7*7*512, 4096),
                                   nn.ReLU(True),
                                   nn.Dropout(0.5, inplace=True),
                                   nn.Linear(4096, 4096),
                                   nn.ReLU(True),
                                   nn.Dropout(0.5, inplace=True),
                                   nn.Linear(4096, classes) # change the last FC layer to our own 101-way output
                              )

    def forward(self, x):
        x = self.model(x)
        x = x.view(x.size(0), -1)
        x = self.fullCon(x)
        x = F.softmax(x, dim=1)
        return x

def test():
    net = AgeModel()
    for module in net.children():
        print(module)
    output = net(Variable(torch.randn(2,3,224,224)))
    print('output :', output.size())
    print(type(output))


if __name__ == '__main__':
    test()

 

4. Download the corresponding caffemodel and convert it

(deeplearning2) userdeMacBook-Pro:pytorch-DEX-master user$ curl -o age_model_imdb.caffemodel https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/static/dex_imdb_wiki.caffemodel

The conversion code:

# coding:utf-8
import caffe
import torch

from models import AgeModel


def convert_age():
    age_torch_net = AgeModel()

    # Load the caffe model; caffe.TRAIN means training mode, caffe.TEST test mode.
    # The two modes use different prototxt files.
    caffe_net = caffe.Net('age_train.prototxt', "age_model_imdb.caffemodel", caffe.TRAIN)
    # caffe_net = caffe.Net('age.prototxt', "age_model_imdb.caffemodel", caffe.TEST)

    # params maps each layer name to its weight and bias blobs
    caffe_params = caffe_net.params
    print(caffe_params)

    # # inspect the module indices
    # print(age_torch_net.model)
    # print(age_torch_net.model[0][1])
    #
    # print(age_torch_net.fullCon)
    # print(age_torch_net.fullCon[1])

    mappings = {
        'conv1_1': age_torch_net.model[0][0],
        'conv1_2': age_torch_net.model[0][2],
        'conv2_1': age_torch_net.model[0][5],
        'conv2_2': age_torch_net.model[0][7],
        'conv3_1': age_torch_net.model[0][10],
        'conv3_2': age_torch_net.model[0][12],
        'conv3_3': age_torch_net.model[0][14],
        'conv4_1': age_torch_net.model[0][17],
        'conv4_2': age_torch_net.model[0][19],
        'conv4_3': age_torch_net.model[0][21],
        'conv5_1': age_torch_net.model[0][24],
        'conv5_2': age_torch_net.model[0][26],
        'conv5_3': age_torch_net.model[0][28],
        'fc6': age_torch_net.fullCon[0],
        'fc7': age_torch_net.fullCon[3],
        'fc8-101': age_torch_net.fullCon[6],
    }

    for k, layer in mappings.items():
        # # first iteration: k=conv1_1, layer=Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        # print(k, layer)

        # copy the parameters from the caffe model into the defined pytorch model
        layer.weight.data.copy_(torch.from_numpy(caffe_params[k][0].data)) # the weights
        layer.bias.data.copy_(torch.from_numpy(caffe_params[k][1].data)) # the biases
    # then just save the model parameters
    torch.save(age_torch_net.state_dict(), './age_train.pth')


if __name__ == '__main__':
    convert_age()
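
Instead of hard-coding the indices, the mapping can also be derived automatically: caffe_net.params is an OrderedDict in network order, and AgeModel registers its Conv2d and Linear modules in the same order. A minimal sketch (auto_mappings is my name, not part of the script above):

# Build the caffe-name -> pytorch-layer mapping automatically (a sketch;
# relies on the two networks listing their learnable layers in the same order).
import torch.nn as nn

def auto_mappings(age_torch_net, caffe_params):
    torch_layers = [m for m in age_torch_net.modules()
                    if isinstance(m, (nn.Conv2d, nn.Linear))]
    caffe_names = list(caffe_params.keys())  # ['conv1_1', ..., 'fc8-101']
    assert len(torch_layers) == len(caffe_names)
    return dict(zip(caffe_names, torch_layers))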

 

An error came up during this process:

RuntimeError: unidentifiable C++ exception
Segmentation fault: 11

The cause:

caffe_net = caffe.Net('age_train.prototxt.txt', "dex_imdb_wiki.caffemodel", caffe.TRAIN)

I had downloaded the file as age_train.prototxt.txt, but the code loads age_train.prototxt; either add the .txt suffix to the path in the code, or rename the file.
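
Renaming the file is the simpler fix:

(deeplearning2) userdeMacBook-Pro:face_data user$ mv age_train.prototxt.txt age_train.prototxt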

 

Then another error appeared:

(deeplearning2) userdeMacBook-Pro:face_data user$ python convert.py
Traceback (most recent call last):
  File "convert.py", line 2, in <module> import caffe File "/Users/user/caffe/python/caffe/__init__.py", line 1, in <module> from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver, NCCL, Timer File "/Users/user/caffe/python/caffe/pycaffe.py", line 13, in <module> from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \ ImportError: dlopen(/Users/user/caffe/python/caffe/_caffe.so, 2): Library not loaded: @rpath/libpython2.7.dylib Referenced from: /Users/user/caffe/python/caffe/_caffe.so Reason: image not found (deeplearning2) userdeMacBook-Pro:face_data user$ python convert.py Traceback (most recent call last): File "convert.py", line 2, in <module> import caffe File "/Users/user/caffe/python/caffe/__init__.py", line 1, in <module> from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver, NCCL, Timer File "/Users/user/caffe/python/caffe/pycaffe.py", line 13, in <module> from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \ ImportError: dlopen(/Users/user/caffe/python/caffe/_caffe.so, 2): Library not loaded: @rpath/libpython2.7.dylib Referenced from: /Users/user/caffe/python/caffe/_caffe.so Reason: image not found

I am using an Anaconda3 Python 2.7 environment, which already ships libpython2.7.dylib; the error occurs because the @rpath search paths do not include that environment's lib directory. Add it with the following command:

(deeplearning2) userdeMacBook-Pro:caffe user$ install_name_tool -add_rpath /anaconda3/envs/deeplearning2/lib /Users/user/caffe/python/caffe/_caffe.so
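
You can confirm the rpath entry was added by listing the library's load commands:

(deeplearning2) userdeMacBook-Pro:caffe user$ otool -l /Users/user/caffe/python/caffe/_caffe.so | grep -A 2 LC_RPATH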

 

Yet another error:

I0812 14:26:19.981719 245106112 layer_factory.hpp:77] Creating layer data
F0812 14:26:19.981747 245106112 layer_factory.hpp:81] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: ImageData (known types: AbsVal, Accuracy, ArgMax, BNLL, BatchNorm, BatchReindex, Bias, Clip, Concat, ContrastiveLoss, Convolution, Crop, Data, Deconvolution, Dropout, DummyData, ELU, Eltwise, Embed, EuclideanLoss, Exp, Filter, Flatten, HDF5Data, HDF5Output, HingeLoss, Im2col, InfogainLoss, InnerProduct, Input, LRN, LSTM, LSTMUnit, Log, MVN, MemoryData, MultinomialLogisticLoss, PReLU, Parameter, Pooling, Power, RNN, ReLU, Reduction, Reshape, SPP, Scale, Sigmoid, SigmoidCrossEntropyLoss, Silence, Slice, Softmax, SoftmaxWithLoss, Split, Swish, TanH, Threshold, Tile)
*** Check failure stack trace: ***
Abort trap: 6

At first this seemed to be because the following line in Makefile.config had not been uncommented, so layers written in Python could not be loaded:

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

Uncomment that line and rebuild Caffe: make clean, then make all, make test, make runtest, and make pycaffe. Also, as before, add the ~/caffe/python path in ~/.bash_profile and source it (source ~/.bash_profile).

That still did not solve the problem, though. I later found that the fix above applies only when your error reads:

Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: Python

Uncommenting that line never hurts in any case; I have already merged this change into the complete build configuration given earlier, so if you followed it you should not run into this problem.

 

Looking at the Caffe model's prototxt afterwards, it uses an ImageData layer to load its data, but I do not have the files it refers to, so I deleted the data-loading part of the prototxt and replaced it with:

input: "data"
input_dim: 10
input_dim: 3
input_dim: 224
input_dim: 224

I also deleted the loss and accuracy layers at the end of the prototxt.

Then running it works:

(deeplearning2) userdeMacBook-Pro:face_data user$ python convert.py 
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0812 16:24:50.179870 287970752 _caffe.cpp:139] DEPRECATION WARNING - deprecated use of Python interface
W0812 16:24:50.180747 287970752 _caffe.cpp:140] Use this instead (with the named "weights" parameter):
W0812 16:24:50.180763 287970752 _caffe.cpp:142] Net('age_train.prototxt', 0, weights='dex_imdb_wiki.caffemodel')
I0812 16:24:50.188591 287970752 upgrade_proto.cpp:69] Attempting to upgrade input file specified using deprecated input fields: age_train.prototxt
I0812 16:24:50.188863 287970752 upgrade_proto.cpp:72] Successfully upgraded file specified using deprecated input fields.
W0812 16:24:50.188877 287970752 upgrade_proto.cpp:74] Note that future Caffe releases will only support input layers and not input fields.
I0812 16:24:50.189504 287970752 net.cpp:53] Initializing net from parameters: 
name: "VGG_ILSVRC_16_layers"
state {
  phase: TRAIN
  level: 0
}
layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param {
    shape {
      dim: 10
      dim: 3
      dim: 224
      dim: 224
    }
  }
}
layer {
  name: "conv1_1"
  type: "Convolution"
...
layer {
  name: "fc8-101"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8-101"
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }
  inner_product_param {
    num_output: 101
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
I0812 16:24:50.190675 287970752 layer_factory.hpp:77] Creating layer input
I0812 16:24:50.190943 287970752 net.cpp:86] Creating Layer input
I0812 16:24:50.190958 287970752 net.cpp:382] input -> data
I0812 16:24:50.191754 287970752 net.cpp:124] Setting up input
I0812 16:24:50.191768 287970752 net.cpp:131] Top shape: 10 3 224 224 (1505280)
I0812 16:24:50.192245 287970752 net.cpp:139] Memory required for data: 6021120
I0812 16:24:50.192253 287970752 layer_factory.hpp:77] Creating layer conv1_1
I0812 16:24:50.192262 287970752 net.cpp:86] Creating Layer conv1_1
I0812 16:24:50.192267 287970752 net.cpp:408] conv1_1 <- data
I0812 16:24:50.192273 287970752 net.cpp:382] conv1_1 -> conv1_1
I0812 16:24:50.192760 287970752 net.cpp:124] Setting up conv1_1
I0812 16:24:50.192771 287970752 net.cpp:131] Top shape: 10 64 224 224 (32112640)
I0812 16:24:50.192780 287970752 net.cpp:139] Memory required for data: 134471680
I0812 16:24:50.192903 287970752 layer_factory.hpp:77] Creating layer relu1_1
I0812 16:24:50.192927 287970752 net.cpp:86] Creating Layer relu1_1
I0812 16:24:50.192940 287970752 net.cpp:408] relu1_1 <- conv1_1
I0812 16:24:50.192955 287970752 net.cpp:369] relu1_1 -> conv1_1 (in-place)
I0812 16:24:50.192987 287970752 net.cpp:124] Setting up relu1_1
I0812 16:24:50.193004 287970752 net.cpp:131] Top shape: 10 64 224 224 (32112640)
I0812 16:24:50.193015 287970752 net.cpp:139] Memory required for data: 262922240
I0812 16:24:50.193022 287970752 layer_factory.hpp:77] Creating layer conv1_2
I0812 16:24:50.193029 287970752 net.cpp:86] Creating Layer conv1_2
I0812 16:24:50.193034 287970752 net.cpp:408] conv1_2 <- conv1_1
I0812 16:24:50.193042 287970752 net.cpp:382] conv1_2 -> conv1_2
I0812 16:24:50.193079 287970752 net.cpp:124] Setting up conv1_2
I0812 16:24:50.193085 287970752 net.cpp:131] Top shape: 10 64 224 224 (32112640)
I0812 16:24:50.193092 287970752 net.cpp:139] Memory required for data: 391372800
I0812 16:24:50.193099 287970752 layer_factory.hpp:77] Creating layer relu1_2
I0812 16:24:50.193106 287970752 net.cpp:86] Creating Layer relu1_2
I0812 16:24:50.193111 287970752 net.cpp:408] relu1_2 <- conv1_2
I0812 16:24:50.193116 287970752 net.cpp:369] relu1_2 -> conv1_2 (in-place)
I0812 16:24:50.193122 287970752 net.cpp:124] Setting up relu1_2
I0812 16:24:50.193126 287970752 net.cpp:131] Top shape: 10 64 224 224 (32112640)
I0812 16:24:50.193132 287970752 net.cpp:139] Memory required for data: 519823360
I0812 16:24:50.193137 287970752 layer_factory.hpp:77] Creating layer pool1
I0812 16:24:50.193142 287970752 net.cpp:86] Creating Layer pool1
I0812 16:24:50.193147 287970752 net.cpp:408] pool1 <- conv1_2
I0812 16:24:50.193152 287970752 net.cpp:382] pool1 -> pool1
I0812 16:24:50.193373 287970752 net.cpp:124] Setting up pool1
I0812 16:24:50.193385 287970752 net.cpp:131] Top shape: 10 64 112 112 (8028160)
I0812 16:24:50.193394 287970752 net.cpp:139] Memory required for data: 551936000
I0812 16:24:50.193400 287970752 layer_factory.hpp:77] Creating layer conv2_1
I0812 16:24:50.193408 287970752 net.cpp:86] Creating Layer conv2_1
I0812 16:24:50.193414 287970752 net.cpp:408] conv2_1 <- pool1
I0812 16:24:50.193421 287970752 net.cpp:382] conv2_1 -> conv2_1
I0812 16:24:50.193460 287970752 net.cpp:124] Setting up conv2_1
I0812 16:24:50.193466 287970752 net.cpp:131] Top shape: 10 128 112 112 (16056320)
I0812 16:24:50.193473 287970752 net.cpp:139] Memory required for data: 616161280
I0812 16:24:50.193481 287970752 layer_factory.hpp:77] Creating layer relu2_1
I0812 16:24:50.193487 287970752 net.cpp:86] Creating Layer relu2_1
I0812 16:24:50.193492 287970752 net.cpp:408] relu2_1 <- conv2_1
I0812 16:24:50.193497 287970752 net.cpp:369] relu2_1 -> conv2_1 (in-place)
I0812 16:24:50.193504 287970752 net.cpp:124] Setting up relu2_1
I0812 16:24:50.193508 287970752 net.cpp:131] Top shape: 10 128 112 112 (16056320)
I0812 16:24:50.193514 287970752 net.cpp:139] Memory required for data: 680386560
I0812 16:24:50.193519 287970752 layer_factory.hpp:77] Creating layer conv2_2
I0812 16:24:50.193526 287970752 net.cpp:86] Creating Layer conv2_2
I0812 16:24:50.193531 287970752 net.cpp:408] conv2_2 <- conv2_1
I0812 16:24:50.193536 287970752 net.cpp:382] conv2_2 -> conv2_2
I0812 16:24:50.193588 287970752 net.cpp:124] Setting up conv2_2
I0812 16:24:50.193596 287970752 net.cpp:131] Top shape: 10 128 112 112 (16056320)
I0812 16:24:50.193604 287970752 net.cpp:139] Memory required for data: 744611840
I0812 16:24:50.193610 287970752 layer_factory.hpp:77] Creating layer relu2_2
I0812 16:24:50.193616 287970752 net.cpp:86] Creating Layer relu2_2
I0812 16:24:50.193620 287970752 net.cpp:408] relu2_2 <- conv2_2
I0812 16:24:50.193625 287970752 net.cpp:369] relu2_2 -> conv2_2 (in-place)
I0812 16:24:50.193631 287970752 net.cpp:124] Setting up relu2_2
I0812 16:24:50.193635 287970752 net.cpp:131] Top shape: 10 128 112 112 (16056320)
I0812 16:24:50.193641 287970752 net.cpp:139] Memory required for data: 808837120
I0812 16:24:50.193645 287970752 layer_factory.hpp:77] Creating layer pool2
I0812 16:24:50.193651 287970752 net.cpp:86] Creating Layer pool2
I0812 16:24:50.193655 287970752 net.cpp:408] pool2 <- conv2_2
I0812 16:24:50.193661 287970752 net.cpp:382] pool2 -> pool2
I0812 16:24:50.193668 287970752 net.cpp:124] Setting up pool2
I0812 16:24:50.193672 287970752 net.cpp:131] Top shape: 10 128 56 56 (4014080)
I0812 16:24:50.193678 287970752 net.cpp:139] Memory required for data: 824893440
I0812 16:24:50.193682 287970752 layer_factory.hpp:77] Creating layer conv3_1
I0812 16:24:50.193688 287970752 net.cpp:86] Creating Layer conv3_1
I0812 16:24:50.193694 287970752 net.cpp:408] conv3_1 <- pool2
I0812 16:24:50.193699 287970752 net.cpp:382] conv3_1 -> conv3_1
I0812 16:24:50.193792 287970752 net.cpp:124] Setting up conv3_1
I0812 16:24:50.193799 287970752 net.cpp:131] Top shape: 10 256 56 56 (8028160)
I0812 16:24:50.193805 287970752 net.cpp:139] Memory required for data: 857006080
I0812 16:24:50.193814 287970752 layer_factory.hpp:77] Creating layer relu3_1
I0812 16:24:50.193819 287970752 net.cpp:86] Creating Layer relu3_1
I0812 16:24:50.193825 287970752 net.cpp:408] relu3_1 <- conv3_1
I0812 16:24:50.193832 287970752 net.cpp:369] relu3_1 -> conv3_1 (in-place)
I0812 16:24:50.193838 287970752 net.cpp:124] Setting up relu3_1
I0812 16:24:50.193843 287970752 net.cpp:131] Top shape: 10 256 56 56 (8028160)
I0812 16:24:50.193850 287970752 net.cpp:139] Memory required for data: 889118720
I0812 16:24:50.193855 287970752 layer_factory.hpp:77] Creating layer conv3_2
I0812 16:24:50.193861 287970752 net.cpp:86] Creating Layer conv3_2
I0812 16:24:50.193866 287970752 net.cpp:408] conv3_2 <- conv3_1
I0812 16:24:50.193871 287970752 net.cpp:382] conv3_2 -> conv3_2
I0812 16:24:50.194085 287970752 net.cpp:124] Setting up conv3_2
I0812 16:24:50.194097 287970752 net.cpp:131] Top shape: 10 256 56 56 (8028160)
I0812 16:24:50.194105 287970752 net.cpp:139] Memory required for data: 921231360
I0812 16:24:50.194113 287970752 layer_factory.hpp:77] Creating layer relu3_2
I0812 16:24:50.194119 287970752 net.cpp:86] Creating Layer relu3_2
I0812 16:24:50.194124 287970752 net.cpp:408] relu3_2 <- conv3_2
I0812 16:24:50.194130 287970752 net.cpp:369] relu3_2 -> conv3_2 (in-place)
I0812 16:24:50.194137 287970752 net.cpp:124] Setting up relu3_2
I0812 16:24:50.194141 287970752 net.cpp:131] Top shape: 10 256 56 56 (8028160)
I0812 16:24:50.194147 287970752 net.cpp:139] Memory required for data: 953344000
I0812 16:24:50.194151 287970752 layer_factory.hpp:77] Creating layer conv3_3
I0812 16:24:50.194159 287970752 net.cpp:86] Creating Layer conv3_3
I0812 16:24:50.194162 287970752 net.cpp:408] conv3_3 <- conv3_2
I0812 16:24:50.194169 287970752 net.cpp:382] conv3_3 -> conv3_3
I0812 16:24:50.194345 287970752 net.cpp:124] Setting up conv3_3
I0812 16:24:50.194353 287970752 net.cpp:131] Top shape: 10 256 56 56 (8028160)
I0812 16:24:50.194360 287970752 net.cpp:139] Memory required for data: 985456640
I0812 16:24:50.194366 287970752 layer_factory.hpp:77] Creating layer relu3_3
I0812 16:24:50.194373 287970752 net.cpp:86] Creating Layer relu3_3
I0812 16:24:50.194377 287970752 net.cpp:408] relu3_3 <- conv3_3
I0812 16:24:50.194383 287970752 net.cpp:369] relu3_3 -> conv3_3 (in-place)
I0812 16:24:50.194388 287970752 net.cpp:124] Setting up relu3_3
I0812 16:24:50.194393 287970752 net.cpp:131] Top shape: 10 256 56 56 (8028160)
I0812 16:24:50.194398 287970752 net.cpp:139] Memory required for data: 1017569280
I0812 16:24:50.194403 287970752 layer_factory.hpp:77] Creating layer pool3
I0812 16:24:50.194411 287970752 net.cpp:86] Creating Layer pool3
I0812 16:24:50.194416 287970752 net.cpp:408] pool3 <- conv3_3
I0812 16:24:50.194422 287970752 net.cpp:382] pool3 -> pool3
I0812 16:24:50.194428 287970752 net.cpp:124] Setting up pool3
I0812 16:24:50.194433 287970752 net.cpp:131] Top shape: 10 256 28 28 (2007040)
I0812 16:24:50.194438 287970752 net.cpp:139] Memory required for data: 1025597440
I0812 16:24:50.194443 287970752 layer_factory.hpp:77] Creating layer conv4_1
I0812 16:24:50.194450 287970752 net.cpp:86] Creating Layer conv4_1
I0812 16:24:50.194454 287970752 net.cpp:408] conv4_1 <- pool3
I0812 16:24:50.194460 287970752 net.cpp:382] conv4_1 -> conv4_1
I0812 16:24:50.194834 287970752 net.cpp:124] Setting up conv4_1
I0812 16:24:50.194847 287970752 net.cpp:131] Top shape: 10 512 28 28 (4014080)
I0812 16:24:50.194854 287970752 net.cpp:139] Memory required for data: 1041653760
I0812 16:24:50.194861 287970752 layer_factory.hpp:77] Creating layer relu4_1
I0812 16:24:50.194867 287970752 net.cpp:86] Creating Layer relu4_1
I0812 16:24:50.194874 287970752 net.cpp:408] relu4_1 <- conv4_1
I0812 16:24:50.194880 287970752 net.cpp:369] relu4_1 -> conv4_1 (in-place)
I0812 16:24:50.194885 287970752 net.cpp:124] Setting up relu4_1
I0812 16:24:50.194890 287970752 net.cpp:131] Top shape: 10 512 28 28 (4014080)
I0812 16:24:50.194895 287970752 net.cpp:139] Memory required for data: 1057710080
I0812 16:24:50.194900 287970752 layer_factory.hpp:77] Creating layer conv4_2
I0812 16:24:50.194905 287970752 net.cpp:86] Creating Layer conv4_2
I0812 16:24:50.194911 287970752 net.cpp:408] conv4_2 <- conv4_1
I0812 16:24:50.194917 287970752 net.cpp:382] conv4_2 -> conv4_2
I0812 16:24:50.195816 287970752 net.cpp:124] Setting up conv4_2
I0812 16:24:50.195835 287970752 net.cpp:131] Top shape: 10 512 28 28 (4014080)
I0812 16:24:50.195843 287970752 net.cpp:139] Memory required for data: 1073766400
I0812 16:24:50.195854 287970752 layer_factory.hpp:77] Creating layer relu4_2
I0812 16:24:50.195863 287970752 net.cpp:86] Creating Layer relu4_2
I0812 16:24:50.195868 287970752 net.cpp:408] relu4_2 <- conv4_2
I0812 16:24:50.195874 287970752 net.cpp:369] relu4_2 -> conv4_2 (in-place)
I0812 16:24:50.195881 287970752 net.cpp:124] Setting up relu4_2
I0812 16:24:50.195886 287970752 net.cpp:131] Top shape: 10 512 28 28 (4014080)
I0812 16:24:50.195892 287970752 net.cpp:139] Memory required for data: 1089822720
I0812 16:24:50.195896 287970752 layer_factory.hpp:77] Creating layer conv4_3
I0812 16:24:50.195904 287970752 net.cpp:86] Creating Layer conv4_3
I0812 16:24:50.195907 287970752 net.cpp:408] conv4_3 <- conv4_2
I0812 16:24:50.195914 287970752 net.cpp:382] conv4_3 -> conv4_3
I0812 16:24:50.197036 287970752 net.cpp:124] Setting up conv4_3
I0812 16:24:50.197062 287970752 net.cpp:131] Top shape: 10 512 28 28 (4014080)
I0812 16:24:50.197072 287970752 net.cpp:139] Memory required for data: 1105879040
I0812 16:24:50.197079 287970752 layer_factory.hpp:77] Creating layer relu4_3
I0812 16:24:50.197088 287970752 net.cpp:86] Creating Layer relu4_3
I0812 16:24:50.197094 287970752 net.cpp:408] relu4_3 <- conv4_3
I0812 16:24:50.197100 287970752 net.cpp:369] relu4_3 -> conv4_3 (in-place)
I0812 16:24:50.197108 287970752 net.cpp:124] Setting up relu4_3
I0812 16:24:50.197111 287970752 net.cpp:131] Top shape: 10 512 28 28 (4014080)
I0812 16:24:50.197116 287970752 net.cpp:139] Memory required for data: 1121935360
I0812 16:24:50.197120 287970752 layer_factory.hpp:77] Creating layer pool4
I0812 16:24:50.197126 287970752 net.cpp:86] Creating Layer pool4
I0812 16:24:50.197130 287970752 net.cpp:408] pool4 <- conv4_3
I0812 16:24:50.197136 287970752 net.cpp:382] pool4 -> pool4
I0812 16:24:50.197144 287970752 net.cpp:124] Setting up pool4
I0812 16:24:50.197149 287970752 net.cpp:131] Top shape: 10 512 14 14 (1003520)
I0812 16:24:50.197154 287970752 net.cpp:139] Memory required for data: 1125949440
I0812 16:24:50.197158 287970752 layer_factory.hpp:77] Creating layer conv5_1
I0812 16:24:50.197166 287970752 net.cpp:86] Creating Layer conv5_1
I0812 16:24:50.197171 287970752 net.cpp:408] conv5_1 <- pool4
I0812 16:24:50.197176 287970752 net.cpp:382] conv5_1 -> conv5_1
I0812 16:24:50.198091 287970752 net.cpp:124] Setting up conv5_1
I0812 16:24:50.198110 287970752 net.cpp:131] Top shape: 10 512 14 14 (1003520)
I0812 16:24:50.198118 287970752 net.cpp:139] Memory required for data: 1129963520
I0812 16:24:50.198124 287970752 layer_factory.hpp:77] Creating layer relu5_1
I0812 16:24:50.198132 287970752 net.cpp:86] Creating Layer relu5_1
I0812 16:24:50.198137 287970752 net.cpp:408] relu5_1 <- conv5_1
I0812 16:24:50.198143 287970752 net.cpp:369] relu5_1 -> conv5_1 (in-place)
I0812 16:24:50.198149 287970752 net.cpp:124] Setting up relu5_1
I0812 16:24:50.198153 287970752 net.cpp:131] Top shape: 10 512 14 14 (1003520)
I0812 16:24:50.198159 287970752 net.cpp:139] Memory required for data: 1133977600
I0812 16:24:50.198163 287970752 layer_factory.hpp:77] Creating layer conv5_2
I0812 16:24:50.198170 287970752 net.cpp:86] Creating Layer conv5_2
I0812 16:24:50.198174 287970752 net.cpp:408] conv5_2 <- conv5_1
I0812 16:24:50.198179 287970752 net.cpp:382] conv5_2 -> conv5_2
I0812 16:24:50.199494 287970752 net.cpp:124] Setting up conv5_2
I0812 16:24:50.199513 287970752 net.cpp:131] Top shape: 10 512 14 14 (1003520)
I0812 16:24:50.199522 287970752 net.cpp:139] Memory required for data: 1137991680
I0812 16:24:50.199528 287970752 layer_factory.hpp:77] Creating layer relu5_2
I0812 16:24:50.199539 287970752 net.cpp:86] Creating Layer relu5_2
I0812 16:24:50.199545 287970752 net.cpp:408] relu5_2 <- conv5_2
I0812 16:24:50.199551 287970752 net.cpp:369] relu5_2 -> conv5_2 (in-place)
I0812 16:24:50.199558 287970752 net.cpp:124] Setting up relu5_2
I0812 16:24:50.199563 287970752 net.cpp:131] Top shape: 10 512 14 14 (1003520)
I0812 16:24:50.199568 287970752 net.cpp:139] Memory required for data: 1142005760
I0812 16:24:50.199571 287970752 layer_factory.hpp:77] Creating layer conv5_3
I0812 16:24:50.199579 287970752 net.cpp:86] Creating Layer conv5_3
I0812 16:24:50.199582 287970752 net.cpp:408] conv5_3 <- conv5_2
I0812 16:24:50.199589 287970752 net.cpp:382] conv5_3 -> conv5_3
I0812 16:24:50.200711 287970752 net.cpp:124] Setting up conv5_3
I0812 16:24:50.200731 287970752 net.cpp:131] Top shape: 10 512 14 14 (1003520)
I0812 16:24:50.200739 287970752 net.cpp:139] Memory required for data: 1146019840
I0812 16:24:50.200747 287970752 layer_factory.hpp:77] Creating layer relu5_3
I0812 16:24:50.200753 287970752 net.cpp:86] Creating Layer relu5_3
I0812 16:24:50.200758 287970752 net.cpp:408] relu5_3 <- conv5_3
I0812 16:24:50.200764 287970752 net.cpp:369] relu5_3 -> conv5_3 (in-place)
I0812 16:24:50.200790 287970752 net.cpp:124] Setting up relu5_3
I0812 16:24:50.200794 287970752 net.cpp:131] Top shape: 10 512 14 14 (1003520)
I0812 16:24:50.200800 287970752 net.cpp:139] Memory required for data: 1150033920
I0812 16:24:50.200804 287970752 layer_factory.hpp:77] Creating layer pool5
I0812 16:24:50.200810 287970752 net.cpp:86] Creating Layer pool5
I0812 16:24:50.200814 287970752 net.cpp:408] pool5 <- conv5_3
I0812 16:24:50.200820 287970752 net.cpp:382] pool5 -> pool5
I0812 16:24:50.200830 287970752 net.cpp:124] Setting up pool5
I0812 16:24:50.200836 287970752 net.cpp:131] Top shape: 10 512 7 7 (250880)
I0812 16:24:50.200855 287970752 net.cpp:139] Memory required for data: 1151037440
I0812 16:24:50.200860 287970752 layer_factory.hpp:77] Creating layer fc6
I0812 16:24:50.200866 287970752 net.cpp:86] Creating Layer fc6
I0812 16:24:50.200870 287970752 net.cpp:408] fc6 <- pool5
I0812 16:24:50.200876 287970752 net.cpp:382] fc6 -> fc6
I0812 16:24:50.257817 287970752 net.cpp:124] Setting up fc6
I0812 16:24:50.257875 287970752 net.cpp:131] Top shape: 10 4096 (40960)
I0812 16:24:50.257887 287970752 net.cpp:139] Memory required for data: 1151201280
I0812 16:24:50.257897 287970752 layer_factory.hpp:77] Creating layer relu6
I0812 16:24:50.257906 287970752 net.cpp:86] Creating Layer relu6
I0812 16:24:50.257912 287970752 net.cpp:408] relu6 <- fc6
I0812 16:24:50.257920 287970752 net.cpp:369] relu6 -> fc6 (in-place)
I0812 16:24:50.257927 287970752 net.cpp:124] Setting up relu6
I0812 16:24:50.257931 287970752 net.cpp:131] Top shape: 10 4096 (40960)
I0812 16:24:50.257936 287970752 net.cpp:139] Memory required for data: 1151365120
I0812 16:24:50.257941 287970752 layer_factory.hpp:77] Creating layer drop6
I0812 16:24:50.257947 287970752 net.cpp:86] Creating Layer drop6
I0812 16:24:50.257952 287970752 net.cpp:408] drop6 <- fc6
I0812 16:24:50.257957 287970752 net.cpp:369] drop6 -> fc6 (in-place)
I0812 16:24:50.257974 287970752 net.cpp:124] Setting up drop6
I0812 16:24:50.257978 287970752 net.cpp:131] Top shape: 10 4096 (40960)
I0812 16:24:50.257983 287970752 net.cpp:139] Memory required for data: 1151528960
I0812 16:24:50.257987 287970752 layer_factory.hpp:77] Creating layer fc7
I0812 16:24:50.257994 287970752 net.cpp:86] Creating Layer fc7
I0812 16:24:50.257998 287970752 net.cpp:408] fc7 <- fc6
I0812 16:24:50.258003 287970752 net.cpp:382] fc7 -> fc7
I0812 16:24:50.303862 287970752 net.cpp:124] Setting up fc7
I0812 16:24:50.303889 287970752 net.cpp:131] Top shape: 10 4096 (40960)
I0812 16:24:50.303900 287970752 net.cpp:139] Memory required for data: 1151692800
I0812 16:24:50.303908 287970752 layer_factory.hpp:77] Creating layer relu7
I0812 16:24:50.303918 287970752 net.cpp:86] Creating Layer relu7
I0812 16:24:50.303923 287970752 net.cpp:408] relu7 <- fc7
I0812 16:24:50.303942 287970752 net.cpp:369] relu7 -> fc7 (in-place)
I0812 16:24:50.303951 287970752 net.cpp:124] Setting up relu7
I0812 16:24:50.303956 287970752 net.cpp:131] Top shape: 10 4096 (40960)
I0812 16:24:50.303961 287970752 net.cpp:139] Memory required for data: 1151856640
I0812 16:24:50.303966 287970752 layer_factory.hpp:77] Creating layer drop7
I0812 16:24:50.303972 287970752 net.cpp:86] Creating Layer drop7
I0812 16:24:50.303977 287970752 net.cpp:408] drop7 <- fc7
I0812 16:24:50.303982 287970752 net.cpp:369] drop7 -> fc7 (in-place)
I0812 16:24:50.303987 287970752 net.cpp:124] Setting up drop7
I0812 16:24:50.303992 287970752 net.cpp:131] Top shape: 10 4096 (40960)
I0812 16:24:50.303997 287970752 net.cpp:139] Memory required for data: 1152020480
I0812 16:24:50.304002 287970752 layer_factory.hpp:77] Creating layer fc8-101
I0812 16:24:50.304008 287970752 net.cpp:86] Creating Layer fc8-101
I0812 16:24:50.304013 287970752 net.cpp:408] fc8-101 <- fc7
I0812 16:24:50.304019 287970752 net.cpp:382] fc8-101 -> fc8-101
I0812 16:24:50.312885 287970752 net.cpp:124] Setting up fc8-101
I0812 16:24:50.312916 287970752 net.cpp:131] Top shape: 10 101 (1010)
I0812 16:24:50.312927 287970752 net.cpp:139] Memory required for data: 1152024520
I0812 16:24:50.312934 287970752 net.cpp:202] fc8-101 does not need backward computation.
I0812 16:24:50.312940 287970752 net.cpp:202] drop7 does not need backward computation.
I0812 16:24:50.312944 287970752 net.cpp:202] relu7 does not need backward computation.
I0812 16:24:50.312949 287970752 net.cpp:202] fc7 does not need backward computation.
I0812 16:24:50.312953 287970752 net.cpp:202] drop6 does not need backward computation.
I0812 16:24:50.312958 287970752 net.cpp:202] relu6 does not need backward computation.
I0812 16:24:50.312963 287970752 net.cpp:202] fc6 does not need backward computation.
I0812 16:24:50.312968 287970752 net.cpp:202] pool5 does not need backward computation.
I0812 16:24:50.312973 287970752 net.cpp:202] relu5_3 does not need backward computation.
I0812 16:24:50.312978 287970752 net.cpp:202] conv5_3 does not need backward computation.
I0812 16:24:50.312981 287970752 net.cpp:202] relu5_2 does not need backward computation.
I0812 16:24:50.312986 287970752 net.cpp:202] conv5_2 does not need backward computation.
I0812 16:24:50.312991 287970752 net.cpp:202] relu5_1 does not need backward computation.
I0812 16:24:50.312995 287970752 net.cpp:202] conv5_1 does not need backward computation.
I0812 16:24:50.313000 287970752 net.cpp:202] pool4 does not need backward computation.
I0812 16:24:50.313005 287970752 net.cpp:202] relu4_3 does not need backward computation.
I0812 16:24:50.313009 287970752 net.cpp:202] conv4_3 does not need backward computation.
I0812 16:24:50.313014 287970752 net.cpp:202] relu4_2 does not need backward computation.
I0812 16:24:50.313019 287970752 net.cpp:202] conv4_2 does not need backward computation.
I0812 16:24:50.313024 287970752 net.cpp:202] relu4_1 does not need backward computation.
I0812 16:24:50.313027 287970752 net.cpp:202] conv4_1 does not need backward computation.
I0812 16:24:50.313032 287970752 net.cpp:202] pool3 does not need backward computation.
I0812 16:24:50.313037 287970752 net.cpp:202] relu3_3 does not need backward computation.
I0812 16:24:50.313041 287970752 net.cpp:202] conv3_3 does not need backward computation.
I0812 16:24:50.313046 287970752 net.cpp:202] relu3_2 does not need backward computation.
I0812 16:24:50.313050 287970752 net.cpp:202] conv3_2 does not need backward computation.
I0812 16:24:50.313055 287970752 net.cpp:202] relu3_1 does not need backward computation.
I0812 16:24:50.313060 287970752 net.cpp:202] conv3_1 does not need backward computation.
I0812 16:24:50.313064 287970752 net.cpp:202] pool2 does not need backward computation.
I0812 16:24:50.313069 287970752 net.cpp:202] relu2_2 does not need backward computation.
I0812 16:24:50.313073 287970752 net.cpp:202] conv2_2 does not need backward computation.
I0812 16:24:50.313078 287970752 net.cpp:202] relu2_1 does not need backward computation.
I0812 16:24:50.313082 287970752 net.cpp:202] conv2_1 does not need backward computation.
I0812 16:24:50.313087 287970752 net.cpp:202] pool1 does not need backward computation.
I0812 16:24:50.313092 287970752 net.cpp:202] relu1_2 does not need backward computation.
I0812 16:24:50.313097 287970752 net.cpp:202] conv1_2 does not need backward computation.
I0812 16:24:50.313102 287970752 net.cpp:202] relu1_1 does not need backward computation.
I0812 16:24:50.313107 287970752 net.cpp:202] conv1_1 does not need backward computation.
I0812 16:24:50.313110 287970752 net.cpp:202] input does not need backward computation.
I0812 16:24:50.313114 287970752 net.cpp:244] This network produces output fc8-101
I0812 16:24:50.313129 287970752 net.cpp:257] Network initialization done.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 538700275
I0812 16:24:51.490310 287970752 net.cpp:746] Ignoring source layer data
I0812 16:24:51.597520 287970752 net.cpp:746] Ignoring source layer loss
OrderedDict([('conv1_1', <caffe._caffe.BlobVec object at 0x13cafcf30>), ('conv1_2', <caffe._caffe.BlobVec object at 0x13cafcfa0>), ('conv2_1', <caffe._caffe.BlobVec object at 0x158bf0050>), ('conv2_2', <caffe._caffe.BlobVec object at 0x158bf00c0>), ('conv3_1', <caffe._caffe.BlobVec object at 0x158bf0130>), ('conv3_2', <caffe._caffe.BlobVec object at 0x158bf01a0>), ('conv3_3', <caffe._caffe.BlobVec object at 0x158bf0210>), ('conv4_1', <caffe._caffe.BlobVec object at 0x158bf0280>), ('conv4_2', <caffe._caffe.BlobVec object at 0x158bf02f0>), ('conv4_3', <caffe._caffe.BlobVec object at 0x158bf0360>), ('conv5_1', <caffe._caffe.BlobVec object at 0x158bf03d0>), ('conv5_2', <caffe._caffe.BlobVec object at 0x158bf0440>), ('conv5_3', <caffe._caffe.BlobVec object at 0x158bf04b0>), ('fc6', <caffe._caffe.BlobVec object at 0x158bf0520>), ('fc7', <caffe._caffe.BlobVec object at 0x158bf0590>), ('fc8-101', <caffe._caffe.BlobVec object at 0x158bf0600>)])

This successfully generates the age_train.pth model.
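
Before using it, it is worth sanity-checking the converted weights by pushing the same input through both networks. A minimal sketch, assuming the modified age_train.prototxt (Input layer, loss and accuracy layers removed), whose only output is fc8-101; since AgeModel applies softmax in forward(), the Caffe logits are softmaxed here before comparing:

# coding:utf-8
# Sanity check: compare Caffe and PyTorch outputs on the same random input (a sketch).
import numpy as np
import caffe
import torch
from torch.autograd import Variable

from models import AgeModel

caffe_net = caffe.Net('age_train.prototxt', 'age_model_imdb.caffemodel', caffe.TEST)
torch_net = AgeModel()
torch_net.load_state_dict(torch.load('./age_train.pth'))
torch_net.eval()  # disable dropout, matching caffe.TEST

x = np.random.randn(10, 3, 224, 224).astype(np.float32)
caffe_net.blobs['data'].data[...] = x
logits = caffe_net.forward()['fc8-101']
e = np.exp(logits - logits.max(axis=1, keepdims=True))
caffe_prob = e / e.sum(axis=1, keepdims=True)

torch_prob = torch_net(Variable(torch.from_numpy(x))).data.numpy()
print('max abs diff:', np.abs(caffe_prob - torch_prob).max())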

 

5. Testing

Use the saved PyTorch model to predict the age in an image:

from PIL import Image
from torchvision import transforms as T
import torch
from models import AgeModel

def expected_age(outputs):
    age_output = []

    for batch, output in enumerate(outputs):
        age = 0
        for i, j in enumerate(output):
            age += i*j
        age_output.append(age.item())

    return torch.FloatTensor(age_output)

def test(img_path, model_path):
    transform = T.Compose([
        T.Resize(256),
        T.CenterCrop(224),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    img = Image.open(img_path)
    img = transform(img).unsqueeze(0)

    model = AgeModel()
    model.load_state_dict(torch.load(model_path))
    age = model(img)
    print(age)
    print(age.size())
    age = expected_age(age)
    print(age)
    print(age.size())

if __name__ == '__main__':
    img_path = './Tom_Hanks_54745.png'
    model_path = './age_train.pth'
    test(img_path, model_path)
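
The expected_age helper implements DEX's expected-value readout, E[age] = sum_i i * p_i over the 101 softmax outputs. An equivalent vectorized sketch (expected_age_vec is my name, not part of the script above):

def expected_age_vec(outputs):
    # multiply each class probability by its age index (0..100), then sum over classes
    ages = torch.arange(0, outputs.size(1)).float()
    return (outputs.data * ages).sum(1)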

Running check_model.py returns:

(deeplearning2) userdeMacBook-Pro:face_data user$ python check_model.py 
tensor([[0.0032, 0.0039, 0.0027, 0.0020, 0.0026, 0.0031, 0.0029, 0.0035, 0.0051,
         0.0046, 0.0056, 0.0069, 0.0073, 0.0065, 0.0071, 0.0071, 0.0090, 0.0143,
         0.0225, 0.0269, 0.0288, 0.0373, 0.0335, 0.0343, 0.0282, 0.0303, 0.0256,
         0.0263, 0.0227, 0.0210, 0.0210, 0.0176, 0.0140, 0.0144, 0.0135, 0.0121,
         0.0107, 0.0112, 0.0110, 0.0088, 0.0109, 0.0092, 0.0086, 0.0140, 0.0096,
         0.0106, 0.0123, 0.0128, 0.0139, 0.0132, 0.0110, 0.0111, 0.0144, 0.0127,
         0.0110, 0.0153, 0.0103, 0.0113, 0.0117, 0.0136, 0.0147, 0.0124, 0.0096,
         0.0091, 0.0091, 0.0097, 0.0070, 0.0072, 0.0068, 0.0077, 0.0067, 0.0069,
         0.0062, 0.0065, 0.0056, 0.0047, 0.0045, 0.0045, 0.0048, 0.0049, 0.0040,
         0.0031, 0.0031, 0.0026, 0.0033, 0.0024, 0.0023, 0.0020, 0.0020, 0.0021,
         0.0020, 0.0016, 0.0021, 0.0016, 0.0016, 0.0014, 0.0014, 0.0013, 0.0015,
         0.0013, 0.0020]], grad_fn=<SoftmaxBackward>)
(1, 101)
tensor([39.2897])
(1,)

The test image is Tom_Hanks_54745.png (not reproduced here).

One error came up along the way:

RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0

Inspecting the input image showed that it was in RGBA format:

<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=524x473 at 0x10138FFD0>

So it must be converted to RGB:

img = img.convert("RGB")
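
In context, the conversion goes right after opening the image and before the transform:

img = Image.open(img_path)
img = img.convert("RGB")  # drop the alpha channel so the 3-channel transform applies
img = transform(img).unsqueeze(0)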

 

