PyTorch Model to Caffe (3): Converting PyTorch to caffemodel



  • The conversion is based on xxradon's code on GitHub, with some tweaks; thanks to the author. GitHub: https://github.com/xxradon/PytorchToCaffe
  • This post converts the *.pth model produced by training AlexNet on MNIST handwritten-digit classification

1. Generating the model in PyTorch

  • Use the alexnet network from torchvision.models.alexnet
  • Change the number of input channels to 1 and the number of output classes to 10
  • The dropout positions under classifier need to be adjusted (a sketch of these modifications follows the training code below)

  • Train the handwritten-digit classifier with the code below; it ultimately produces the model mnist_alexnet_model.pth (the entire network plus its weights is saved)
import time
import torch
from torch import nn, optim
import torchvision
import pytorch_deep as pyd  # local helper module providing train_ch5
from torchvision.models.alexnet import alexnet

net = alexnet(False)  # no pretrained weights
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def load_data_fashion_mnist(batch_size=256, resize=None, num_workers=0):
    trans = []
    if resize:
        trans.append(torchvision.transforms.Resize(size=resize))
    trans.append(torchvision.transforms.ToTensor())
    transform = torchvision.transforms.Compose(trans)
    mnist_train = torchvision.datasets.FashionMNIST(root='./MNIST', train=True, download=True,
                                                    transform=transform)
    mnist_test = torchvision.datasets.FashionMNIST(root='./MNIST', train=False, download=True,
                                                   transform=transform)
    train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=num_workers)
    test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=num_workers)
    return train_iter, test_iter

batch_size = 128
# If you hit an "out of memory" error, reduce batch_size or the resize value
train_iter, test_iter = load_data_fashion_mnist(batch_size, resize=224)
lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
pyd.train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)
torch.save(net, 'mnist_alexnet_model.pth')  # save the whole network plus its weights
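  • For reference, here is a minimal sketch (not from the original post) of the modifications described in the bullets above, applied to the stock torchvision AlexNet; the exact classifier layout after moving dropout is an assumption and should be matched to whatever you actually trained with
from torch import nn
from torchvision.models.alexnet import alexnet

net = alexnet(False)
# 1-channel input instead of 3-channel RGB (other conv hyper-parameters kept as in torchvision)
net.features[0] = nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2)
# rebuild the classifier: 10 output classes, with dropout moved after each ReLU
# (one possible re-arrangement; the post does not show the exact layout it used)
net.classifier = nn.Sequential(
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True), nn.Dropout(p=0.5),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(p=0.5),
    nn.Linear(4096, 10),
)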

2. Converting the .pth into caffemodel and prototxt

  • git clone the GitHub source and go to the AlexNet example under example/
  • Two files are mainly involved: one loads the network model, the other performs the prototxt/caffemodel conversion
  • First, look at alexnet_pytorch_to_caffe.py
import sys
sys.path.insert(0, '.')
import torch
from torch.autograd import Variable
from torchvision.models.alexnet import alexnet
import pytorch_to_caffe_alexNet
import cv2

if __name__ == '__main__':
    name = 'alexnet'
    pth_path = '***/mnist_alexnet_model.pth'
    net = torch.load(pth_path)  # the whole network was saved, so torch.load restores it directly
    net.eval()
    input = Variable(torch.FloatTensor(torch.ones([1, 1, 224, 224])))
    input = input.cuda()
    # trace one forward pass so every layer gets recorded, then dump the Caffe files
    pytorch_to_caffe_alexNet.trans_net(net, input, name)
    pytorch_to_caffe_alexNet.save_prototxt('{}.prototxt'.format(name))
    pytorch_to_caffe_alexNet.save_caffemodel('{}.caffemodel'.format(name))
  • Running it directly produces an error; in my case the error came from the dropout-layer conversion, so I changed the bottom/top arguments it passes
  • With dropout fixed, the script runs and generates the caffemodel and prototxt, but the network structure in the prototxt is wrong: layers are still wired to the wrong bottoms/tops (a small script for inspecting the wiring is sketched below)
  • After editing the layers by hand against the original deploy.prototxt, the output finally came out right
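  • To spot that kind of bottom/top mismatch quickly, a small inspection script along these lines can help (an assumed debugging aid, not part of the original workflow); it needs pycaffe's caffe_pb2 on the PYTHONPATH, and any fields your local caffe.proto does not know (e.g. ceil_mode, see section 5) must be removed first or the parse will fail
from caffe.proto import caffe_pb2
from google.protobuf import text_format

net_param = caffe_pb2.NetParameter()
with open('alexnet.prototxt') as f:
    text_format.Merge(f.read(), net_param)

# print each layer with the blobs it consumes and produces,
# so broken front/back connections stand out immediately
for layer in net_param.layer:
    print('{:<12} {:<16} bottom={} top={}'.format(
        layer.type, layer.name, list(layer.bottom), list(layer.top)))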

3. Dissecting pytorch_to_caffe_alexNet.py

  • This file parses the .pth file, extracts each layer's name together with its weights and biases, and stores them in Caffe's format
  • It patches the functions behind PyTorch's operators so that, during the forward pass, each layer's parameters are automatically written into the Caffe net (a stripped-down sketch of this patching pattern follows the code below)
  • Many layers get wired to the wrong predecessors/successors and have to be forcibly corrected
  • Below are some of the functions I modified
def _dropout(raw, input, p=0.5, training=False, inplace=False):
    # run the original (raw) dropout, always out-of-place so the traced blob is not mutated
    x = raw(input, p, training, False)
    layer_name = log.add_layer(name='dropout')
    log.add_blobs([x], name='dropout_blob')
    # Caffe applies Dropout in place on the preceding fc blob,
    # so bottom and top are forced to the same fc_blob name
    bottom_top_name = 'fc_blob' + layer_name[-1]
    layer = caffe_net.Layer_param(name=layer_name, type='Dropout',
                                  bottom=[bottom_top_name], top=[bottom_top_name])
    layer.param.dropout_param.dropout_ratio = p
    log.cnet.add_layer(layer)
    return x

def _linear(raw, input, weight, bias=None):
    x = raw(input, weight, bias)
    layer_name = log.add_layer(name='fc')
    top_blobs = log.add_blobs([x], name='fc_blob')
    # the first fc layer takes the pooling output; every later one takes the previous fc blob
    bottom_name = 'ave_pool_blob1' if top_blobs[-1][-1] == '1' else 'fc_blob' + str(int(top_blobs[-1][-1]) - 1)
    layer = caffe_net.Layer_param(name=layer_name, type='InnerProduct',
                                  bottom=[bottom_name], top=top_blobs)
    layer.fc_param(x.size()[1], has_bias=bias is not None)
    if bias is not None:
        layer.add_data(weight.cpu().data.numpy(), bias.cpu().data.numpy())
    else:
        layer.add_data(weight.cpu().data.numpy())
    log.cnet.add_layer(layer)
    return x
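  • For context, the tracing mechanism boils down to monkey-patching: each torch function is swapped for a wrapper that first calls the original ("raw") implementation and then records a Caffe layer. A stripped-down illustration of the idea, where the print stands in for the Caffe-recording logic in the real file:
import torch
import torch.nn.functional as F

_raw_dropout = F.dropout  # keep a handle on the original implementation

def _traced_dropout(input, p=0.5, training=False, inplace=False):
    x = _raw_dropout(input, p, training, False)  # run the real op first
    # in pytorch_to_caffe this is where the Dropout layer would be appended to the Caffe net
    print('record Dropout: ratio={}, blob shape={}'.format(p, tuple(x.shape)))
    return x

F.dropout = _traced_dropout  # every later F.dropout call (incl. nn.Dropout modules) is now traced

# one forward pass through the model is then enough to record every patched layer
net = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Dropout(0.5))
net(torch.ones(1, 8))  # prints: record Dropout: ratio=0.5, blob shape=(1, 8)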

4. Running inference with the converted model

  • Test under Caffe with test_alexnet.sh
#!/bin/bash
set -e
./build/examples/cpp_classification/classification.bin \
/home/****/alexnet.prototxt \
/home/****/alexnet.caffemodel \
examples/mnist/mnist_mean.binaryproto \
examples/mnist/mnist_label.txt \
data/mnist/1.png;

The inference results are not very accurate yet, but the whole pipeline runs end to end (a PyTorch/Caffe output comparison for chasing the discrepancy is sketched below).
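To track down where the Caffe outputs drift from the PyTorch ones, one option (an assumed debugging step, not something the post actually did) is to push the same constant input through both nets and compare the final blobs; picking the first blob of the Caffe net as its input is an assumption that depends on the generated prototxt:
import numpy as np
import torch
import caffe

caffe.set_mode_cpu()

# same constant input on both sides
x = np.ones((1, 1, 224, 224), dtype=np.float32)

# PyTorch side
pt_net = torch.load('mnist_alexnet_model.pth', map_location='cpu')
pt_net.eval()
with torch.no_grad():
    pt_out = pt_net(torch.from_numpy(x)).numpy()

# Caffe side
cf_net = caffe.Net('alexnet.prototxt', 'alexnet.caffemodel', caffe.TEST)
input_blob = list(cf_net.blobs.keys())[0]  # assumption: the first blob is the input
cf_net.blobs[input_blob].reshape(*x.shape)
cf_net.blobs[input_blob].data[...] = x
cf_out = list(cf_net.forward().values())[0]

print('max abs diff:', np.abs(pt_out - cf_out).max())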

5. prototxt caveats

  • During inference the result was different on every run; it turned out every convolution layer in the prototxt still carried weight/bias initialization (filler) parameters, so I deleted them all

  • The ceil_mode: false entry under the pooling layers is likewise a redundant field; just delete it (a small cleanup sketch follows)
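  • A throwaway cleanup script along these lines (an assumed helper, not from the post) can strip both in one pass; it edits the prototxt as plain text, since a caffe.proto that lacks ceil_mode could not even parse the file:
import re

with open('alexnet.prototxt') as f:
    text = f.read()

# drop weight_filler { ... } and bias_filler { ... } blocks (they contain no nested braces)
text = re.sub(r'\n\s*(weight_filler|bias_filler)\s*\{[^}]*\}', '', text)
# drop ceil_mode lines under the pooling layers
text = re.sub(r'\n\s*ceil_mode:\s*\w+', '', text)

with open('alexnet_deploy.prototxt', 'w') as f:
    f.write(text)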

This completes the conversion from PyTorch to caffemodel.
It is only a first pass that happens to work; next up is converting YOLOv4, which will no doubt bring more issues. Onward!

