MNN Setup


1. GitHub link: https://github.com/alibaba/MNN/tree/master/tools/converter

2. Tutorials

(1) Usage guide: https://www.bookstack.cn/read/MNN-zh/tools-converter-README_CN.md

(2) Reference blog post: https://blog.csdn.net/qq_37643960/article/details/97028743

(3) The README in the GitHub project also covers this.

Installation:

Build the MNN shared library and the MNNConvert tool with the following commands:

cd /MNN/
mkdir build
cd build
cmake .. -DMNN_BUILD_CONVERTER=true
make -j4

After the build finishes, the build folder contains the benchmark.out and MNNConvert executables.

Test benchmark.out:

./benchmark.out ../benchmark/models/ 10 0

Here 10 means the forward pass is run 10 times and the result is averaged; 0 selects the CPU. (This argument is the compute device used for inference; valid values are 0 (float CPU), 1 (Metal), 3 (float OpenCL), 6 (OpenGL), and 7 (Vulkan).)
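The numeric backend codes are easy to mix up, so a small helper can make the invocation self-documenting. This is an illustrative sketch, not part of MNN; the code-to-backend mapping simply restates the values listed above.

```python
# Mapping of readable backend names to benchmark.out's numeric codes,
# as listed above. The helper builds the argv list but does not run it.
FORWARD_TYPES = {
    "cpu": 0,     # float CPU
    "metal": 1,
    "opencl": 3,  # float OpenCL
    "opengl": 6,
    "vulkan": 7,
}

def benchmark_cmd(model_dir, loops=10, backend="cpu"):
    """Assemble the benchmark.out command line (run it with subprocess if desired)."""
    return ["./benchmark.out", model_dir, str(loops), str(FORWARD_TYPES[backend])]

print(benchmark_cmd("../benchmark/models/", 10, "cpu"))
```

Joining the returned list with spaces reproduces the command shown above.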

Test MNNConvert:

./MNNConvert -h

 

Test run:

Step 1: convert the PyTorch model to an ONNX model

import torch
import torchvision

dummy_input = torch.randn(10, 3, 224, 224, device='cuda')
model = torchvision.models.alexnet(pretrained=True).cuda()

# Providing input and output names sets the display names for values
# within the model's graph. Setting these does not change the semantics
# of the graph; it is only for readability.
#
# The inputs to the network consist of the flat list of inputs (i.e.
# the values you would pass to the forward() method) followed by the
# flat list of parameters. You can partially specify names, i.e. provide
# a list here shorter than the number of inputs to the model, and we will
# only set that subset of names, starting from the beginning.
input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
output_names = [ "output1" ]

torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)
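A note on `range(16)` in the snippet above: AlexNet has 8 learned layers (5 convolutional plus 3 fully connected), each with a weight and a bias, which gives 16 learned parameter tensors. The arithmetic can be checked without torch installed (the layer counts below restate the standard torchvision AlexNet architecture):

```python
# AlexNet's learned layers: 5 conv + 3 fully connected, each contributing
# a weight tensor and a bias tensor, hence 16 parameter tensors in total.
conv_layers = 5
fc_layers = 3
learned_tensors = (conv_layers + fc_layers) * 2  # weight + bias per layer

# Reconstruct the name list exactly as in the export snippet above:
# one graph input followed by the 16 parameter names.
input_names = ["actual_input_1"] + ["learned_%d" % i for i in range(learned_tensors)]
print(len(input_names))  # 1 input + 16 parameters = 17 names
```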

 

Step 2: convert the ONNX model to an MNN model

./MNNConvert -f ONNX --modelFile alexnet.onnx --MNNModel alexnet.mnn --bizCode MNN
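If the conversion is part of a larger pipeline, the same command can be driven from Python. This is a sketch: the flags are the ones shown above, but the path to the MNNConvert binary is an assumption and should point at your build directory.

```python
import subprocess  # used only if you choose to actually run the command

# Sketch: build the MNNConvert command line from Python. The binary path
# ("./MNNConvert") is an assumption; adjust it to your build directory.
def convert_onnx_to_mnn(onnx_path, mnn_path, converter="./MNNConvert"):
    return [converter, "-f", "ONNX",
            "--modelFile", onnx_path,
            "--MNNModel", mnn_path,
            "--bizCode", "MNN"]

cmd = convert_onnx_to_mnn("alexnet.onnx", "alexnet.mnn")
print(" ".join(cmd))
# To actually run it once MNNConvert is built:
# subprocess.run(cmd, check=True)
```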

 

Step 3: use benchmark.out to measure the forward-pass time

./benchmark.out ./models/ 10 0

Note: in /MNN/source/shape/ShapeSqueeze.cpp, line 80, comment out the NanAssert() call (newer versions of the function already have it commented out); otherwise you will hit a reshape error.

