Calling Deep Learning Models from OpenCV


Sources:

https://blog.csdn.net/lovelyaiq/article/details/79929393

https://blog.csdn.net/qq_29462849/article/details/85272575

  OpenCV supports loading deep learning models from frameworks such as Caffe, TensorFlow, and Darknet since version 3.3. For usage details, see the official documentation:
https://docs.opencv.org/3.4.1/d6/d0f/group__dnn.html
  Networks OpenCV currently supports include GoogLeNet, ResNet-50, MobileNet-SSD from Caffe, and others; see https://github.com/opencv/opencv/wiki/ChangeLog for a detailed description of the dnn module.
  OpenCV also provides usage examples for the dnn module on GitHub: https://github.com/opencv/opencv/tree/3.4.1/samples/dnn
  Here we use only OpenCV's Python interface, walking through YOLOv2 (OpenCV does not yet support YOLOv3; hopefully the next release will) and the ssd_inception_v2_coco model trained with TensorFlow.
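Before running the examples, it is worth confirming that the installed OpenCV build actually includes the dnn module. A quick sanity check (not from the original post):

import cv2

print(cv2.__version__)       # the dnn module requires OpenCV >= 3.3
print(hasattr(cv2, 'dnn'))   # True if this build includes the dnn module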

The YOLOv2 model:

import cv2
import numpy as np

cap = cv2.VideoCapture('solidYellowLeft.mp4')

def read_cfg_model():
    model_path = '/home/scyang/TiRan/WorkSpace/others/darknet/cfg/yolov2.weights'
    cfg_path = '/home/scyang/TiRan/WorkSpace/others/darknet/cfg/yolov2.cfg'
    yolo_net = cv2.dnn.readNet(model_path, cfg_path, 'darknet')
    while True:
        flag, img = cap.read()
        if flag:
            rows, cols = img.shape[:2]  # frame size (missing in the original listing)
            yolo_net.setInput(cv2.dnn.blobFromImage(img, 1.0 / 127.5, (416, 416),
                                                    (127.5, 127.5, 127.5), False, False))
            cvOut = yolo_net.forward()
            for detection in cvOut:
                confidence = np.max(detection[5:])
                if confidence > 0:
                    classIndex = np.argwhere(detection == confidence)[0][0] - 5
                    x_center = detection[0] * cols
                    y_center = detection[1] * rows
                    width = detection[2] * cols
                    height = detection[3] * rows
                    start = (int(x_center - width / 2), int(y_center - height / 2))
                    end = (int(x_center + width / 2), int(y_center + height / 2))
                    cv2.rectangle(img, start, end, (23, 230, 210), thickness=2)
        else:
            break
        cv2.imshow('show', img)
        cv2.waitKey(10)

read_cfg_model()  # the original listing defined the function but never called it

  A note on the contents of cvOut: the first four values of each detection vector describe the bounding box, the fifth is the background (objectness) score, and from the sixth onward come the per-class confidences that tell how confident the detection is and which class it belongs to.
  The two lines below therefore read, starting at index 5, the detection's confidence and its class:

confidence = np.max(detection[5:])
classIndex = np.argwhere(detection == confidence)[0][0] - 5
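The same decoding can be done a little more directly with np.argmax. A small sketch using a made-up detection vector (not from the original post):

import numpy as np

# Layout of one YOLO detection vector: [cx, cy, w, h, background, class scores...]
detection = np.array([0.5, 0.5, 0.2, 0.3, 0.0, 0.1, 0.7, 0.2])

classIndex = int(np.argmax(detection[5:]))     # index into the class-name list
confidence = float(detection[5 + classIndex])  # that class's confidence
print(classIndex, confidence)                  # -> 1 0.7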

A screenshot of the result:
[screenshot omitted]

The TensorFlow model

import cv2

cap = cv2.VideoCapture('solidYellowLeft.mp4')  # video source (not shown in the original listing)

cvNet = cv2.dnn.readNetFromTensorflow('model/ssd_inception_v2_coco_2017_11_17.pb',
                                      'model/ssd_inception_v2_coco_2017_11_17.pbtxt')
while True:
    flag, img = cap.read()
    if flag:
        rows = img.shape[0]
        cols = img.shape[1]
        width = height = 300
        # Resize to height 300 while keeping the aspect ratio, then crop a 300x300 patch.
        image = cv2.resize(img, (int(cols * height / rows), width))
        img = image[0:height, image.shape[1] - width:image.shape[1]]
        cvNet.setInput(cv2.dnn.blobFromImage(img, 1.0 / 127.5, (300, 300),
                                             (127.5, 127.5, 127.5), swapRB=True, crop=False))
        cvOut = cvNet.forward()
        # The network produces an output blob of shape 1x1xNx7, where N is the number
        # of detections and each detection is a vector of
        # [batchId, classId, confidence, left, top, right, bottom].
        for detection in cvOut[0, 0, :, :]:
            score = float(detection[2])
            if score > 0.3:
                rows = cols = 300  # the crop fed to the network is 300x300
                left = detection[3] * cols
                top = detection[4] * rows
                right = detection[5] * cols
                bottom = detection[6] * rows
                cv2.rectangle(img, (int(left), int(top)), (int(right), int(bottom)),
                              (23, 230, 210), thickness=2)
        cv2.imshow('img', img)
        cv2.waitKey(10)
    else:
        break

The effect is shown below.
[screenshot omitted]
  Usage is much the same as in the YOLO example, and judging from the final output, the ssd_inception_v2 model performs better than YOLOv2.
Note: for a detailed introduction to blobFromImage and how to use it, see this excellent post: https://www.pyimagesearch.com/2017/11/06/deep-learning-opencvs-blobfromimage-works/. No need to repeat it here; learn to stand on the shoulders of giants.
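In brief, blobFromImage resizes the image, subtracts the mean, applies the scale factor, and repacks the result as an NCHW blob, optionally swapping R and B. A minimal sketch of that equivalence, assuming a local test image frame.jpg (a placeholder name); with a per-channel-equal mean, the manual pipeline below should reproduce the blob:

import cv2
import numpy as np

img = cv2.imread('frame.jpg')

blob = cv2.dnn.blobFromImage(img, scalefactor=1.0 / 127.5, size=(300, 300),
                             mean=(127.5, 127.5, 127.5), swapRB=True, crop=False)

# Manual equivalent: resize, swap BGR -> RGB, subtract mean, scale, HWC -> NCHW.
x = cv2.resize(img, (300, 300)).astype(np.float32)
x = x[:, :, ::-1]                    # swapRB
x = (x - 127.5) * (1.0 / 127.5)      # mean subtraction, then scaling
manual = x.transpose(2, 0, 1)[np.newaxis, ...]

print(blob.shape)                    # (1, 3, 300, 300)
print(np.allclose(blob, manual, atol=1e-5))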

OpenCV 4.0 Mask R-CNN instance segmentation example, with C++/Python implementations


OpenCV 4.0-Alpha was released a few days ago, and the newly added Mask R-CNN instance segmentation model is one of the highlights of the release.

Instance segmentation means detecting the objects in an image and segmenting each of them at the pixel level.

Satya Mallick, who runs learnopencv.com, published a post detailing how to use the new OpenCV release to load a Mask R-CNN model from the TensorFlow Object Detection Model Zoo and run object detection with instance segmentation. The C++ and Python code samples are both open source.

First, the result video released by the author:

As the video shows, inference on a 2.5 GHz i7 takes roughly a few hundred to 2000 milliseconds per frame.

The TensorFlow Object Detection Model Zoo currently offers four Mask R-CNN models with different backbones (InceptionV2, ResNet50, ResNet101, and Inception-ResNetV2), all trained on the MSCOCO dataset. The InceptionV2 variant is the fastest of the four, and it is the one used in Satya Mallick's post.

Mask R-CNN network architecture

[architecture diagram omitted]

The OpenCV workflow for Mask R-CNN object detection and instance segmentation:

1) Download the model.

Address:

http://download.tensorflow.org/models/object_detection/

The four available models:

[model list image omitted]

2) Initialize parameters.

Set the confidence threshold for detections and the binarization threshold for the masks (both appear in the sketch after step 3 below).

3) Load the Mask R-CNN model, the class names, and the visualization colors.

mscoco_labels.names holds the class names of all annotated MSCOCO objects.

colors.txt holds the color used to mark an instance of each class on the image.

frozen_inference_graph.pb holds the model weights.

mask_rcnn_inception_v2_coco_2018_01_28.pbtxt is the text graph file that tells OpenCV how to load the model weights.

OpenCV provides a tool that can extract the text graph file from given model weights. See:

https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API
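For reference, the Mask R-CNN helper script is invoked roughly as follows (script name as listed on that wiki page; the file paths here are assumptions): python tf_text_graph_mask_rcnn.py --input frozen_inference_graph.pb --output mask_rcnn_inception_v2_coco_2018_01_28.pbtxt --config mask_rcnn_inception_v2_coco_2018_01_28.config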

 

OpenCV supports CPU and OpenCL inference, but OpenCL inference currently works only on Intel GPUs, so Satya set the target to CPU mode (cv.dnn.DNN_TARGET_CPU).
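A minimal Python sketch of steps 2) and 3), assuming the four files above sit in the working directory (the paths are placeholders):

import cv2 as cv

confThreshold = 0.5   # detection confidence threshold
maskThreshold = 0.3   # mask binarization threshold

# Class names and per-class drawing colors.
classes = open('mscoco_labels.names').read().strip().split('\n')
colors = [[float(v) for v in line.split()]
          for line in open('colors.txt').read().strip().split('\n')]

# Weights plus text graph, then CPU inference as in Satya's post.
net = cv.dnn.readNetFromTensorflow('frozen_inference_graph.pb',
                                   'mask_rcnn_inception_v2_coco_2018_01_28.pbtxt')
net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)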

4) Read input from an image, a video file, or a camera.

5) Process each frame.

The main steps are shown in the figure:

[flow diagram omitted]

6) Extract the bounding box and mask of each detected object and draw the results, as in the sketch below.
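A sketch of steps 4) through 6), reusing net, classes, colors, and the thresholds from the sketch above; the input file name is a placeholder, and clipping boxes to the image border is omitted for brevity:

import numpy as np

frame = cv.imread('test.jpg')            # or a frame from cv.VideoCapture
blob = cv.dnn.blobFromImage(frame, swapRB=True, crop=False)
net.setInput(blob)
boxes, masks = net.forward(['detection_out_final', 'detection_masks'])

h, w = frame.shape[:2]
for i in range(boxes.shape[2]):
    box = boxes[0, 0, i]                 # [batchId, classId, score, left, top, right, bottom]
    score = float(box[2])
    if score > confThreshold:
        classId = int(box[1])
        left, top, right, bottom = (box[3:7] * [w, h, w, h]).astype(int)
        cv.rectangle(frame, (left, top), (right, bottom), (255, 178, 50), 3)
        # Resize this instance's low-resolution mask to the box, binarize, and tint.
        mask = masks[i, classId]
        mask = cv.resize(mask, (right - left + 1, bottom - top + 1)) > maskThreshold
        roi = frame[top:bottom + 1, left:right + 1]
        color = np.array(colors[classId % len(colors)])[:3]
        roi[mask] = (0.3 * color + 0.7 * roi[mask]).astype(np.uint8)

cv.imshow('Mask R-CNN', frame)
cv.waitKey(0)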

 

C++/Python code download:

https://github.com/spmallick/learnopencv/tree/master/Mask-RCNN

Original post:

https://www.learnopencv.com/deep-learning-based-object-detection-and-instance-segmentation-using-mask-r-cnn-in-opencv-python-c/


 

Real-time detection with Mask R-CNN from C++ (OpenCV 4.0)

Introduction

Recent OpenCV versions already support several kinds of models trained in Caffe, TensorFlow, and PyTorch, including classification and object detection models (SSD, YOLO). For TensorFlow, OpenCV interoperates with the TensorFlow Object Detection API: you can train a model through that API and then load it with OpenCV, which makes it possible to move a Python environment over to C++.

A later post will cover the TensorFlow Object Detection API in detail.

Data preparation and environment setup

We use frozen_inference_graph.pb from mask_rcnn_inception_v2_coco_2018_01_28, which can be found in the TensorFlow Object Detection API, together with the matching mask_rcnn_inception_v2_coco_2018_01_28.pbtxt, plus colors.txt and mscoco_labels.names.

OpenCV must be the newly released 4.0: that version supports Mask R-CNN and Faster R-CNN, while older versions do not. Note that when configuring the OpenCV 4.0 environment, the include directory no longer contains an opencv folder, only opencv2; this is normal.

With that out of the way, here is the full source. The code runs real-time detection on a USB camera; adapting it to single images takes only small changes.


#include <fstream>
#include <sstream>
#include <iostream>
#include <string.h>

#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>


using namespace cv;
using namespace dnn;
using namespace std;

// Initialize the parameters
float confThreshold = 0.5; // Confidence threshold
float maskThreshold = 0.3; // Mask threshold

vector<string> classes;
vector<Scalar> colors;

// Draw the predicted bounding box
void drawBox(Mat& frame, int classId, float conf, Rect box, Mat& objectMask);

// Postprocess the neural network's output for each frame
void postprocess(Mat& frame, const vector<Mat>& outs);

int main()
{
	// Load names of classes
	string classesFile = "./mask_rcnn_inception_v2_coco_2018_01_28/mscoco_labels.names";
	ifstream ifs(classesFile.c_str());
	string line;
	while (getline(ifs, line)) classes.push_back(line);

	// Load the colors
	string colorsFile = "./mask_rcnn_inception_v2_coco_2018_01_28/colors.txt";
	ifstream colorFptr(colorsFile.c_str());
	while (getline(colorFptr, line)) 
	{
		char* pEnd;
		double r, g, b;
		r = strtod(line.c_str(), &pEnd);
		g = strtod(pEnd, &pEnd);  // advance pEnd past g (the original re-parsed g into b)
		b = strtod(pEnd, NULL);
		colors.push_back(Scalar(r, g, b, 255.0));
	}

	// Give the configuration and weight files for the model
	String textGraph = "./mask_rcnn_inception_v2_coco_2018_01_28/mask_rcnn_inception_v2_coco_2018_01_28.pbtxt";
	String modelWeights = "./mask_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb";

	// Load the network
	Net net = readNetFromTensorflow(modelWeights, textGraph);
	net.setPreferableBackend(DNN_BACKEND_OPENCV);
	net.setPreferableTarget(DNN_TARGET_CPU);

	// Open a video file or an image file or a camera stream.
	string str, outputFile;
	VideoCapture cap(0); // change the camera index to match your device
	//VideoWriter video;
	Mat frame, blob;

	// Create a window
	static const string kWinName = "Deep learning object detection in OpenCV";
	namedWindow(kWinName, WINDOW_NORMAL);

	// Process frames.
	while (waitKey(1) < 0)
	{
		// get frame from the video
		cap >> frame;

		// Stop the program if reached end of video
		if (frame.empty()) 
		{
			cout << "Done processing !!!" << endl;
			cout << "Output file is stored as " << outputFile << endl;
			waitKey(3000);
			break;
		}
		// Create a 4D blob from a frame.
		blobFromImage(frame, blob, 1.0, Size(frame.cols, frame.rows), Scalar(), true, false);
		//blobFromImage(frame, blob);

		//Sets the input to the network
		net.setInput(blob);

		// Runs the forward pass to get output from the output layers
		std::vector<String> outNames(2);
		outNames[0] = "detection_out_final";
		outNames[1] = "detection_masks";
		vector<Mat> outs;
		net.forward(outs, outNames);

		// Extract the bounding box and mask for each of the detected objects
		postprocess(frame, outs);

		// Put efficiency information. The function getPerfProfile returns the overall time for inference(t) and the timings for each of the layers(in layersTimes)
		vector<double> layersTimes;
		double freq = getTickFrequency() / 1000;
		double t = net.getPerfProfile(layersTimes) / freq;
		string label = format("Mask-RCNN on 2.5 GHz Intel Core i7 CPU, Inference time for a frame : %0.0f ms", t);
		putText(frame, label, Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 0));

		// Write the frame with the detection boxes
		Mat detectedFrame;
		frame.convertTo(detectedFrame, CV_8U);

		imshow(kWinName, frame);

	}
	cap.release();
	return 0;
}

// For each frame, extract the bounding box and mask for each detected object
void postprocess(Mat& frame, const vector<Mat>& outs)
{
	Mat outDetections = outs[0];
	Mat outMasks = outs[1];

	// Output size of masks is NxCxHxW where
	// N - number of detected boxes
	// C - number of classes (excluding background)
	// HxW - segmentation shape
	const int numDetections = outDetections.size[2];
	const int numClasses = outMasks.size[1];

	outDetections = outDetections.reshape(1, outDetections.total() / 7);
	for (int i = 0; i < numDetections; ++i)
	{
		float score = outDetections.at<float>(i, 2);
		if (score > confThreshold)
		{
			// Extract the bounding box
			int classId = static_cast<int>(outDetections.at<float>(i, 1));
			int left = static_cast<int>(frame.cols * outDetections.at<float>(i, 3));
			int top = static_cast<int>(frame.rows * outDetections.at<float>(i, 4));
			int right = static_cast<int>(frame.cols * outDetections.at<float>(i, 5));
			int bottom = static_cast<int>(frame.rows * outDetections.at<float>(i, 6));

			left = max(0, min(left, frame.cols - 1));
			top = max(0, min(top, frame.rows - 1));
			right = max(0, min(right, frame.cols - 1));
			bottom = max(0, min(bottom, frame.rows - 1));
			Rect box = Rect(left, top, right - left + 1, bottom - top + 1);

			// Extract the mask for the object
			Mat objectMask(outMasks.size[2], outMasks.size[3], CV_32F, outMasks.ptr<float>(i, classId));

			// Draw bounding box, colorize and show the mask on the image
			drawBox(frame, classId, score, box, objectMask);

		}
	}
}

// Draw the predicted bounding box, colorize and show the mask on the image
void drawBox(Mat& frame, int classId, float conf, Rect box, Mat& objectMask)
{
	//Draw a rectangle displaying the bounding box
	rectangle(frame, Point(box.x, box.y), Point(box.x + box.width, box.y + box.height), Scalar(255, 178, 50), 3);

	//Get the label for the class name and its confidence
	string label = format("%.2f", conf);
	if (!classes.empty())
	{
		CV_Assert(classId < (int)classes.size());
		label = classes[classId] + ":" + label;
	}

	//Display the label at the top of the bounding box
	int baseLine;
	Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
	box.y = max(box.y, labelSize.height);
	rectangle(frame, Point(box.x, box.y - round(1.5*labelSize.height)), Point(box.x + round(1.5*labelSize.width), box.y + baseLine), Scalar(255, 255, 255), FILLED);
	putText(frame, label, Point(box.x, box.y), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(0, 0, 0), 1);

	Scalar color = colors[classId%colors.size()];

	// Resize the mask, threshold, color and apply it on the image
	resize(objectMask, objectMask, Size(box.width, box.height));
	Mat mask = (objectMask > maskThreshold);
	Mat coloredRoi = (0.3 * color + 0.7 * frame(box));
	coloredRoi.convertTo(coloredRoi, CV_8UC3);

	// Draw the contours on the image
	vector<Mat> contours;
	Mat hierarchy;
	mask.convertTo(mask, CV_8U);
	findContours(mask, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_SIMPLE);
	drawContours(coloredRoi, contours, -1, color, 5, LINE_8, hierarchy, 100);
	coloredRoi.copyTo(frame(box), mask);

}


Results

[screenshots omitted]

Detection is quite slow, though: on an i7-8700K with a GTX 1060 it takes about 1 s per frame, which falls short of real-time requirements.

Data

All the data used in this post can be downloaded here: OpenCV Mask R-CNN data

 

