TensorFlow models - Object Detection API installation


The TensorFlow models repository is extremely useful: it not only implements a wide range of models, it also ships the authors' pretrained models together with instructions for using them. This post uses object detection as an example to show how to run a pretrained model.

 

First of all, I still recommend reading the official instructions, because TensorFlow versioning is messy and online tutorials each target different versions, so there are plenty of pitfalls.

Now for the main content. This post targets the Windows operating system.

 

Step 1: Download the models repository and unzip it

https://github.com/tensorflow/models
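If you have git installed, cloning the repository works just as well as downloading and unzipping the archive:

git clone https://github.com/tensorflow/models.git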

 

Step 2: Install protoc

Download it from https://github.com/protocolbuffers/protobuf/releases and choose the build that matches your system.

After downloading, copy the archive into the folder that contains models and unzip it; this produces bin and include directories.

Copy protoc.exe from bin into the C:\Windows\System32 folder.

Run protoc from cmd; if it prints its usage/help text instead of an error, the installation succeeded.
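As a quick extra check that protoc is found on the PATH, you can also print its version (the exact number depends on the release you downloaded):

protoc --version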

 

Step 3: Compile the proto files

Open Windows PowerShell in the models/research directory

and run the command

protoc object_detection/protos/*.proto --python_out=.

When it finishes, check the object_detection/protos folder: if every .proto file now has a corresponding *_pb2.py file next to it, the compilation succeeded.
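As an extra, optional sanity check you can try importing one of the generated modules. This is only a sketch; it assumes you run it from models/research and that pipeline.proto was among the compiled files:

python -c "from object_detection.protos import pipeline_pb2; print('protos compiled OK')"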

 

Step 4: Add environment variables

Add these two directories:

...\models\research
...\models\research\slim

As for how to add them, you can set an environment variable the usual way; the variable the official docs use is PYTHONPATH. An example is shown below.

Some online tutorials add a .pth file instead; that did not work when I tried it.
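For example, to set PYTHONPATH only for the current PowerShell session (keeping the same ... placeholder used above for wherever you unpacked the repository):

$env:PYTHONPATH = "$env:PYTHONPATH;...\models\research;...\models\research\slim"

To make the change permanent, set PYTHONPATH through the usual System Properties / environment-variables dialog (or with setx) instead.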

 

Step 5: Test whether the API is installed correctly

From models/research, run

python object_detection/builders/model_builder_test.py

If the tests all pass, the installation succeeded.

 

Step 6: Run a pretrained model

Run the object_detection/object_detection_tutorial.ipynb notebook (see my blog for how to run .ipynb files).
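If Jupyter is installed, one way is to launch the notebook server from models/research/object_detection and open the tutorial directly:

jupyter notebook object_detection_tutorial.ipynb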

 

 

Or write the script yourself. The version below is meant to live in and be run from models/research/object_detection, which is why it appends ".." and "../.." to sys.path and uses relative paths such as data and test_images:

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile

from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image


# # This is needed to display the images.
# %matplotlib inline

# These appends are needed because the script lives in the object_detection folder:
# ".." is models/research and "../.." is the repository root (the folder containing research/),
# which makes the "research.object_detection" imports below resolve.
sys.path.append("..")
sys.path.append("../..")
print(sys.path)

# from utils import label_map_util
# from utils import visualization_utils as vis_util
from research.object_detection.utils import label_map_util
from research.object_detection.utils import visualization_utils as vis_util

# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that are used to add the correct label to each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

NUM_CLASSES = 90

# Download the model. These lines are commented out because the .tar.gz is assumed to
# have already been downloaded into the working directory; uncomment them to download it here.
# opener = urllib.request.URLopener()
# print(DOWNLOAD_BASE + MODEL_FILE)
# opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)

# Extract the frozen inference graph from the archive.
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
    file_name = os.path.basename(file.name)
    if 'frozen_inference_graph.pb' in file_name:
        tar_file.extract(file, os.getcwd())

# Load a (frozen) Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
# Loading label map
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES,
                                                            use_display_name=True)
category_index = label_map_util.create_category_index(categories)


# Helper code
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)


# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your own images, just add their paths to TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3)]

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)

with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        # Define the input and output tensors for detection_graph.
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        # Each box represents a part of the image where a particular object was detected.
        detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        # Each score represents the confidence level for the corresponding detected object.
        # Scores are shown on the result image, together with the class label.
        detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
        detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
        num_detections = detection_graph.get_tensor_by_name('num_detections:0')
        for image_path in TEST_IMAGE_PATHS:
            image = Image.open(image_path)
            # the array based representation of the image will be used later in order to prepare the
            # result image with boxes and labels on it.
            image_np = load_image_into_numpy_array(image)
            # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
            image_np_expanded = np.expand_dims(image_np, axis=0)
            # Run the actual detection, using the tensors fetched once above, outside the loop.
            (boxes, scores, classes, _) = sess.run(
                [detection_boxes, detection_scores, detection_classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            # Visualization of the results of a detection.
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                use_normalized_coordinates=True,
                line_thickness=8)
            # Save the annotated image next to the original (e.g. test_images/image1.jpgob.jpg).
            Image.fromarray(image_np).save('%sob.jpg' % image_path)
            plt.figure(figsize=IMAGE_SIZE)
            plt.imshow(image_np)
            plt.show()


References:

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md  (official installation guide)

https://www.jianshu.com/p/6f3ea0d82fae  (Object detection with the TensorFlow Object Detection API, part 1: installation)

https://www.jb51.net/article/162968.htm  (steps for installing the TensorFlow Object Detection API on Windows 10)

https://www.cnblogs.com/2dogslife/p/10264325.html  (Tensorflow Object Detection API installation)

https://blog.csdn.net/qq_38593211/article/details/82822162  (detailed TensorFlow Object Detection API installation tutorial and pitfalls)

https://blog.csdn.net/jiangsujiangjiang/article/details/93401790?depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromBaidu-2&utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromBaidu-2

