Training Your Own Object Detector with dlib: Hand Detection


In an earlier post we installed dlib on Linux (http://www.cnblogs.com/take-fetter/p/8318602.html) and successfully ran the face detection example.

Today we will learn how to use dlib to build a simple object detector of our own, using hand detection as the example (special thanks to https://handmap.github.io/dlib-classifier-for-object-detection/).

  • Introducing and installing imglab

imglab is a tool that ships with dlib, located in the tools directory of the dlib GitHub project. It is a simple graphical tool for annotating images with object bounding boxes and, optionally, part locations. In general, you use it whenever you need to train an object detector (a face detector, for example), because it makes it easy to create the required training dataset.

(Source at https://github.com/davisking/dlib/tree/master/tools/imglab — if you want to use it, I suggest downloading the whole dlib project and installing dlib first, then building this tool.)

   To build it, run the following in order:

    cd dlib/tools/imglab
    mkdir build
    cd build
    cmake ..
    cmake --build . --config Release

   I do not recommend the sudo make install command mentioned in readme.txt; after running it I hit an error where images would not display.

  • Creating the dataset (you can take your own photos, or use a public hand dataset such as http://www.robots.ox.ac.uk/~vgg/data/hands/)

  In the build directory produced by cmake (on Windows, the Release directory), do the following.

  Run

./imglab -c mydataset.xml <image-directory>

  This creates mydataset.xml together with the image_metadata_stylesheet.xsl stylesheet.

  Then run

./imglab mydataset.xml

  A window opens in which you annotate each image: type the label into Next Label, then draw a box around the object in every image (hold shift, then left-click and drag).

  Once every image is annotated, click save in the Files menu and close the window. Opening mydataset.xml now shows the image annotations, as in the figure:

  
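For reference, the dataset file imglab writes has roughly this shape (the file name, label, and coordinates below are illustrative, not taken from my run):

```xml
<?xml version='1.0' encoding='ISO-8859-1'?>
<?xml-stylesheet type='text/xsl' href='image_metadata_stylesheet.xsl'?>
<dataset>
    <name>imglab dataset</name>
    <images>
        <image file='hand_001.jpg'>
            <box top='90' left='70' width='120' height='150'>
                <label>hand</label>
            </box>
        </image>
    </images>
</dataset>
```

Each image element lists the boxes you drew, and the xml-stylesheet line is why image_metadata_stylesheet.xsl must sit next to the XML file.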

Next, put mydataset.xml and image_metadata_stylesheet.xsl into the image directory and run the following code to train. (You may hit errors about the image paths; if so, check the image locations recorded in mydataset.xml.)
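If the training script complains about image paths, a quick stdlib check can list the entries in mydataset.xml that do not resolve to real files. This is my own helper sketch, not part of dlib; it assumes relative paths in the XML are resolved against the XML file's directory:

```python
# Sketch: report image files referenced by an imglab dataset XML that are
# missing on disk.  (My own helper, not part of dlib.)
import os
import xml.etree.ElementTree as ET

def missing_images(xml_path):
    """Return the 'file' attributes of <image> entries that do not exist,
    resolving relative paths against the XML file's directory."""
    base = os.path.dirname(os.path.abspath(xml_path))
    missing = []
    for image in ET.parse(xml_path).getroot().iter("image"):
        path = image.get("file")
        if not os.path.isabs(path):
            path = os.path.join(base, path)
        if not os.path.exists(path):
            missing.append(image.get("file"))
    return missing
```

Running missing_images("mydataset.xml") before training tells you exactly which entries to fix.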

The code is adapted from dlib's python_examples; if you want to try it yourself, I suggest reading the original on GitHub first (https://github.com/davisking/dlib/blob/master/python_examples/train_object_detector.py).

The program requires scikit-image; install it with pip install scikit-image.

import os
import sys
import glob

import dlib
from skimage import io


# In this example we train a hand detector from the dataset we labeled with
# imglab above.  This means you need to supply the path to the image folder
# as a command line argument so the script knows where it is.
if len(sys.argv) != 2:
    print(
        "Give the path to your image folder as the argument to this "
        "program, for example:\n"
        "    ./train_object_detector.py ../hands")
    exit()
faces_folder = sys.argv[1]


# Now let's do the training.  The train_simple_object_detector() function has a
# bunch of options, all of which come with reasonable default values.  The next
# few lines go over some of these options.
options = dlib.simple_object_detector_training_options()
# Since a mirrored hand is still a valid hand (left/right flips of the
# training images are plausible examples), we can tell the trainer to train
# a symmetric detector.  This helps it get the most value out of the
# training data.
options.add_left_right_image_flips = True
# The trainer is a kind of support vector machine and therefore has the usual
# SVM C parameter.  In general, a bigger C encourages it to fit the training
# data better but might lead to overfitting.  You must find the best C value
# empirically by checking how well the trained detector works on a test set of
# images you haven't trained on.  Don't just leave the value set at 5.  Try a
# few different C values and see what works best for your data.
options.C = 5
# Tell the code how many CPU cores your computer has for the fastest training.
options.num_threads = 4
options.be_verbose = True


training_xml_path = os.path.join(faces_folder, "mydataset.xml")
# This function does the actual training.  It will save the final detector
# to detector.svm.  The input is the XML file created with imglab above,
# which lists the training images and the positions of the object boxes.
dlib.train_simple_object_detector(training_xml_path, "detector.svm", options)

Now wait for training to finish. (A caveat: don't make the dataset too large, or you may run out of memory and the OS will kill the process.) Many of the options parameters need to be tuned for your own situation.

When training completes, a detector.svm file is produced. The following program runs a simple test with it:

import imutils
import dlib
import cv2

# Load the detector trained above.
detector = dlib.simple_object_detector("detector.svm")

image = cv2.imread('test0.jpg')
image = imutils.resize(image, width=500)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

rects = detector(gray, 1)

for (k, d) in enumerate(rects):
    print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
        k, d.left(), d.top(), d.right(), d.bottom()))
    cv2.rectangle(image, (d.left(), d.top()), (d.right(), d.bottom()), (0, 255, 0), 2)

cv2.imshow("Output", image)
cv2.waitKey(0)
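OpenCV slicing and many imutils helpers expect boxes as (x, y, w, h) rather than dlib's (left, top, right, bottom). A tiny helper converts between the two, which is handy if you want to crop the detected hand. (This is my own sketch, similar in spirit to imutils' face_utils.rect_to_bb, not a dlib API.)

```python
# Convert a dlib-style box (left, top, right, bottom) to OpenCV's
# (x, y, w, h) convention.  A convenience helper of my own, not a dlib API.
def rect_to_bb(left, top, right, bottom):
    return (left, top, right - left, bottom - top)

# Example with the detection loop above:
#   (x, y, w, h) = rect_to_bb(d.left(), d.top(), d.right(), d.bottom())
#   hand_crop = image[y:y + h, x:x + w]
```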

The result:

As you can see, the hand is detected.

 

 

Postscript:

  1. Training takes a long time; please be patient.
  2. Special thanks again to Nathan Glover and his tutorial at https://handmap.github.io/dlib-classifier-for-object-detection/
  3. If you need a very accurate detector, I don't recommend this approach: the svm file we end up with falls far short of dlib's own face detector.
  4. I find imglab's feature set quite limited; it is not well suited to large-scale labeling work that demands high accuracy (though it works well for face detection).
  5. For object detection that requires high accuracy and precision, the TensorFlow Object Detection API is probably a better fit (https://github.com/tensorflow/models/tree/master/research/object_detection).

 

