Ad image analysis — labelImg + TensorFlow Object Detection API + TensorBoard visualization


Background

During my internship I was given my graduation project topic by the company. Our team lead crawled the data and stored it in a database, and I was responsible for the clustering analysis.

After spending about 24 hours downloading all the images, I picked 200 of them (I may add more after a first full run) and labeled them with labelImg to serve as the training set.

Prerequisites

1. Install and configure TensorFlow first

Reference link

2. Get the TensorFlow models source code

Git repository: https://github.com/tensorflow/models

Install pillow, jupyter, matplotlib and lxml via pip:

pip install pillow jupyter matplotlib lxml

 

3. Install pycocotools (needed by the TensorBoard/evaluation tooling):

Note: pycocotools has no official Windows support, so it cannot be installed with a plain pip install; fortunately a GitHub contributor provides a Windows port. Install it with:

pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI

 

This command may fail with an error.

My workaround:

Download https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI directly,

unzip it locally to F:\coco\cocoapi-master,

then open F:\coco\cocoapi-master\PythonAPI\setup.py in Notepad++ and modify it as shown below.

 

 

 然后再anaconda Promat中 cd到這個目錄,並執行

python setup.py install

 

 

That completes the pycocotools installation.

 

4. Compile the Protobuf definitions to generate the .py files

First install Google's protobuf compiler: download protoc-3.4.0-win32.zip.
Open a cmd window, cd into the models/research/ directory (older versions of the repo had no research directory), and run:

protoc object_detection/protos/*.proto --python_out=.
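One Windows pitfall: cmd.exe does not expand the *.proto wildcard for protoc, so the command above can fail with a "no such file" style error. A small Python loop works around this; `protoc_commands` is a hypothetical helper of mine, and the script assumes it is run from models/research with protoc on the PATH:

```python
import glob
import subprocess


def protoc_commands(proto_files):
    # Build one protoc invocation per file, since cmd.exe does not expand *.proto.
    return [['protoc', f.replace('\\', '/'), '--python_out=.'] for f in proto_files]


if __name__ == '__main__':
    # Run from models/research so the relative paths match.
    for cmd in protoc_commands(glob.glob('object_detection/protos/*.proto')):
        subprocess.check_call(cmd)
```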

 

然后在F:\Tensorflow\models\research\object_detection\protos中會生成一大堆python文件 如下圖所示:

 

 

最后測試一下:

python object_detection/builders/model_builder_test.py

 

Note: if you get No module named 'object_detection', the working directory has not been added to the PYTHONPATH environment variable.

 

 

Once it has been added, everything works.

Training on your own dataset

1. Annotate your sample images

Here we use the labelImg tool:

LabelImg is a graphical image-annotation tool.

It is written in Python and uses Qt for its graphical interface.

Annotations are saved as XML files in the PASCAL VOC format used by ImageNet.

Git repository: https://github.com/tzutalin/labelImg

 

Example:

The XML file produced for an annotated image looks like this:

<annotation>
    <folder>Train</folder>
    <filename>ImageSets101.jpg</filename>
    <path>F:\Image\download\Train\ImageSets101.jpg</path>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>1000</width>
        <height>1000</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>glasses</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>24</xmin>
            <ymin>301</ymin>
            <xmax>960</xmax>
            <ymax>716</ymax>
        </bndbox>
    </object>
</annotation>
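A quick way to sanity-check annotations before converting them is to read the XML back with the standard library. A minimal sketch, using a trimmed copy of the annotation above embedded as a string:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the annotation above, embedded for a self-contained demo.
xml = """<annotation>
    <filename>ImageSets101.jpg</filename>
    <size><width>1000</width><height>1000</height><depth>3</depth></size>
    <object>
        <name>glasses</name>
        <bndbox>
            <xmin>24</xmin><ymin>301</ymin><xmax>960</xmax><ymax>716</ymax>
        </bndbox>
    </object>
</annotation>"""

root = ET.fromstring(xml)
w = int(root.find('size/width').text)
h = int(root.find('size/height').text)
for obj in root.findall('object'):
    name = obj.find('name').text
    box = [int(obj.find('bndbox/' + k).text)
           for k in ('xmin', 'ymin', 'xmax', 'ymax')]
    # A quick consistency check: every box must lie inside the image.
    assert 0 <= box[0] < box[2] <= w and 0 <= box[1] < box[3] <= h
    print(name, box)
# prints: glasses [24, 301, 960, 716]
```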

 

 

2. After annotation, convert the XML files to CSV with xml_to_csv.py, producing a train.csv training set and an eval.csv validation set.

The code:

import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET


def xml_to_csv(path):
    xml_list = []
    # read every annotation file
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            value = (root.find('filename').text,
                     int(root.find('size')[0].text),
                     int(root.find('size')[1].text),
                     member[0].text,
                     int(member[4][0].text),
                     int(member[4][1].text),
                     int(member[4][2].text),
                     int(member[4][3].text)
                     )
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']

    # split all rows into a training set and a validation set
    # (the two slices share the same boundary so no row is skipped)
    train_list = xml_list[0: int(len(xml_list) * 0.67)]
    eval_list = xml_list[int(len(xml_list) * 0.67):]

    # save as CSV
    train_df = pd.DataFrame(train_list, columns=column_name)
    eval_df = pd.DataFrame(eval_list, columns=column_name)
    train_df.to_csv('F:/Image/download/Train/train.csv', index=None)
    eval_df.to_csv('F:/Image/download/Train/eval.csv', index=None)


def main():
    path = 'F:/Image/download/Train'
    xml_to_csv(path)
    print('Successfully converted xml to csv.')


if __name__ == '__main__':
    main()
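One caveat with that slicing: rows come out in glob order, so consecutive images of the same class can all land on one side of the split. A shuffled split avoids a skewed validation set; this is a sketch (seeded only for reproducibility, with a 3:1 default ratio):

```python
import random


def split_rows(rows, train_frac=0.75, seed=42):
    # Shuffle first so one class cannot dominate either side of the split.
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_frac)
    return rows[:cut], rows[cut:]


train_rows, eval_rows = split_rows(range(200))
print(len(train_rows), len(eval_rows))  # 150 50
```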

3. Generate the TFRecord files

"""
Created on Tue Jan 16 01:04:55 2018
@author: Xiang Guo
Generate TFRecord files from the CSV files.
"""

"""
Usage:
  # From tensorflow/models/
  # Create train data:
  python generate_tfrecord.py --csv_input=data/tv_vehicle_labels.csv  --output_path=train.record
  # Create test data:
  python generate_tfrecord.py --csv_input=data/test_labels.csv  --output_path=test.record
"""
 
 
 
import os
import io
import pandas as pd
import tensorflow.compat.v1 as tf
 
from PIL import Image
from object_detection.utils import dataset_util
from collections import namedtuple, OrderedDict
import sys
sys.path.append("F:/models/research")
sys.path.append("F:/models/research/object_detection/utils")
os.environ['PYTHONPATH'] += 'F:/Tensorflow/models/research/:F:/Tensorflow/models/research/slim/'
os.chdir('F:/Tensorflow/models/research/object_detection')
 
flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
FLAGS = flags.FLAGS

 
# TO-DO: replace this with a label map.
# NOTE: change the labels below to your own classes!
# They must match, one to one, the labels listed in
# F:\Image\Labellmg\data\predefined_classes.txt
def class_text_to_int(row_label):
    if row_label == 'clothes': return 1
    elif row_label == 'pants': return 2
    elif row_label == 'roads': return 3
    elif row_label == 'sports': return 4
    elif row_label == 'accessories': return 5
    elif row_label == 'man': return 6
    elif row_label == 'shoes': return 7
    elif row_label == 'drink': return 8
    elif row_label == 'poster': return 9
    elif row_label == 'baby': return 10
    elif row_label == 'bag': return 11
    elif row_label == 'text': return 12
    elif row_label == 'cosmetic': return 13
    elif row_label == 'furniture': return 14
    elif row_label == 'light': return 15
    elif row_label == 'plants': return 16
    elif row_label == 'book': return 17
    elif row_label == 'hat': return 18
    elif row_label == 'glasses': return 19
    elif row_label == 'food': return 20
    elif row_label == 'tools': return 21
    elif row_label == 'hands and feet': return 22
    elif row_label == 'toy': return 23
    elif row_label == 'sock': return 24
    elif row_label == 'house': return 25
    elif row_label == 'door': return 26
    elif row_label == 'dog': return 27
    elif row_label == 'painting': return 28
    elif row_label == 'woman': return 29
    elif row_label == 'health': return 30
    elif row_label == 'computer': return 31
    elif row_label == 'phone': return 32
    elif row_label == 'watch': return 33
    elif row_label == 'car': return 34
    else:
        return None


def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x))
            for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    with open(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size
    filename = group.filename.encode('utf-8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []
    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        # the label feature is required for training; do not leave it commented out
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def main(csv_input, output_path, imgPath):
    writer = tf.python_io.TFRecordWriter(output_path)
    path = imgPath
    examples = pd.read_csv(csv_input)
    grouped = split(examples, 'filename')
    for group in grouped:
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())
    writer.close()
    print('Successfully created the TFRecords: {}'.format(output_path))


if __name__ == '__main__':
    imgPath = 'F:/Image/download/Train'

    # generate train.record
    output_path = 'F:/Image/output/record/train.record'
    csv_input = 'F:/Image/download/Train/train.csv'
    main(csv_input, output_path, imgPath)

    # generate eval.record
    output_path = 'F:/Image/output/record/eval.record'
    csv_input = 'F:/Image/download/Train/eval.csv'
    main(csv_input, output_path, imgPath)
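The long if/elif chain in class_text_to_int is easy to get out of sync with predefined_classes.txt. An equivalent sketch that keeps names and ids in one place (same 34 classes, same order as the chain):

```python
# Class names in label-id order; ids start at 1 to match the if/elif chain.
CLASS_NAMES = [
    'clothes', 'pants', 'roads', 'sports', 'accessories', 'man', 'shoes',
    'drink', 'poster', 'baby', 'bag', 'text', 'cosmetic', 'furniture',
    'light', 'plants', 'book', 'hat', 'glasses', 'food', 'tools',
    'hands and feet', 'toy', 'sock', 'house', 'door', 'dog', 'painting',
    'woman', 'health', 'computer', 'phone', 'watch', 'car',
]
CLASS_TO_ID = {name: i for i, name in enumerate(CLASS_NAMES, start=1)}


def class_text_to_int(row_label):
    # .get returns None for unknown labels, like the else branch of the chain.
    return CLASS_TO_ID.get(row_label)
```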

 

In this step I ran into many problems, such as wrong paths for the CSV files, and an encoding error caused by duplicated file extensions inside the CSV. In short: be careful, careful and careful again!

Start training

1. Write the label_map.pbtxt file
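The label map is a series of item blocks pairing each class name with its numeric id, and it can be generated from the same class list used by generate_tfrecord.py. A sketch; only the first four of my 34 classes are listed here, and `label_map_entry` is a hypothetical helper:

```python
# Build label_map.pbtxt entries from a class list.
# Extend this list so it matches class_text_to_int exactly.
classes = ['clothes', 'pants', 'roads', 'sports']


def label_map_entry(item_id, name):
    return "item {\n  id: %d\n  name: '%s'\n}\n" % (item_id, name)


pbtxt = ''.join(label_map_entry(i, n) for i, n in enumerate(classes, start=1))
print(pbtxt)
```

Write the result to the label_map.pbtxt path referenced in the config; ids must start at 1 and match class_text_to_int one to one.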

 

2. Configure the config file for the model you need.

I used ssd_mobilenet_v1_coco, so I modified its config.

Note: the following edits are crucial:

#======================================

model {
  ssd {
# first, change num_classes to the number of classes you want to detect
    num_classes: 34
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }

#======================================

# second, since we are training from scratch, comment out the model checkpoint
# and change from_detection_checkpoint to false

  #fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  from_detection_checkpoint: false
num_steps: 200000  # you can lower the step count; I wrote this post while my run was still going :)

#======================================

# finally, fix the train and eval paths, and num_examples in eval_config

train_input_reader: {
  tf_record_input_reader {
# change 1
    input_path: "F:/Image/output/record/train.record"
  }
# change 2
  label_map_path: "F:/Tensorflow/models/research/object_detection/data/label_map.pbtxt"
}
eval_config: {
# change 3
  num_examples: 47
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}
eval_input_reader: {
  tf_record_input_reader {
# change 4
    input_path: "F:/Image/output/record/eval.record"
  }
# change 5
  label_map_path: "F:/Tensorflow/models/research/object_detection/data/label_map.pbtxt"
  shuffle: false
  num_readers: 1
}

3. Start training

Run the following, adjusting the location of train.py to match your working directory:

python legacy/train.py --logtostderr --train_dir=F:/TensorFlow/models/test/training/ --pipeline_config_path=F:/Tensorflow/models/research/models/ssd_mobilenet_v1_coco.config

 

Then you wait, and wait... at the time of writing, my run is only about one fifth done.

Optional: using TensorBoard

We already installed pycocotools and tensorboard, but tensorboard cannot do much on its own here; we also need the eval.py script found under F:\Tensorflow\models\research\object_detection\legacy.

Its usage mirrors train.py.

python F:\Tensorflow\models\research\object_detection\eval.py --logtostderr --eval_dir=F:/TensorFlow/models/test/eval/ --pipeline_config_path=F:/Tensorflow/models/research/models/ssd_mobilenet_v1_coco.config --checkpoint_dir=F:/TensorFlow/models/test/training/

If an error appears here, for example a failure after 10 image files have been read:

then edit num_examples under eval_config in the config file until the run succeeds.
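Rather than guessing, num_examples can be computed: it should equal the number of distinct images in eval.record, which is the number of distinct filenames in eval.csv (one image may contribute several boxes). A sketch using only the standard library; the path is the one used earlier:

```python
import csv


def count_eval_examples(csv_path):
    # Count distinct images in the eval CSV (one image may have several boxes).
    with open(csv_path, newline='') as f:
        return len({row['filename'] for row in csv.DictReader(f)})


# print(count_eval_examples('F:/Image/download/Train/eval.csv'))
```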

I cannot explain eval.py's internals precisely, but you can treat it as a validation pass over the training run.

Run train.py first, then eval.py, and finally:

tensorboard --logdir=F://TensorFlow//models//test//eval

 

Then open http://localhost:6006/ in a browser.

Note that results do not show up in TensorBoard immediately: each time training saves a checkpoint, TensorBoard reads that model and updates once, so the curves gain one point per checkpoint. Wait patiently; there is no need to worry.

Once tensorboard is running you should see something like the figure below:

 

 

Loss/RPNLoss/localization_loss/mul_1: Localization Loss or the Loss of the Bounding Box regressor for the RPN

Loss/RPNLoss/objectness_loss/mul_1: Loss of the Classifier that classifies if a bounding box is an object of interest or background

The losses for the Final Classifier:

Loss/BoxClassifierLoss/classification_loss/mul_1: Loss for the classification of detected objects into various classes: Cat, Dog, Airplane etc

Loss/BoxClassifierLoss/localization_loss/mul_1: Localization Loss or the Loss of the Bounding Box regressor

 

 

Looking back, all of this took me nearly a week. Many of the fixes turned out to be simple, but a lack of experience and care cost me extra time. Still, most pitfalls stay behind you once you have crossed them, and I hope that after these 200k steps finish, analyzing the remaining images goes more smoothly.

Thanks to all the contributors on GitHub and CSDN. Most of the references and code come from https://blog.csdn.net/RobinTomps/article/details/78115628. Thank you for taking the time to read this little project of mine; I hope we can all improve together.

