『TensorFlow』SSD Source Code Study, Part 4: Dataset Introduction and TFRecord File Generation


Forked project repository: SSD

1. Data Format Introduction

The dataset folder is named VOC2012 and contains five subfolders.

Our detection task uses only the JPEGImages folder and the Annotations folder.

The JPEGImages folder contains all of the images provided by PASCAL VOC, both training and test images.

These images are named in the format "year_id.jpg".
The image dimensions vary, but landscape images are roughly 500*375 and portrait images roughly 375*500, rarely deviating from this by more than 100 pixels. (In later training, the first step is to resize all images to 300*300 or 500*500, so the originals should not be too far from these sizes.)
These are the images used for training, testing, and validation.
The Annotations folder holds the label files in xml format; each xml file corresponds to one image in the JPEGImages folder.
The format of an xml file is as follows (this one is for 2007_000392.jpg):
<annotation>
	<folder>VOC2012</folder>
	<filename>2007_000392.jpg</filename>             // file name
	<source>                                         // image source (not important)
		<database>The VOC2007 Database</database>
		<annotation>PASCAL VOC2007</annotation>
		<image>flickr</image>
	</source>
	<size>                                           // image size (width, height, and number of channels)
		<width>500</width>
		<height>332</height>
		<depth>3</depth>
	</size>
	<segmented>1</segmented>                          // whether used for segmentation (0 or 1, irrelevant for object detection)
	<object>                                          // detected object
		<name>horse</name>                            // object class
		<pose>Right</pose>                            // shooting angle
		<truncated>0</truncated>                      // whether truncated (0 means the object is complete)
		<difficult>0</difficult>                      // whether hard to recognize (0 means easy)
		<bndbox>                                      // bounding box (top-left and bottom-right x/y coordinates)
			<xmin>100</xmin>
			<ymin>96</ymin>
			<xmax>355</xmax>
			<ymax>324</ymax>
		</bndbox>
	</object>
	<object>                                          // additional objects, one <object> block each
		<name>person</name>
		<pose>Unspecified</pose>
		<truncated>0</truncated>
		<difficult>0</difficult>
		<bndbox>
			<xmin>198</xmin>
			<ymin>58</ymin>
			<xmax>286</xmax>
			<ymax>197</ymax>
		</bndbox>
	</object>
</annotation>
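
For reference, this annotation can be parsed directly with Python's xml.etree.ElementTree. Below is a minimal sketch (the file path is assumed, following the VOC2012 layout described above); it is essentially the same parsing that _process_image performs later:

import xml.etree.ElementTree as ET

# Minimal sketch: parse the sample annotation shown above (path assumed).
tree = ET.parse('VOC2012/Annotations/2007_000392.xml')
root = tree.getroot()

size = root.find('size')
print(int(size.find('width').text), int(size.find('height').text))  # 500 332

for obj in root.findall('object'):
    name = obj.find('name').text
    box = obj.find('bndbox')
    print(name, [int(box.find(k).text) for k in ('xmin', 'ymin', 'xmax', 'ymax')])
# horse [100, 96, 355, 324]
# person [198, 58, 286, 197]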

2. TFRecord Generation Workflow

To speed up data loading, the framework reads the images and labels ahead of time and writes them into TFRecord files. This step is independent of the network and the training pipeline, so we cover it separately here.

The launch command is shown below. Note that the OUTPUT_DIR folder must be created in advance, otherwise the script will raise an error (strip the inline comments when actually running the command):

DATASET_DIR=./VOC2012/
OUTPUT_DIR=./tfrecords
python tf_convert_data.py \
    --dataset_name=pascalvoc \  # dataset name; the author only implemented the preprocessing for this one dataset
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2012_train \  # TFRecord file basename; kept in a fairly fixed format for compatibility with later code
    --output_dir=${OUTPUT_DIR}

Script tf_convert_data.py

This script mainly handles the command-line interaction; its core functionality is a single call:

# './VOC2012/' './tfrecords' 'voc2012_tfr'
pascalvoc_to_tfrecords.run(FLAGS.dataset_dir, FLAGS.output_dir, FLAGS.output_name)
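
The rest of the script is just flag definitions and dispatch. Below is a minimal sketch of what that glue looks like; the flag names are taken from the launch command above, while the defaults and structure are my own assumptions, not the author's exact code:

import tensorflow as tf
from datasets import pascalvoc_to_tfrecords

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('dataset_name', 'pascalvoc', 'Name of the dataset converter to use.')
tf.app.flags.DEFINE_string('dataset_dir', None, 'Directory where the VOC data is stored.')
tf.app.flags.DEFINE_string('output_name', 'voc_2012_train', 'Basename for the output TFRecord files.')
tf.app.flags.DEFINE_string('output_dir', './tfrecords', 'Directory to write the TFRecord files to.')


def main(_):
    # Only the pascalvoc converter is implemented.
    if FLAGS.dataset_name == 'pascalvoc':
        pascalvoc_to_tfrecords.run(FLAGS.dataset_dir, FLAGS.output_dir, FLAGS.output_name)


if __name__ == '__main__':
    tf.app.run()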

Script datasets.pascalvoc_to_tfrecords.py

The run function is the core of TFRecord writing: it determines the name of each individual TFRecord file, loops over the image and annotation file names, and writes them out to TFRecord files with a fixed number of samples per file.

def run(dataset_dir, output_dir, name='voc_train', shuffling=False):
    """Runs the conversion operation.
    Args:
      dataset_dir: The dataset directory where the dataset is stored.
      output_dir: Output directory.
    """
    if not tf.gfile.Exists(dataset_dir):
        tf.gfile.MakeDirs(dataset_dir)
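    # NOTE: the existence check above is on dataset_dir, not output_dir,
    # which is why output_dir has to be created manually beforehand.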

    # Dataset filenames, and shuffling.
    # './VOC2012/' 'Annotations/'
    path = os.path.join(dataset_dir, DIRECTORY_ANNOTATIONS)
    filenames = sorted(os.listdir(path))  # file names only, without the directory path
    if shuffling:
        random.seed(RANDOM_SEED)
        random.shuffle(filenames)

    # Process dataset files.
    i = 0
    fidx = 0
    while i < len(filenames):  # loop over the annotation file names
        # Open new TFRecord file.
        tf_filename = _get_output_filename(output_dir, name, fidx)  # build the output file name
        with tf.python_io.TFRecordWriter(tf_filename) as tfrecord_writer:
            j = 0
            while i < len(filenames) and j < SAMPLES_PER_FILES:  # 200 images per TFRecord file
                sys.stdout.write('\r>> Converting image %d/%d' % (i+1, len(filenames)))
                sys.stdout.flush()  # these two lines update the count in place on one line instead of printing a new line per image

                filename = filenames[i]
                img_name = filename[:-4]  # image name, stripping the '.xml' extension (filenames come from the Annotations folder)
                _add_to_tfrecord(dataset_dir, img_name, tfrecord_writer)  # read the data and write it out
                i += 1
                j += 1
            fidx += 1

    # Finally, write the labels file:
    # labels_to_class_names = dict(zip(range(len(_CLASS_NAMES)), _CLASS_NAMES))
    # dataset_utils.write_label_file(labels_to_class_names, dataset_dir)
    print('\nFinished converting the Pascal VOC dataset!')

Of these, _get_output_filename, which determines each TFRecord file name, is trivial, and the function that reads the data for one file name and writes it into the TFRecord splits into a read step and a write step, both of which are straightforward:

def _add_to_tfrecord(dataset_dir, name, tfrecord_writer):
    """Loads data from image and annotations files and add them to a TFRecord.

    Args:
      dataset_dir: Dataset directory;
      name: Image name to add to the TFRecord;
      tfrecord_writer: The TFRecord writer to use for writing.
    """
    image_data, shape, bboxes, labels, labels_text, difficult, truncated = \
        _process_image(dataset_dir, name)  # read the data for this file name
    example = _convert_to_example(image_data, labels, labels_text,
                                  bboxes, shape, difficult, truncated)  # build the Example proto
    tfrecord_writer.write(example.SerializeToString())


def _get_output_filename(output_dir, name, idx):
    return '%s/%s_%03d.tfrecord' % (output_dir, name, idx)
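
For example, under the launch command above, the shard names produced by this format string would be (a quick illustration derived from the format string, not actual program output):

# _get_output_filename('./tfrecords', 'voc_2012_train', 0) -> './tfrecords/voc_2012_train_000.tfrecord'
# _get_output_filename('./tfrecords', 'voc_2012_train', 1) -> './tfrecords/voc_2012_train_001.tfrecord'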

Below are the two functions that read the image and annotation data and build the Example. In practice this is all the conversion does: each iteration reads one image file plus its corresponding annotation file and processes them.

def _process_image(directory, name):
    """
    將圖片數據存儲為bytes,
    :param directory: voc文件夾
    :param name: 圖片名
    :return: 需要寫入tfr的數據
    """
    # Read the image file.
    # DIRECTORY_IMAGES = 'JPEGImages/'
    filename = directory + DIRECTORY_IMAGES + name + '.jpg'
    image_data = tf.gfile.FastGFile(filename, 'rb').read()  # the original source mistakenly opened the file with 'r' instead of 'rb'

    # Read the XML annotation file.
    filename = os.path.join(directory, DIRECTORY_ANNOTATIONS, name + '.xml')
    tree = ET.parse(filename)
    root = tree.getroot()

    # Image shape.
    size = root.find('size')
    shape = [int(size.find('height').text),
             int(size.find('width').text),
             int(size.find('depth').text)]
    # Find annotations.
    bboxes = []
    labels = []
    labels_text = []
    difficult = []
    truncated = []
    for obj in root.findall('object'):
        label = obj.find('name').text
        labels.append(int(VOC_LABELS[label][0]))
        labels_text.append(label.encode('ascii'))

        if obj.find('difficult'):
            difficult.append(int(obj.find('difficult').text))
        else:
            difficult.append(0)
        if obj.find('truncated'):
            truncated.append(int(obj.find('truncated').text))
        else:
            truncated.append(0)

        bbox = obj.find('bndbox')
        bboxes.append((float(bbox.find('ymin').text) / shape[0],
                       float(bbox.find('xmin').text) / shape[1],
                       float(bbox.find('ymax').text) / shape[0],
                       float(bbox.find('xmax').text) / shape[1]
                       ))
    return image_data, shape, bboxes, labels, labels_text, difficult, truncated
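
As a concrete check, for the sample annotation 2007_000392.xml shown earlier (height 332, width 500), the horse box would be stored in normalized (ymin, xmin, ymax, xmax) order:

# horse: pixel box (xmin=100, ymin=96, xmax=355, ymax=324)
horse_box = (96 / 332., 100 / 500., 324 / 332., 355 / 500.)
# -> approximately (0.289, 0.200, 0.976, 0.710)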


def _convert_to_example(image_data, labels, labels_text, bboxes, shape,
                        difficult, truncated):
    """Build an Example proto for an image example.

    Args:
      image_data: string, JPEG encoding of RGB image;
      labels: list of integers, identifier for the ground truth;
      labels_text: list of strings, human-readable labels;
      bboxes: list of bounding boxes; each box is a tuple of floats
          (ymin, xmin, ymax, xmax), normalized by the image height/width.
          All boxes are assumed to belong to the same label as the image label.
      shape: 3 integers, image shapes in pixels.
    Returns:
      Example proto
    """
    xmin = []
    ymin = []
    xmax = []
    ymax = []
    for b in bboxes:
        assert len(b) == 4
        # pylint: disable=expression-not-assigned
        [l.append(point) for l, point in zip([ymin, xmin, ymax, xmax], b)]
        # pylint: enable=expression-not-assigned

    image_format = b'JPEG'
    example = tf.train.Example(features=tf.train.Features(feature={
            'image/height': int64_feature(shape[0]),
            'image/width': int64_feature(shape[1]),
            'image/channels': int64_feature(shape[2]),
            'image/shape': int64_feature(shape),
            'image/object/bbox/xmin': float_feature(xmin),
            'image/object/bbox/xmax': float_feature(xmax),
            'image/object/bbox/ymin': float_feature(ymin),
            'image/object/bbox/ymax': float_feature(ymax),
            'image/object/bbox/label': int64_feature(labels),
            'image/object/bbox/label_text': bytes_feature(labels_text),
            'image/object/bbox/difficult': int64_feature(difficult),
            'image/object/bbox/truncated': int64_feature(truncated),
            'image/format': bytes_feature(image_format),  # image encoding format
            'image/encoded': bytes_feature(image_data)}))  # raw JPEG bytes of the image
    return example
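
For reference, below is a minimal sketch of how one of these serialized Examples could be parsed back with tf.parse_single_example; the feature keys simply mirror the ones written above (this is an illustration only, not necessarily the decoding code the project uses later):

import tensorflow as tf

def parse_voc_example(serialized):
    # Keys mirror those written by _convert_to_example above.
    keys_to_features = {
        'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
        'image/format': tf.FixedLenFeature((), tf.string, default_value='jpeg'),
        'image/shape': tf.FixedLenFeature([3], tf.int64),
        'image/object/bbox/ymin': tf.VarLenFeature(tf.float32),
        'image/object/bbox/xmin': tf.VarLenFeature(tf.float32),
        'image/object/bbox/ymax': tf.VarLenFeature(tf.float32),
        'image/object/bbox/xmax': tf.VarLenFeature(tf.float32),
        'image/object/bbox/label': tf.VarLenFeature(tf.int64),
    }
    features = tf.parse_single_example(serialized, keys_to_features)
    image = tf.image.decode_jpeg(features['image/encoded'], channels=3)
    return image, features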

With this, the data preprocessing step of generating the TFRecord files is complete.

Appendix: Example Feature Helper Functions

The Example feature helper functions are straightforward; for completeness they are listed below. They live in the script datasets.dataset_utils.py:

def int64_feature(value):
    """Wrapper for inserting int64 features into Example proto.
    """
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=value))


def float_feature(value):
    """Wrapper for inserting float features into Example proto.
    """
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(float_list=tf.train.FloatList(value=value))


def bytes_feature(value):
    """Wrapper for inserting bytes features into Example proto.
    """
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=value))
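
A quick usage note: each wrapper accepts either a single value or a Python list, for example:

int64_feature(5)              # Int64List with one value
float_feature([0.2, 0.71])    # FloatList built from a python list
bytes_feature(b'JPEG')        # BytesList wrapping raw bytes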

Label-to-index mapping table:

VOC_LABELS = {
    'none': (0, 'Background'),
    'aeroplane': (1, 'Vehicle'),
    'bicycle': (2, 'Vehicle'),
    'bird': (3, 'Animal'),
    'boat': (4, 'Vehicle'),
    'bottle': (5, 'Indoor'),
    'bus': (6, 'Vehicle'),
    'car': (7, 'Vehicle'),
    'cat': (8, 'Animal'),
    'chair': (9, 'Indoor'),
    'cow': (10, 'Animal'),
    'diningtable': (11, 'Indoor'),
    'dog': (12, 'Animal'),
    'horse': (13, 'Animal'),
    'motorbike': (14, 'Vehicle'),
    'person': (15, 'Person'),
    'pottedplant': (16, 'Indoor'),
    'sheep': (17, 'Animal'),
    'sofa': (18, 'Indoor'),
    'train': (19, 'Vehicle'),
    'tvmonitor': (20, 'Indoor'),
}
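
Only the integer id in each tuple is actually used by _process_image, for example:

VOC_LABELS['horse'][0]   # -> 13
VOC_LABELS['person'][0]  # -> 15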

 

