Frameworks shipped with TensorFlow, such as slim, use TFRecords as the default data format. At training time, data stored in TFRecords flows through the input pipeline as follows: read the tfrecords files (or another supported format), shuffle them, generate a filename queue, read and decode the data, and feed it into the model for training.
Suppose we have a list of jpg image paths and their corresponding labels: images and labels.
1. Writing TFRecords
To store data in a TFRecords file, it first has to be packed into a protocol buffer called an Example, which is then serialized into a string before being written. An Example contains Features, which describe the data types: bytes, float, and int64.
import tensorflow as tf
import cv2

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def _bytes_feature(value):
    # BytesList expects a list of byte strings, so wrap the value in a list
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

train_filename = 'train.tfrecords'
with tf.python_io.TFRecordWriter(train_filename) as tfrecord_writer:
    for i in range(len(images)):
        # read in raw image data by tf
        img_data = tf.gfile.FastGFile(images[i], 'rb').read()  # image data as a byte string
        label = labels[i]
        # get width and height of the image
        image_shape = cv2.imread(images[i]).shape
        height = image_shape[0]
        width = image_shape[1]
        # create features
        feature = {'train/image': _bytes_feature(img_data),
                   'train/label': _int64_feature(label),  # label: integer from 0-N
                   'train/height': _int64_feature(height),
                   'train/width': _int64_feature(width)}
        # create example protocol buffer
        example = tf.train.Example(features=tf.train.Features(feature=feature))
        # serialize protocol buffer to string and write it out
        tfrecord_writer.write(example.SerializeToString())
# no explicit close() is needed: the with-block closes the writer
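For context, a TFRecord file on disk is just a sequence of length-prefixed records: each record stores a little-endian uint64 payload length, a uint32 masked-CRC32C of the length, the payload bytes, and a uint32 masked-CRC32C of the payload. Below is a minimal pure-Python sketch of that framing; the CRC fields are written as zero placeholders (so TensorFlow itself would reject such a file), which is enough to illustrate the layout:

```python
import struct

def write_records(path, payloads):
    """Write byte strings in TFRecord framing: uint64 length, uint32 length-CRC,
    payload bytes, uint32 payload-CRC. CRCs are zero placeholders here; real
    TFRecord files carry masked CRC32C checksums."""
    with open(path, 'wb') as f:
        for data in payloads:
            f.write(struct.pack('<Q', len(data)))  # little-endian uint64 length
            f.write(struct.pack('<I', 0))          # length CRC (placeholder)
            f.write(data)                          # payload (a serialized Example)
            f.write(struct.pack('<I', 0))          # payload CRC (placeholder)

def read_records(path):
    """Read back the payloads, skipping (not verifying) the CRC fields."""
    payloads = []
    with open(path, 'rb') as f:
        while True:
            header = f.read(8)
            if not header:
                break
            (length,) = struct.unpack('<Q', header)
            f.read(4)                    # skip length CRC
            payloads.append(f.read(length))
            f.read(4)                    # skip payload CRC
    return payloads

write_records('demo.records', [b'first', b'second'])
print(read_records('demo.records'))  # [b'first', b'second']
```

In practice the high-level tf.python_io.TFRecordWriter API is used precisely because it fills in the masked CRC32C checksums for you.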
2. Reading TFRecords
First, tf.train.string_input_producer takes the list of tfrecords files and builds a FIFO queue; its num_epochs and shuffle arguments control how many times the data is read and whether the order in which the tfrecords files are read is shuffled. A TFRecordReader is then defined to read the next record from this queue, and tf.parse_single_example decodes it: given the serialized example and a feature dictionary, it returns the value corresponding to each feature. The values obtained at this point are all strings and must be further decoded into the desired data types. After reshaping the image string back into the original image, preprocessing can be applied. Finally, tf.train.batch or tf.train.shuffle_batch assembles the images into batches.
Because the tf.train functions add tf.train.QueueRunner objects to the graph, each holding a list of enqueue ops that run a queue in its own thread, the queues have to be filled by launching those threads with tf.train.start_queue_runners; a tf.train.Coordinator is then needed to manage the threads and stop them at the right time.
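The coordinator pattern itself is plain producer/consumer threading. The following toy sketch (pure Python, not TensorFlow; MiniCoordinator and enqueue_worker are illustrative names) mimics how request_stop, should_stop, and join cooperate:

```python
import threading
import queue

class MiniCoordinator:
    """Toy version of the tf.train.Coordinator pattern: workers poll
    should_stop(), and join() waits for them after request_stop()."""
    def __init__(self):
        self._stop = threading.Event()
    def should_stop(self):
        return self._stop.is_set()
    def request_stop(self):
        self._stop.set()
    def join(self, threads):
        for t in threads:
            t.join()

def enqueue_worker(coord, q, counter):
    # keep filling the queue until the coordinator asks us to stop
    while not coord.should_stop():
        try:
            q.put(counter[0], timeout=0.1)
            counter[0] += 1
        except queue.Full:
            pass  # queue full; re-check should_stop and retry

coord = MiniCoordinator()
q = queue.Queue(maxsize=8)
counter = [0]
threads = [threading.Thread(target=enqueue_worker, args=(coord, q, counter))]
for t in threads:
    t.start()

batch = [q.get() for _ in range(5)]  # the consumer dequeues a "batch"
coord.request_stop()                 # signal the worker thread to exit
coord.join(threads)                  # wait for it to finish
print(batch)  # [0, 1, 2, 3, 4]
```

The real Coordinator additionally propagates exceptions from worker threads, but the stop-flag/join handshake above is the core of it.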
import tensorflow as tf
import matplotlib.pyplot as plt

data_path = 'train.tfrecords'

with tf.Session() as sess:
    # feature keys and data types for the data stored in the tfrecords file
    feature = {'train/image': tf.FixedLenFeature([], tf.string),
               'train/label': tf.FixedLenFeature([], tf.int64),
               'train/height': tf.FixedLenFeature([], tf.int64),
               'train/width': tf.FixedLenFeature([], tf.int64)}
    # define a queue based on the input filenames
    filename_queue = tf.train.string_input_producer([data_path], num_epochs=1)
    # define a tfrecords file reader
    reader = tf.TFRecordReader()
    # read in a serialized example
    _, serialized_example = reader.read(filename_queue)
    # decode the example with the feature dict
    features = tf.parse_single_example(serialized_example, features=feature)
    # force 3 channels so the static shape is known downstream
    image = tf.image.decode_jpeg(features['train/image'], channels=3)
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)  # convert dtype from uint8 to float32 for the later resize
    label = tf.cast(features['train/label'], tf.int64)
    height = tf.cast(features['train/height'], tf.int32)
    width = tf.cast(features['train/width'], tf.int32)
    # restore image to [height, width, 3]
    image = tf.reshape(image, [height, width, 3])
    # resize
    image = tf.image.resize_images(image, [224, 224])
    # create batches; capacity is the maximum queue size, min_after_dequeue is the
    # minimum queue size after a dequeue, and num_threads is the number of threads
    # running the enqueue ops
    images, labels = tf.train.shuffle_batch([image, label], batch_size=10, capacity=30, num_threads=1, min_after_dequeue=10)
    # initialize global & local variables (num_epochs is tracked in a local variable)
    init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
    sess.run(init_op)
    # create a coordinator and start the queue runner threads
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    for batch_index in range(3):
        batch_images, batch_labels = sess.run([images, labels])
        for i in range(10):
            plt.imshow(batch_images[i, ...])
            plt.show()
            print("Current image label is: ", batch_labels[i])
    # stop and join the threads; the with-block closes the session
    coord.request_stop()
    coord.join(threads)
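Conceptually, tf.train.shuffle_batch keeps a buffer of up to capacity elements and draws each batch uniformly at random from it, never letting the pool shrink below min_after_dequeue between dequeues. The following is a toy pure-Python model of that behavior (a sketch, not TF's actual multi-threaded implementation):

```python
import random

def toy_shuffle_batch(stream, batch_size, capacity, min_after_dequeue, seed=0):
    """Toy model of tf.train.shuffle_batch: buffer up to `capacity` elements,
    then emit batches sampled uniformly at random from the buffer."""
    assert capacity >= min_after_dequeue + batch_size, \
        "capacity must hold min_after_dequeue leftovers plus one full batch"
    rng = random.Random(seed)
    buf, it, exhausted = [], iter(stream), False
    while True:
        # enqueue side: fill the buffer up to capacity (or until the stream ends)
        while not exhausted and len(buf) < capacity:
            try:
                buf.append(next(it))
            except StopIteration:
                exhausted = True
        if len(buf) < batch_size:
            return  # not enough elements left for a full batch
        # dequeue side: pop batch_size random elements; because we filled to
        # capacity first, at least min_after_dequeue elements remain for mixing
        yield [buf.pop(rng.randrange(len(buf))) for _ in range(batch_size)]

batches = list(toy_shuffle_batch(range(20), batch_size=5,
                                 capacity=10, min_after_dequeue=5))
print(len(batches))  # 4
```

This also makes the tuning trade-off visible: a larger min_after_dequeue gives better shuffling (each batch is drawn from a bigger pool) at the cost of more memory and a longer start-up fill.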
