1.1 Introduction
Deep neural networks generally need large amounts of training data to achieve good results. When data is limited, data augmentation can be used to increase the diversity of the training samples, improve model robustness, and reduce overfitting.
In computer vision, typical data augmentation methods include flipping (Flip), rotation (Rotate), scaling (Scale), random cropping or padding (Random Crop or Pad), color jittering (Color Jittering), and adding noise (Noise).
The author has been working on a project on human pose estimation and keypoint tracking in videos and images (Human Pose Estimation and Tracking in Videos). Accordingly, this article only covers the augmentations used there: cropping (Crop), scaling (Scale), flipping (Flip), and rotation (Rotate).
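For image-only tasks these standard operations are often applied with off-the-shelf utilities; the sketch below uses torchvision.transforms purely as an assumed illustration (it is not part of this project). Such transforms only modify the image, whereas for pose estimation the bounding box and keypoint coordinates must be transformed consistently as well, which is what the custom routines in the following sections do.

import torchvision.transforms as T

# Image-only augmentation sketch (assumed example; keypoints are NOT handled here)
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                  # Flip
    T.RandomRotation(degrees=10),                   # Rotate
    T.RandomResizedCrop(256, scale=(0.8, 1.0)),     # Random Crop + Scale
    T.ColorJitter(brightness=0.2, contrast=0.2),    # Color jittering
])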
2.1 Crop
image.shape -- (height, width, 3): one frame of a video sequence; frame sizes are not uniform before cropping
bbox.shape -- (4,): the human detection box (x_min, y_min, x_max, y_max), used for cropping
x.shape -- (1, 13): the x-coordinates of the 13 human keypoints
y.shape -- (1, 13): the y-coordinates of the 13 human keypoints
import numpy as np


def crop(image, bbox, x, y, length):
    # Work with integer pixel coordinates
    x, y, bbox = x.astype(int), y.astype(int), bbox.astype(int)

    x_min, y_min, x_max, y_max = bbox
    w, h = x_max - x_min, y_max - y_min

    # Crop the image to the detection box
    image = image[y_min:y_min + h, x_min:x_min + w, :]

    # Shift joints and bbox into the cropped coordinate frame
    x -= x_min
    y -= y_min
    bbox = np.array([0, 0, x_max - x_min, y_max - y_min])

    # Scale so that the longer side equals the desired output size
    side_length = max(w, h)
    f_xy = float(length) / float(side_length)
    image, bbox, x, y = Transformer.scale(image, bbox, x, y, f_xy)

    # Pad the shorter side with zeros to obtain a length x length image
    new_w, new_h = image.shape[1], image.shape[0]
    cropped = np.zeros((length, length, image.shape[2]))

    dx = length - new_w
    dy = length - new_h
    x_min, y_min = int(dx / 2.), int(dy / 2.)
    x_max, y_max = x_min + new_w, y_min + new_h

    cropped[y_min:y_max, x_min:x_max, :] = image

    # Shift the joints by the padding offset and clip them to the padded area
    x += x_min
    y += y_min
    x = np.clip(x, x_min, x_max)
    y = np.clip(y, y_min, y_max)

    bbox += np.array([x_min, y_min, x_min, y_min])
    return cropped, bbox, x.astype(int), y.astype(int)
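Assuming the functions in this article are collected as static methods of a Transformer class (the calls to Transformer.scale and Transformer.swap_joints suggest this), a hypothetical call to crop could look as follows; the array values are made up for illustration only.

# Hypothetical usage of crop; all values below are made up for illustration
frame = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)
bbox = np.array([100, 50, 300, 400])               # x_min, y_min, x_max, y_max
xs = np.random.uniform(100, 300, size=(1, 13))     # keypoint x-coordinates
ys = np.random.uniform(50, 400, size=(1, 13))      # keypoint y-coordinates

cropped, bbox, xs, ys = Transformer.crop(frame, bbox, xs, ys, length=256)
print(cropped.shape)                               # (256, 256, 3)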
2.2 Scale
image.shape -- (256, 256, 3): one frame of a video sequence; after cropping, the network input is 256*256
bbox.shape -- (4,): the human detection box, used for cropping
x.shape -- (1, 13): the x-coordinates of the 13 human keypoints
y.shape -- (1, 13): the y-coordinates of the 13 human keypoints
f_xy -- the scale factor
from skimage.transform import resize


def scale(image, bbox, x, y, f_xy):
    (h, w, _) = image.shape
    h, w = int(h * f_xy), int(w * f_xy)
    # Resize the image; preserve_range keeps the pixel values in [0, 255]
    image = resize(image, (h, w), preserve_range=True, anti_aliasing=True,
                   mode='constant').astype(np.uint8)

    # Scale the joints and the bounding box by the same factor
    x = x * f_xy
    y = y * f_xy
    bbox = bbox * f_xy

    # Keep the joints inside the resized image
    x = np.clip(x, 0, w)
    y = np.clip(y, 0, h)

    return image, bbox, x, y
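During training, f_xy is usually drawn at random per sample; a minimal sketch is shown below (the range (0.7, 1.3) is an assumption, not a value from the original project).

# Hypothetical random scaling for augmentation; the range is an assumption
f_xy = np.random.uniform(0.7, 1.3)
image_s, bbox_s, x_s, y_s = Transformer.scale(image, bbox, x, y, f_xy)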
2.3 Flip
Here the image is mirrored left-right about its vertical axis of symmetry (since the human body is left-right symmetric, this helps prevent the model from overfitting in keypoint detection).
def flip(image, bbox, x, y):
    # Mirror the image about its vertical axis
    image = np.fliplr(image).copy()
    w = image.shape[1]

    # Mirror the bounding box and the x-coordinates of the joints
    x_min, y_min, x_max, y_max = bbox
    bbox = np.array([w - x_max, y_min, w - x_min, y_max])
    x = w - x

    # Swap the left/right joint labels (e.g. left wrist <-> right wrist)
    x, y = Transformer.swap_joints(x, y)
    return image, bbox, x, y
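Transformer.swap_joints is not listed in this post. A minimal sketch is given below, assuming a hypothetical 13-joint ordering in which left/right joints occupy paired indices; the actual pairing must follow the dataset's joint order.

# Hypothetical helper: after a horizontal flip, left joints become right joints
# and vice versa. The index pairs below are assumptions, not the real ordering.
LEFT_RIGHT_PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12)]

def swap_joints(x, y):
    x, y = x.copy(), y.copy()
    for left, right in LEFT_RIGHT_PAIRS:
        x[:, [left, right]] = x[:, [right, left]]
        y[:, [left, right]] = y[:, [right, left]]
    return x, y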
Before flipping:
After flipping:
2.4 Rotate
angle -- the rotation angle (in degrees)
from skimage.transform import rotate as sk_rotate  # avoid shadowing this function's name


def rotate(image, bbox, x, y, angle):
    # image -- (256, 256, 3)
    # bbox  -- (4,)
    # x     -- e.g. [126 129 124 117 107  99 128 107 108 105 137 155 122  99]
    # y     -- e.g. [209 176 136 123 178 225  65  47  46  24  44  64  49  54]
    # angle -- e.g. -8.165648811999333 (degrees)
    # o_x, o_y: image centre, e.g. [127.5, 127.5] for a 256 x 256 input
    o_x, o_y = (np.array(image.shape[:2][::-1]) - 1) / 2.
    height = image.shape[0]

    # Convert the joints (and the centre) to a y-up coordinate system
    x1 = x
    y1 = height - y
    o_y = height - o_y

    # Rotate the image itself
    image = sk_rotate(image, angle, preserve_range=True).astype(np.uint8)

    # Rotate the joints about the image centre, then convert back to y-down
    r_x, r_y = o_x, o_y
    angle_rad = (np.pi * angle) / 180.0
    x = r_x + np.cos(angle_rad) * (x1 - o_x) - np.sin(angle_rad) * (y1 - o_y)
    y = r_y + np.sin(angle_rad) * (x1 - o_x) + np.cos(angle_rad) * (y1 - o_y)
    y = height - y

    # Rotate the two bounding-box corners; keep copies of the original values
    # so that a corner is not overwritten while it is still being used
    bx_min, by_min, bx_max, by_max = [float(v) for v in bbox]
    bbox[0] = r_x + np.cos(angle_rad) * (bx_min - o_x) + np.sin(angle_rad) * (by_min - o_y)
    bbox[1] = r_y - np.sin(angle_rad) * (bx_min - o_x) + np.cos(angle_rad) * (by_min - o_y)
    bbox[2] = r_x + np.cos(angle_rad) * (bx_max - o_x) + np.sin(angle_rad) * (by_max - o_y)
    bbox[3] = r_y - np.sin(angle_rad) * (bx_max - o_x) + np.cos(angle_rad) * (by_max - o_y)

    return image, bbox, x.astype(int), y.astype(int)
Before rotation:
After rotation:
3 Results (Output)
Original image before augmentation:
After augmentation:
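For reference, one possible way to chain the transforms when augmenting a training sample is sketched below; the ordering, flip probability, and angle range are assumptions for illustration, not values from the original project (note that crop already rescales the longer side to the network input size internally).

# Hypothetical augmentation pipeline; all parameters are assumptions
def augment_sample(image, bbox, x, y, length=256):
    # Crop + pad to the network input size first (includes the scale step)
    image, bbox, x, y = Transformer.crop(image, bbox, x, y, length)
    if np.random.rand() < 0.5:                       # Flip
        image, bbox, x, y = Transformer.flip(image, bbox, x, y)
    angle = np.random.uniform(-30, 30)               # Rotate
    image, bbox, x, y = Transformer.rotate(image, bbox, x, y, angle)
    return image, bbox, x, y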