The Unity simulator is a racing game that simulates driving a car. While you drive, the simulator records image data from the game together with the steering angles you apply to the car.
1 Data description
The game simulates three cameras mounted on the car (left, center, right); the data consists of the frames captured by these cameras.
The simulator only records the steering angle corresponding to the center camera.
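For illustration, here is a minimal sketch of reading the recorded data. It assumes the simulator writes a driving_log.csv whose rows hold the three image paths and the center-camera steering angle; the column names and the absence of a header row are assumptions, not confirmed by the original text.

import pandas as pd

# Assumed log layout: image paths for the three cameras plus driving signals.
log = pd.read_csv('driving_log.csv',
                  names=['center', 'left', 'right', 'steering',
                         'throttle', 'brake', 'speed'])
img_paths = log['center'].values   # paths of the center-camera frames
degrees = log['steering'].values   # steering angles for those frames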
Implementation
1 Horizontally flip the images, which augments the data set
import cv2
import numpy as np

def horizontal_filp(img, degree):
    '''
    Randomly flip the image horizontally.
    cv2.flip with flag 1 performs a horizontal flip; after flipping,
    the sign of the steering angle is reversed.
    :param img: input image
    :param degree: steering angle of the original image
    :return: (possibly flipped image, corresponding angle)
    '''
    choice = np.random.choice([0, 1])
    if choice == 1:
        img, degree = cv2.flip(img, 1), -degree
    return (img, degree)
2 Randomly adjust the image brightness
def randonm_brightness(img, degree):
    '''
    Randomly adjust the brightness of the input image.
    The scaling factor lies between 0.1 (much darker) and 1.0 (unchanged).
    :param img: input image (BGR)
    :param degree: steering angle (returned unchanged)
    :return: (adjusted image, converted to RGB, and the angle)
    '''
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # scale the brightness channel V: alpha * V
    alpha = np.random.uniform(low=0.1, high=1, size=None)
    v = hsv[:, :, 2]
    v = v * alpha
    hsv[:, :, 2] = v.astype('uint8')
    rgb = cv2.cvtColor(hsv.astype('uint8'), cv2.COLOR_HSV2RGB)
    return (rgb, degree)
3 Converting the steering angle for the left and right cameras
import math

def left_right_random_swap(img_address, degree, degree_corr=1.0/4):
    '''
    Randomly pick the left, center, or right camera image and adjust the
    steering angle accordingly. The correction is applied in tangent space:
    arctan(tan(degree) +/- degree_corr).
    :param img_address: file path of the center-camera image
    :param degree: steering angle associated with the center image
    :param degree_corr: correction constant relating the left/center/right views
    :return: (selected image path, corrected angle)
    '''
    swap = np.random.choice(['L', 'R', 'C'])
    if swap == 'L':
        img_address = img_address.replace('center', 'left')
        corrected_label = np.arctan(math.tan(degree) + degree_corr)
        return (img_address, corrected_label)
    elif swap == 'R':
        img_address = img_address.replace('center', 'right')
        corrected_label = np.arctan(math.tan(degree) - degree_corr)
        return (img_address, corrected_label)
    else:
        return (img_address, degree)
4 Data balancing: discard images with a steering angle of 0 with a certain probability
def discard_zero_steering(degrees, rate):
    '''
    Select indices of zero-steering-angle samples to discard.
    :param degrees: array of steering angles
    :param rate: discard ratio; e.g. rate = 0.8 drops 80% of the zero-angle samples
    :return: indices of the samples to remove
    '''
    steering_zero_idx = np.where(degrees == 0)
    steering_zero_idx = steering_zero_idx[0]
    size_del = int(len(steering_zero_idx) * rate)
    return np.random.choice(steering_zero_idx, size=size_del, replace=False)
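A brief usage sketch (the variable names img_paths and degrees are assumptions): the indices returned by discard_zero_steering are removed from both the angle array and the matching image-path array.

# Drop the selected zero-angle samples from both arrays (names are illustrative).
degrees = np.array(degrees)
img_paths = np.array(img_paths)
idx_to_drop = discard_zero_steering(degrees, rate=0.8)
degrees = np.delete(degrees, idx_to_drop)
img_paths = np.delete(img_paths, idx_to_drop)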
5 Implementation of the network architecture
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.optimizers import SGD, Adamax
from keras.regularizers import l1, l2

def get_model(shape):
    '''
    Predict the steering angle: takes an image as input and predicts the
    steering angle.
    shape: input image size, e.g. (128, 128, 3)
    '''
    model = Sequential()
    model.add(Conv2D(8, (5, 5), strides=(1, 1), padding="valid",
                     activation='relu', input_shape=shape))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(16, (4, 4), strides=(1, 1), padding="valid", activation='relu',
                     kernel_regularizer=l2(0.01), activity_regularizer=l1(0.01)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(16, (4, 4), strides=(1, 1), padding="valid", activation='relu',
                     kernel_regularizer=l2(0.01), activity_regularizer=l1(0.01)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(16, (5, 5), strides=(1, 1), padding="valid", activation='relu',
                     kernel_regularizer=l2(0.01), activity_regularizer=l1(0.01)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(50, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='linear'))

    sgd = SGD(lr=0.01)
    adm = Adamax(lr=0.02, beta_1=0.9, beta_2=0.999)  # alternative optimizer (unused)
    model.compile(optimizer=sgd, loss='mean_squared_error')
    return model
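A minimal sketch of building and inspecting the model; the input shape (128, 128, 3) is only the example size mentioned in the docstring.

# Build the model for 128x128 RGB inputs and print the layer summary.
model = get_model((128, 128, 3))
model.summary()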
6 Crop the region of interest from each image and normalize it
# Crop rows 80-140 (the road region), resize to the network input size,
# and scale pixel values to the range [-0.5, 0.5].
X[example, :, :, :] = cv2.resize(img[80:140, 0:320], (shape[0], shape[1])) / 255 - 0.5
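Putting the pieces together, the sketch below shows one way to feed the network: a batch generator that applies the augmentation functions above plus the crop/normalize step, followed by a training call. The generator structure, the variable names, and the fit_generator call are assumptions for illustration, not the author's exact training script.

# Illustrative batch generator combining the augmentation and preprocessing steps.
def batch_generator(img_paths, degrees, batch_size, shape=(128, 128, 3)):
    X = np.zeros((batch_size, shape[0], shape[1], shape[2]))
    y = np.zeros(batch_size)
    while True:
        for example in range(batch_size):
            i = np.random.randint(len(img_paths))
            # choose left/center/right image and correct the angle
            address, degree = left_right_random_swap(img_paths[i], degrees[i])
            img = cv2.imread(address)
            # brightness and horizontal-flip augmentation
            img, degree = randonm_brightness(img, degree)
            img, degree = horizontal_filp(img, degree)
            # crop the road region, resize, and normalize to [-0.5, 0.5]
            X[example, :, :, :] = cv2.resize(img[80:140, 0:320],
                                             (shape[0], shape[1])) / 255 - 0.5
            y[example] = degree
        yield X, y

model.fit_generator(batch_generator(img_paths, degrees, batch_size=64),
                    steps_per_epoch=200, epochs=10)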
7 Training results