Training YOLO-v3 on Your Own Dataset with Win7 + Keras + TensorFlow


I. Download and test the model

1. Download YOLO-v3

git clone https://github.com/qqwweee/keras-yolo3.git

This is the command on Ubuntu. On Windows, download the repository directly from https://github.com/qqwweee/keras-yolo3 and unzip it; you will get a keras-yolo3-master folder.

2. Download the weights

wget https://pjreddie.com/media/files/yolov3.weights

Download the weights from https://pjreddie.com/media/files/yolov3.weights and place yolov3.weights in the keras-yolo3-master folder.

3. Generate the h5 file

python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5

Running convert.py converts the Darknet YOLO weights into an h5 file usable by Keras; the generated h5 is saved under model_data. Both convert.py and yolov3.cfg are already in the keras-yolo3-master folder, so there is nothing extra to download.

4. Run an image-detection test with the pretrained yolo.h5

python yolo_video.py --image

After it starts, you will be prompted for an image path. Since my test image sits in the same directory as yolo_video.py, I can type just the file name without a path.

If the image is displayed with detection boxes drawn on it, the test succeeded.

 

II. Create your own VOC dataset

See my earlier post:

Creating your own VOC dataset on Ubuntu

I annotated on Ubuntu and then copied the data over to Windows. If you have LabelImg installed on Windows, you can annotate there directly.

The final file layout is:
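For reference, a typical layout follows the standard VOCdevkit structure with a VOC2018 subfolder (the "2018" matches the year tag used by the scripts below; the folder names are the usual VOC conventions, not something mandated by keras-yolo3). A sketch that creates it:

```python
import os

# Standard VOC-style layout assumed by the annotation scripts in this post.
# "VOC2018" matches the year tag ('2018') used in voc_annotation.py below.
base = os.path.join("voc", "VOCdevkit", "VOC2018")
for sub in ("Annotations",                       # one LabelImg .xml file per image
            "JPEGImages",                        # the .jpg images themselves
            os.path.join("ImageSets", "Main")):  # train/val/test/trainval .txt lists
    os.makedirs(os.path.join(base, sub), exist_ok=True)

print(sorted(os.listdir(base)))
```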

 

III. Modify the configuration files and run training

1. Copy voc_annotation.py into the voc folder and edit its class list, as shown below.

    Run voc_annotation.py to obtain the four txt files (2018_train.txt, 2018_val.txt, 2018_test.txt, 2018_trainval.txt).

import xml.etree.ElementTree as ET
from os import getcwd

sets = [('2018', 'train'), ('2018', 'val'), ('2018', 'test'), ('2018', 'trainval')]

classes = []  # fill in your own class names, e.g. ['cat', 'dog', ...]


def convert_annotation(year, image_id, list_file):
    # raw strings so the Windows backslashes are not treated as escape sequences
    in_file = open(r'VOCdevkit\VOC%s\Annotations\%s.xml' % (year, image_id), encoding='utf-8')
    tree = ET.parse(in_file)
    root = tree.getroot()

    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (int(xmlbox.find('xmin').text), int(xmlbox.find('ymin').text),
             int(xmlbox.find('xmax').text), int(xmlbox.find('ymax').text))
        list_file.write(" " + ",".join([str(a) for a in b]) + ',' + str(cls_id))

wd = getcwd()

for year, image_set in sets:
    image_ids = open(r'VOCdevkit\VOC%s\ImageSets\Main\%s.txt' % (year, image_set)).read().strip().split()
    list_file = open('%s_%s.txt' % (year, image_set), 'w')
    for image_id in image_ids:
        list_file.write(r'%s\VOCdevkit\VOC%s\JPEGImages\%s.jpg' % (wd, year, image_id))
        convert_annotation(year, image_id, list_file)
        list_file.write('\n')

    list_file.close()

Most versions online generate only the three files train, val, and test; I think a trainval file should be added as well. I also changed every / to \ (path notation differs between Windows and Linux). The encoding='utf-8' argument prevents a read error on Windows, which I ran into myself.
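Each line in the generated txt files has the form `image_path box1 box2 ...`, where each box is `xmin,ymin,xmax,ymax,class_id`. A minimal parsing sketch (the path and numbers below are made up for illustration):

```python
# Hypothetical annotation line in the format produced by voc_annotation.py:
#   <image path> <xmin,ymin,xmax,ymax,class_id> ...
line = r'C:\data\VOCdevkit\VOC2018\JPEGImages\0001.jpg 48,240,195,371,11 8,12,352,498,14'

parts = line.split()
image_path = parts[0]
# each remaining token is one bounding box: x_min, y_min, x_max, y_max, class id
boxes = [tuple(int(v) for v in tok.split(',')) for tok in parts[1:]]

print(image_path)
print(boxes)
```

Note that this format assumes the image path contains no spaces.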

2. In the model_data folder, create a my_classes.txt and write your class names into it, one per line. Name the file after your data; for example, if you are detecting flower species, call it flower.txt. A meaningful name is best.
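A quick sketch of what the classes file looks like and how the training script will read it back (the class names here are hypothetical):

```python
# Write a classes file: one class name per line, no extra whitespace.
names = ['cat', 'dog', 'bird']          # hypothetical class names
with open('my_classes.txt', 'w') as f:
    f.write('\n'.join(names) + '\n')

# This mirrors get_classes() in train.py below.
with open('my_classes.txt') as f:
    class_names = [c.strip() for c in f.readlines()]

print(class_names)   # the line order defines each class's numeric id
```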

3. Modify the yolov3.cfg file

Following the transfer-learning idea, we continue training from the pretrained weights. This requires the following changes:

Open the cfg file in your IDE and press Ctrl+F to search for yolo; you will find 3 places containing [yolo].

Each of the 3 places needs 3 changes:

          filters: 3*(5+len(classes)), in the [convolutional] layer just above each [yolo] section

          classes: len(classes); my class count is 17, so classes = 17 and filters = 3*(5+17) = 66

          random: originally 1; change it to 0 if GPU memory is limited
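These three edits can also be scripted. A rough sketch, run over a made-up fragment of the cfg format rather than the real file (the substitutions assume the stock filters=255, classes=80, random=1 values from the COCO-trained cfg):

```python
import re

num_classes = 17                      # my class count from this post
filters = 3 * (5 + num_classes)       # 3*(5+17) = 66

# Hypothetical fragment of yolov3.cfg around one [yolo] section.
cfg = """[convolutional]
filters=255

[yolo]
classes=80
random=1
"""

# Replace the stock COCO values with ours; applied to the full file, this
# hits all 3 [yolo] sections and the 3 [convolutional] layers above them.
cfg = re.sub(r'filters=255', 'filters=%d' % filters, cfg)
cfg = re.sub(r'classes=80', 'classes=%d' % num_classes, cfg)
cfg = re.sub(r'random=1', 'random=0', cfg)

print(cfg)
```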

      

Regenerate the h5 file, this time with the -w flag so that only the weights are saved:

python convert.py -w yolov3.cfg yolov3.weights model_data/yolo_weights.h5

 

4. Train

Run the train.py below:

python train.py
"""
Retrain the YOLO model for your own dataset.
"""
import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model
from keras.callbacks import TensorBoard, ModelCheckpoint, EarlyStopping
 
from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
from yolo3.utils import get_random_data
 
 
def _main():
    annotation_path = 'voc/2018_trainval.txt'    # change to your annotation file
    log_dir = 'model_data/logs/'                 # change to your log directory
    classes_path = 'model_data/my_classes.txt'   # change to your classes file
    anchors_path = 'model_data/yolo_anchors.txt'
    class_names = get_classes(classes_path)
    anchors = get_anchors(anchors_path)
    input_shape = (416,416) # multiple of 32, hw
    model = create_model(input_shape, anchors, len(class_names))
    train(model, annotation_path, input_shape, anchors, len(class_names), log_dir=log_dir)
 
def train(model, annotation_path, input_shape, anchors, num_classes, log_dir='logs/'):
    model.compile(optimizer='adam', loss={
        'yolo_loss': lambda y_true, y_pred: y_pred})
    logging = TensorBoard(log_dir=log_dir)
    checkpoint = ModelCheckpoint(log_dir + "ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5",
        monitor='val_loss', save_weights_only=True, save_best_only=True, period=1)
    batch_size = 10
    val_split = 0.2
    with open(annotation_path) as f:
        lines = f.readlines()
    np.random.shuffle(lines)
    num_val = int(len(lines)*val_split)
    num_train = len(lines) - num_val
    print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
 
    model.fit_generator(data_generator_wrap(lines[:num_train], batch_size, input_shape, anchors, num_classes),
            steps_per_epoch=max(1, num_train//batch_size),
            validation_data=data_generator_wrap(lines[num_train:], batch_size, input_shape, anchors, num_classes),
            validation_steps=max(1, num_val//batch_size),
            epochs=20,
            initial_epoch=0)
    model.save_weights(log_dir + 'trained_weights.h5')
 
def get_classes(classes_path):
    with open(classes_path) as f:
        class_names = f.readlines()
    class_names = [c.strip() for c in class_names]
    return class_names
 
def get_anchors(anchors_path):
    with open(anchors_path) as f:
        anchors = f.readline()
    anchors = [float(x) for x in anchors.split(',')]
    return np.array(anchors).reshape(-1, 2)
 
def create_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=True,
            weights_path='model_data/yolo_weights.h5'):
    # load_pretrained/freeze_body default to True so the pretrained
    # yolo_weights.h5 generated above is actually used for transfer learning
    K.clear_session() # get a new session
    image_input = Input(shape=(None, None, 3))
    h, w = input_shape
    num_anchors = len(anchors)
    y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], \
        num_anchors//3, num_classes+5)) for l in range(3)]
 
    model_body = yolo_body(image_input, num_anchors//3, num_classes)
    print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))
 
    if load_pretrained:
        model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
        print('Load weights {}.'.format(weights_path))
        if freeze_body:
            # Do not freeze 3 output layers.
            num = len(model_body.layers)-3
            for i in range(num): model_body.layers[i].trainable = False
            print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))
 
    model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
        arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
        [*model_body.output, *y_true])
    model = Model([model_body.input, *y_true], model_loss)
    return model

def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes):
    n = len(annotation_lines)
    np.random.shuffle(annotation_lines)
    i = 0
    while True:
        image_data = []
        box_data = []
        for b in range(batch_size):
            i %= n
            image, box = get_random_data(annotation_lines[i], input_shape, random=True)
            image_data.append(image)
            box_data.append(box)
            i += 1
        image_data = np.array(image_data)
        box_data = np.array(box_data)
        y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
        yield [image_data, *y_true], np.zeros(batch_size)
 
def data_generator_wrap(annotation_lines, batch_size, input_shape, anchors, num_classes):
    n = len(annotation_lines)
    if n==0 or batch_size<=0: return None
    return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes)
 
if __name__ == '__main__':
    _main()

The highlighted paths at the top of _main (annotation_path, log_dir, classes_path) need to be changed to match your own setup.

Other parameters you can adjust:

batch_size = 32: the default is fairly large and demands a capable machine; it can be lowered. I set it to 10.

val_split = 0.1: the fraction of the data held out for validation. I recommend making it larger; otherwise the validation set contains very few images, which makes the validation loss unreliable. I used 0.2.

epochs = 100: can be reduced. I set it to 20.
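The effect of these settings on the training loop can be checked with a quick calculation, mirroring the arithmetic in train() above (the dataset size of 500 is made up):

```python
# Hypothetical dataset size; the other values are the ones used in this post.
num_lines = 500
batch_size = 10
val_split = 0.2

num_val = int(num_lines * val_split)          # images held out for validation
num_train = num_lines - num_val
steps_per_epoch = max(1, num_train // batch_size)
validation_steps = max(1, num_val // batch_size)

print(num_train, num_val)                 # 400 100
print(steps_per_epoch, validation_steps)  # 40 10
```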

References:

https://blog.csdn.net/m0_37857151/article/details/81330699

https://blog.csdn.net/mingqi1996/article/details/83343289

