Extracting Specific Classes from the COCO Dataset and Training YOLO-V3 on Them


Much of the code in this post is copied from https://blog.csdn.net/TYUT_xiaoming/article/details/102480016. Here I mainly record the problems I ran into in practice and how I solved them; follow the steps below and you should be able to finish this task smoothly~

Step 1 Prepare

The YOLOv3 code is forked from https://github.com/eriklindernoren/PyTorch-YOLOv3

You need to download the COCO dataset yourself.
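If you do not have COCO locally yet, the following sketch (my own addition, not from the original post) downloads the val2014 images and the 2014 annotations from the standard COCO download URLs and unpacks them into the /coco/ layout assumed in Step 2; verify the URLs before kicking off a large download.

import os
import urllib.request
import zipfile

# Assumed standard COCO 2014 download URLs; dataDir matches the layout used in
# Step 2 (images under /coco/images/<split>/, annotations under /coco/annotations/).
dataDir = "/coco/"
downloads = {
    "http://images.cocodataset.org/zips/val2014.zip": os.path.join(dataDir, "images"),
    "http://images.cocodataset.org/annotations/annotations_trainval2014.zip": dataDir,
}

for url, target in downloads.items():
    os.makedirs(target, exist_ok=True)
    zip_path = os.path.join(dataDir, os.path.basename(url))
    if not os.path.exists(zip_path):
        print("downloading", url)
        urllib.request.urlretrieve(url, zip_path)
    with zipfile.ZipFile(zip_path) as zf:
        # val2014.zip unpacks to images/val2014/, the annotations zip to annotations/
        zf.extractall(target)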

Step 2 Extract images and annotation information

First, run the code below to extract the images of the classes you need from the original COCO dataset. The things you need to modify are:

  • savepath
  • datasets_list
  • classes_names
  • dataDir
from pycocotools.coco import COCO
import os
import shutil
from tqdm import tqdm
import skimage.io as io
import matplotlib.pyplot as plt
import cv2
from PIL import Image, ImageDraw

#the path you want to save your results for coco to voc
savepath="/coco_class/"
img_dir=savepath+'images/val2014/'
anno_dir=savepath+'Annotations/val2014/'
# datasets_list=['train2014', 'val2014']
# datasets_list=['train2014']
datasets_list=['val2014']
classes_names = ["person","bicycle","car","motorbike", "bus", "truck"] 

#Store annotations and train2014/val2014/... in this folder
dataDir= '/coco/'  

headstr = """\
<annotation>
    <folder>VOC</folder>
    <filename>%s</filename>
    <source>
        <database>My Database</database>
        <annotation>COCO</annotation>
        <image>flickr</image>
        <flickrid>NULL</flickrid>
    </source>
    <size>
        <width>%d</width>
        <height>%d</height>
        <depth>%d</depth>
    </size>
    <segmented>0</segmented>
"""
objstr = """\
    <object>
        <name>%s</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>%d</xmin>
            <ymin>%d</ymin>
            <xmax>%d</xmax>
            <ymax>%d</ymax>
        </bndbox>
    </object>
"""

tailstr = '''\
</annotation>
'''

# if the dir does not exist, create it (including parents); otherwise wipe it and recreate it
def mkr(path):
    if os.path.exists(path):
        shutil.rmtree(path)
    os.makedirs(path)

mkr(img_dir)
mkr(anno_dir)
def id2name(coco):
    classes=dict()
    for cls in coco.dataset['categories']:
        classes[cls['id']]=cls['name']
    return classes

def write_xml(anno_path, head, objs, tail):
    # write a VOC-style xml file: header, one <object> block per box, then the tail
    with open(anno_path, "w") as f:
        f.write(head)
        for obj in objs:
            f.write(objstr % (obj[0], obj[1], obj[2], obj[3], obj[4]))
        f.write(tail)


def save_annotations_and_imgs(coco,dataset,filename,objs):
    #eg:COCO_train2014_000000196610.jpg-->COCO_train2014_000000196610.xml
    anno_path=anno_dir+filename[:-3]+'xml'
    img_path=dataDir+'images/'+dataset+'/'+filename
    # print(img_path)
    dst_imgpath=img_dir+filename
    print(img_path)

    img = cv2.imread(img_path)
    if img is None:
        print(filename + " could not be read, skipping")
        return
    if img.shape[2] == 1:
        print(filename + " is not an RGB image")
        return

    shutil.copy(img_path, dst_imgpath)

    head=headstr % (filename, img.shape[1], img.shape[0], img.shape[2])
    tail = tailstr
    write_xml(anno_path,head, objs, tail)


def showimg(coco,dataset,img,classes,cls_id,show=True):
    global dataDir
    I=Image.open('%s/%s/%s/%s'%(dataDir,'images',dataset,img['file_name']))
    #Get the annotated information by ID
    annIds = coco.getAnnIds(imgIds=img['id'], catIds=cls_id, iscrowd=None)
    # print(annIds)
    anns = coco.loadAnns(annIds)
    # print(anns)
    # coco.showAnns(anns)
    objs = []
    for ann in anns:
        class_name=classes[ann['category_id']]
        if class_name in classes_names:
            print(class_name)
            if 'bbox' in ann:
                bbox=ann['bbox']
                xmin = int(bbox[0])
                ymin = int(bbox[1])
                xmax = int(bbox[2] + bbox[0])
                ymax = int(bbox[3] + bbox[1])
                obj = [class_name, xmin, ymin, xmax, ymax]
                objs.append(obj)
                draw = ImageDraw.Draw(I)
                draw.rectangle([xmin, ymin, xmax, ymax])
    if show:
        plt.figure()
        plt.axis('off')
        plt.imshow(I)
        plt.show()

    return objs

for dataset in datasets_list:
    #./COCO/annotations/instances_train2014.json
    annFile='{}/annotations/instances_{}.json'.format(dataDir,dataset)

    #COCO API for initializing annotated data
    coco = COCO(annFile)
    '''
    When the COCO object is created, the following information will be output:
    loading annotations into memory...
    Done (t=0.81s)
    creating index...
    index created!
    So far, the JSON script has been parsed and the images are associated with the corresponding annotated data.
    '''
    #show all classes in coco
    classes = id2name(coco)
    print(classes)
    #[1, 2, 3, 4, 6, 8]
    classes_ids = coco.getCatIds(catNms=classes_names)
    print(classes_ids)
    # exit()
    for cls in classes_names:
        #Get ID number of this class
        cls_id=coco.getCatIds(catNms=[cls])
        img_ids=coco.getImgIds(catIds=cls_id)
        print(cls,len(img_ids))
        # imgIds=img_ids[0:10]
        for imgId in tqdm(img_ids):
            img = coco.loadImgs(imgId)[0]
            filename = img['file_name']
            # print(filename)
            objs=showimg(coco, dataset, img, classes,classes_ids,show=False)
            print(objs)
            save_annotations_and_imgs(coco, dataset, filename, objs)


This step generates the extracted images folder and the Annotations (.xml) folder.

Step 3 Filter out incorrectly extracted annotations

With the code above, extracting multiple classes leaves some XML files without any object element at all, which is the reason this post exists..

I took a rather brute-force approach: simply delete the annotations and images that contain none of the classes we want. Run the code below:

import os

Dir = './coco_class/Annotations/val2014'
ImageDir = './coco_class/images/val2014'
cnt = 0
for i, file_name in enumerate(os.listdir(Dir)):
    fsize = os.path.getsize(os.path.join(Dir, file_name))
    # 410 bytes happens to be the size of an annotation that contains only the
    # header and tail, i.e. one with no <object> block at all
    if fsize == 410:
        print('removing {} of size {}'.format(file_name, fsize))
        os.remove(os.path.join(ImageDir, file_name[:-3] + 'jpg'))
        os.remove(os.path.join(Dir, file_name))
        cnt += 1

print('removed {} files'.format(cnt))
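If you would rather not rely on the 410-byte magic number (it only holds while the header, file-name length and image-size digits stay the same), here is an alternative sketch of my own that parses each XML and deletes the image/annotation pair whenever no <object> element is present:

import os
import xml.etree.ElementTree as ET

# Same folders as above; this variant checks the xml content instead of the file size.
Dir = './coco_class/Annotations/val2014'
ImageDir = './coco_class/images/val2014'

cnt = 0
for file_name in os.listdir(Dir):
    anno_path = os.path.join(Dir, file_name)
    if ET.parse(anno_path).getroot().find('object') is None:  # no wanted class survived
        os.remove(anno_path)
        img_path = os.path.join(ImageDir, file_name[:-3] + 'jpg')
        if os.path.exists(img_path):
            os.remove(img_path)
        cnt += 1

print('removed {} files'.format(cnt))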

OK, with that we have properly finished filtering the images.

Step 4 Convert .xml to .txt to generate the label files

Modify the following in the code below:

  • classes
  • data_path
  • list_file
  • in_file
  • out_file
import xml.etree.ElementTree as ET
import pickle
import os
from os import listdir, getcwd
from os.path import join
 
 
classes = ['person','bicycle','car','motorbike', 'bus', 'truck']  
#classes = ['truck']  
 
 
 
def convert(size, box):
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)
 
def convert_annotation(image_id):
    in_file = open('/coco_class/Annotations/train2014/%s.xml'%(image_id))
    out_file = open('/coco_class/labels/train2014/%s.txt'%(image_id), 'w')
    tree=ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
 
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        print(cls)
        if cls not in classes or int(difficult)==1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w,h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

    # close the files explicitly so every label file is flushed to disk
    in_file.close()
    out_file.close()
 
 
data_path = '/coco_class/images/train2014'
img_names = os.listdir(data_path)

# create the label folder once, before the loop
if not os.path.exists('/coco_class/labels/train2014'):
    os.makedirs('/coco_class/labels/train2014')

list_file = open('/coco_class/class_train.txt', 'w')
for img_name in img_names:
    list_file.write('/coco_class/images/train2014/%s\n' % img_name)
    image_id = img_name[:-4]
    convert_annotation(image_id)

list_file.close()
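As a quick sanity check of the VOC-to-YOLO conversion, here is a standalone example (made-up numbers, using the convert() defined above) showing the normalized (x_center, y_center, width, height) format that ends up in the label files:

# a 640x480 image with a box xmin=100, xmax=300, ymin=120, ymax=360
print(convert((640, 480), (100.0, 300.0, 120.0, 360.0)))
# -> approximately (0.3109, 0.4979, 0.3125, 0.5)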

At this point we have finished carving our subset out of the COCO dataset; next comes the YOLO part~

Step 5 Modify the YOLO-V3 code

Much of this part is copied from https://cloud.tencent.com/developer/ask/210396

  • Modify (or first make a backup copy of) the data/coco.names file and delete every class except the ones you want to detect
  • In the cfg file (e.g. config/yolov3.cfg), change the classes entry on lines 610, 696 and 783 from 80 to the number of classes you want to detect
  • On lines 603, 689 and 776 of the same cfg file, change filters from 255 to (classes + 5) x 3 = 33 (I train 6 classes, so (6 + 5) x 3)
  • In config/coco.data, point train and valid at the class_train.txt and class_valid.txt just generated in the coco_class folder, and update the class count as well (see the sketch after this list)
  • Run train.py or detect.py
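Below is a small helper sketch of my own (not from the original post) for the coco.names / coco.data edits above; the .data key names are my assumption, so compare them against the repo's own config/coco.data, and back up both files before letting this overwrite them:

# run from the PyTorch-YOLOv3 root, after backing up the original files
classes = ["person", "bicycle", "car", "motorbike", "bus", "truck"]

# data/coco.names: one class name per line, same order as the classes list in Step 4
with open("data/coco.names", "w") as f:
    f.write("\n".join(classes) + "\n")

# config/coco.data: paths generated in Step 4 (class_valid.txt assumes you reran
# Steps 2-4 with datasets_list=['val2014'])
with open("config/coco.data", "w") as f:
    f.write("classes=%d\n" % len(classes))
    f.write("train=/coco_class/class_train.txt\n")
    f.write("valid=/coco_class/class_valid.txt\n")
    f.write("names=data/coco.names\n")
    f.write("backup=backup/\n")

# value for the filters= entries right before each [yolo] block in yolov3.cfg
print("filters =", (len(classes) + 5) * 3)  # (6 + 5) * 3 = 33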

Step 6 A small bug encountered during training

During training I hit a "CUDA error: device-side assert triggered" bug; I found a solution in the official YOLOv3 issues: https://github.com/eriklindernoren/PyTorch-YOLOv3/issues/157

Just modify utils/utils.py as follows (the added lines clamp the grid indices gi and gj so they never fall outside the feature map):

b, target_labels = target[:, :2].long().t()
gx, gy = gxy.t()
gw, gh = gwh.t()
gi, gj = gxy.long().t()
########## TODO(arthur77wang):
gi[gi < 0] = 0
gj[gj < 0] = 0
gi[gi > nG - 1] = nG - 1
gj[gj > nG - 1] = nG - 1
###################
# Set masks
obj_mask[b, best_n, gj, gi] = 1
noobj_mask[b, best_n, gj, gi] = 0

I am not actually working in this field; I just ran into these problems while doing a project, so I am sharing them here in the hope of saving you some detours. If you have questions, feel free to contact me at yxzhangxmu@163.com~

