The mmdetection install follows https://github.com/open-mmlab/mmdetection/blob/master/docs/get_started.md
At step 3, "Install mmcv-full", my CUDA turned out to be 10.1 and my PyTorch 1.7.1, so I used this command:
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.7.1/index.html
This is actually wrong, but it does not raise an error: pip simply installs the latest mmcv-full build it can find, which is not what I wanted. I wanted the build for CUDA 10.1 and PyTorch 1.7.1. The reason is that this index URL does not exist:
https://download.openmmlab.com/mmcv/dist/cu101/torch1.7.1/index.html
To get the mmcv-full build for CUDA 10.1 and PyTorch 1.7.1, you have to use
https://download.openmmlab.com/mmcv/dist/cu101/torch1.7.0/index.html
because that index URL does exist.
I learned this from the mmcv-full compatibility table, linked here: https://mmcv.readthedocs.io/en/latest/#install-with-pip
In that table, torch1.7 means torch1.7.0; in the download URL you can write it neither as torch1.7 nor as torch1.7.1.
For step 3 you only need to pick one of the mmcv installation methods.
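Once it is installed, a quick sanity check (just a sketch, not part of the official steps) is to print the versions and the CUDA toolkit that mmcv-full was compiled against:

```python
import torch
import mmcv
from mmcv.ops import get_compiling_cuda_version, get_compiler_version

print(torch.__version__)              # expect 1.7.1 here
print(torch.version.cuda)             # expect 10.1 here
print(mmcv.__version__)               # the mmcv-full version that actually got installed
print(get_compiling_cuda_version())   # CUDA version the mmcv-full wheel was built with
print(get_compiler_version())
```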
Then set up YOLOv3 as described below. One problem you hit is the error '`cfg` or `default_args` must contain the key "type"'.
It comes from the line runner = dict(max_epochs=300) in yolov3_d53_mstrain-608_273e_coco.py, a bug that has not been fixed yet.
It should read runner = dict(type='EpochBasedRunner', max_epochs=300), i.e. add type='EpochBasedRunner'.
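For reference, a sketch of that one-line fix in the config (the error message in question is the one quoted above):

```python
# configs/yolo/yolov3_d53_mstrain-608_273e_coco.py
# before (raises: `cfg` or `default_args` must contain the key "type"):
# runner = dict(max_epochs=300)
# after:
runner = dict(type='EpochBasedRunner', max_epochs=300)
```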
When running the YOLO test, you have to give the checkpoint as an absolute path; with a relative path it errors out. I don't know why; it is clearly a bug.
d=====( ̄▽ ̄*)b I'm just a little content porter! All of this was put together standing on the shoulders of giants~~~
Wow, confetti again~~~
Installation
Project repo: https://github.com/open-mmlab/mmdetection
Installation details
Environment
python 3.7 pytorch 1.6.0 torchvision 0.7.0 cuda 10.2
- conda create -n mmdetection python=3.7
- git clone https://github.com/open-mmlab/mmdetection.git
- conda install pytorch=1.6.0 torchvision=0.7.0 cudatoolkit=10.2 -c pytorch
https://github.com/open-mmlab/mmcv
Download the matching mmcv build according to this repo.
- pip install mmcv-full==1.2.2 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.6.0/index.html
- pip install -r /home/lhh/workspace/AnacondaProjects/mmdetection/mmdetection/requirements/build.txt (appending the Tsinghua mirror might make this faster; I did not try it)
- Run a bit of code to check, and it works!
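For example, a minimal inference check (assuming mmdetection itself has been installed with pip install -v -e . in the repo root as in get_started.md, and that a matching checkpoint has been downloaded; the checkpoint file name below is a placeholder):

```python
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco.pth'  # placeholder checkpoint name
model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')              # demo image shipped with the repo
model.show_result('demo/demo.jpg', result, out_file='demo_result.jpg')
```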
Training on your own dataset
Dataset format
Paths and configs are all set under this folder: /home/lhh/workspace/AnacondaProjects/mmdetection/mmdetection/configs/_base_
- Set the paths in the .py files under datasets (if you use the COCO dataset, edit the corresponding py file), e.g. voc0712.py
- In the models folder, edit the model's .py file to set the number of classes
- In the schedules folder, edit the .py file to set the number of epochs
- Set the class names in mmdet/datasets/voc.py; with a single class, keep the trailing comma (see the sketch after this list)
- Set the class names in mmdet/core/evaluation/class_names.py
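A minimal sketch of what those two class-name edits look like for a one-class dataset ('defect' is a made-up class name, not from this post):

```python
# mmdet/datasets/voc.py
CLASSES = ('defect',)    # single class: the trailing comma keeps this a tuple, not a plain string

# mmdet/core/evaluation/class_names.py
def voc_classes():
    return ['defect']    # keep this list in sync with CLASSES above
```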
Example: the VOC dataset with faster_rcnn
- Edit schedule_1x.py: change the training epochs on its last line
- Edit faster_rcnn_r50_fpn_1x_coco.py in the config folder (/home/lhh/workspace/AnacondaProjects/mmdetection/mmdetection/configs/faster_rcnn) to set the locations of the config files and the dataset type (see the config sketch after this list)
- Create a work_dir folder to save the training logs and results
- Run training (check train.py under the tools folder to see which arguments it needs), for example:
  python tools/train.py configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py --work-dir work_dir
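As a rough sketch of what the edited top-level config can end up looking like (the _base_ entries are the stock ones from the repo, voc0712.py is swapped in for the COCO dataset config, and num_classes=1 is only an example; the post edits the base model file directly, while overriding in the child config as shown here is an equivalent alternative):

```python
# configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py (sketch)
_base_ = [
    '../_base_/models/faster_rcnn_r50_fpn.py',
    '../_base_/datasets/voc0712.py',        # VOC-style dataset config instead of coco_detection.py
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py',
]
# match the head to your own class count instead of the 80 COCO classes
model = dict(roi_head=dict(bbox_head=dict(num_classes=1)))
```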
Training results:
Plotting the mAP curve
- mmdetection$ python tools/analyze_logs.py plot_curve ./work_dir/20201228_234809.log.json --keys mAP --legend mAP --out mAP.jpg
Later on I moved the training logs and results into a single folder, so the paths above will change a bit.
- Reference: https://www.cnblogs.com/beeblog72/p/12076562.html
- The loss curve is plotted the same way:
- python tools/analyze_logs.py plot_curve ./work_dir/20201228_234809.log.json --keys loss --legend loss --out loss.jpg
- Accuracy:
- python tools/analyze_logs.py plot_curve ./work_dir/faster_rcnn_r50_fpn_1x_coco/20201228_234809.log.json --keys acc --legend acc --out acc.jpg
Testing
Reference: https://blog.csdn.net/zxfhahaha/article/details/103754467
Note: test.py only runs evaluation for COCO-style datasets, so first use test.py to dump a pkl file and then use eval_metric.py to compute mAP.
- python tools/test.py configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py work_dir/latest.pth --out results.pkl
--out can be given a path; otherwise the file is written straight into the project root.
- python tools/eval_metric.py configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py results.pkl --eval=mAP
Use the pkl file to compute the AP of each class.
Test results
Confetti~~~
Example: the COCO dataset with the YOLOv3 model
Dataset format:
train2017 holds the images
annotations holds the json annotation files:
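For orientation, the stock COCO layout that MMDetection's coco_detection.py expects looks roughly like this (the actual folder and json names used in this post, e.g. F:/dataDB/precoco and val.json from the converter below, differ from these defaults):

```
data/coco/
├── annotations/
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── train2017/    # training images
└── val2017/      # validation images
```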
First prepare three datasets in the same VOC format, holding the train, test and val splits respectively.
- A txt file stores the image names, one per line; copy the images with those names into another folder:
```python
from PIL import Image

f3 = open("F:/dataDB/precoco/val/ImageSets/Main/val.txt", 'r')  # path of the txt listing the split
for line2 in f3.readlines():
    line3 = line2[:-1]  # strip the trailing newline from each line
    im = Image.open('H:/make_data/AB/Images02/{}.jpg'.format(line3))  # open the image named by line3 under this path
    im.save('F:/dataDB/precoco/val/JPEGImages/{}.jpg'.format(line3))  # save it into the target folder
f3.close()
```
- Likewise, copy the xml annotations of the image names listed in a txt file into another folder:
```python
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import shutil

data = []
for line in open("F:/dataDB/precoco/val/ImageSets/Main/val.txt", "r"):  # read the txt line by line
    data.append(line)

for a in data:
    # print(a)
    line3 = a[:-1]  # strip the trailing newline (the txt stores one bare image id per line, e.g. 000001)
    # print('line3', line3)
    line4 = line3 + '.xml'
    print(line4)
    oldname = r'H:/make_data/AB/Anotations02/{}'.format(line4)
    # print('old', oldname)
    newname = r'F:/dataDB/precoco/val/Annotations/{}'.format(line4)
    # print('new', newname)
    shutil.copyfile(oldname, newname)  # copy the needed file from oldname to newname
```
- VOC to COCO conversion:
```python
import xml.etree.ElementTree as ET
import os
import json

coco = dict()
coco['images'] = []
coco['type'] = 'instances'
coco['annotations'] = []
coco['categories'] = []

category_set = dict()
image_set = set()

category_item_id = -1
image_id = 20180000000
annotation_id = 0


def addCatItem(name):
    global category_item_id
    category_item = dict()
    category_item['supercategory'] = 'none'
    category_item_id += 1
    category_item['id'] = category_item_id
    category_item['name'] = name
    coco['categories'].append(category_item)
    category_set[name] = category_item_id
    return category_item_id


def addImgItem(file_name, size):
    global image_id
    if file_name is None:
        raise Exception('Could not find filename tag in xml file.')
    if size['width'] is None:
        raise Exception('Could not find width tag in xml file.')
    if size['height'] is None:
        raise Exception('Could not find height tag in xml file.')
    image_id += 1
    image_item = dict()
    image_item['id'] = image_id
    image_item['file_name'] = file_name
    image_item['width'] = size['width']
    image_item['height'] = size['height']
    coco['images'].append(image_item)
    image_set.add(file_name)
    return image_id


def addAnnoItem(object_name, image_id, category_id, bbox):
    global annotation_id
    annotation_item = dict()
    annotation_item['segmentation'] = []
    seg = []
    # bbox[] is x,y,w,h
    # left_top
    seg.append(bbox[0])
    seg.append(bbox[1])
    # left_bottom
    seg.append(bbox[0])
    seg.append(bbox[1] + bbox[3])
    # right_bottom
    seg.append(bbox[0] + bbox[2])
    seg.append(bbox[1] + bbox[3])
    # right_top
    seg.append(bbox[0] + bbox[2])
    seg.append(bbox[1])
    annotation_item['segmentation'].append(seg)
    annotation_item['area'] = bbox[2] * bbox[3]
    annotation_item['iscrowd'] = 0
    annotation_item['ignore'] = 0
    annotation_item['image_id'] = image_id
    annotation_item['bbox'] = bbox
    annotation_item['category_id'] = category_id
    annotation_id += 1
    annotation_item['id'] = annotation_id
    coco['annotations'].append(annotation_item)


def parseXmlFiles(xml_path):
    for f in os.listdir(xml_path):
        if not f.endswith('.xml'):
            continue

        bndbox = dict()
        size = dict()
        current_image_id = None
        current_category_id = None
        file_name = None
        size['width'] = None
        size['height'] = None
        size['depth'] = None

        xml_file = os.path.join(xml_path, f)
        print(xml_file)

        tree = ET.parse(xml_file)
        root = tree.getroot()
        if root.tag != 'annotation':
            raise Exception('pascal voc xml root element should be annotation, rather than {}'.format(root.tag))

        # elem is <folder>, <filename>, <size>, <object>
        for elem in root:
            current_parent = elem.tag
            current_sub = None
            object_name = None

            if elem.tag == 'folder':
                continue

            if elem.tag == 'filename':
                file_name = elem.text
                if file_name in category_set:
                    raise Exception('file_name duplicated')

            # add img item only after parse <size> tag
            elif current_image_id is None and file_name is not None and size['width'] is not None:
                if file_name not in image_set:
                    current_image_id = addImgItem(file_name, size)
                    print('add image with {} and {}'.format(file_name, size))
                else:
                    raise Exception('duplicated image: {}'.format(file_name))

            # subelem is <width>, <height>, <depth>, <name>, <bndbox>
            for subelem in elem:
                bndbox['xmin'] = None
                bndbox['xmax'] = None
                bndbox['ymin'] = None
                bndbox['ymax'] = None

                current_sub = subelem.tag
                if current_parent == 'object' and subelem.tag == 'name':
                    object_name = subelem.text
                    if object_name not in category_set:
                        current_category_id = addCatItem(object_name)
                    else:
                        current_category_id = category_set[object_name]
                elif current_parent == 'size':
                    if size[subelem.tag] is not None:
                        raise Exception('xml structure broken at size tag.')
                    size[subelem.tag] = int(subelem.text)

                # option is <xmin>, <ymin>, <xmax>, <ymax>, when subelem is <bndbox>
                for option in subelem:
                    if current_sub == 'bndbox':
                        if bndbox[option.tag] is not None:
                            raise Exception('xml structure corrupted at bndbox tag.')
                        bndbox[option.tag] = int(option.text)

                # only after parse the <object> tag
                if bndbox['xmin'] is not None:
                    if object_name is None:
                        raise Exception('xml structure broken at bndbox tag')
                    if current_image_id is None:
                        raise Exception('xml structure broken at bndbox tag')
                    if current_category_id is None:
                        raise Exception('xml structure broken at bndbox tag')
                    bbox = []
                    # x
                    bbox.append(bndbox['xmin'])
                    # y
                    bbox.append(bndbox['ymin'])
                    # w
                    bbox.append(bndbox['xmax'] - bndbox['xmin'])
                    # h
                    bbox.append(bndbox['ymax'] - bndbox['ymin'])
                    print('add annotation with {},{},{},{}'.format(object_name, current_image_id, current_category_id, bbox))
                    addAnnoItem(object_name, current_image_id, current_category_id, bbox)


if __name__ == '__main__':
    xml_path = 'F:/dataDB/precoco/val/Annotations'  # directory where the xml files live
    json_file = 'F:/dataDB/precoco/val/ImageSets/val.json'  # the json file to be generated
    parseXmlFiles(xml_path)  # only these two paths need changing
    json.dump(coco, open(json_file, 'w'))
```
Reference:
https://blog.csdn.net/weixin_41765699/article/details/100124689
Modifications
- Edit coco_detection.py under configs/_base_/datasets (sketched after this list)
- The class names in coco.py
- The class names in class_names.py
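A minimal sketch of those edits (the data paths and the 'defect' class name are placeholders for your own data, not the exact values from this post):

```python
# configs/_base_/datasets/coco_detection.py: point the annotation files and image folders at your data
data_root = 'data/precoco/'
data = dict(
    train=dict(ann_file=data_root + 'annotations/train.json',
               img_prefix=data_root + 'train2017/'),
    val=dict(ann_file=data_root + 'annotations/val.json',
             img_prefix=data_root + 'val2017/'),
    test=dict(ann_file=data_root + 'annotations/val.json',
              img_prefix=data_root + 'val2017/'))

# mmdet/datasets/coco.py
CLASSES = ('defect',)

# mmdet/core/evaluation/class_names.py
def coco_classes():
    return ['defect']
```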
Errors:
The yolov3…py config I used before was problematic: it had no class count (num_classes) set, and I kept banging my head against it for ages....
I switched to the one shown in the figure below:
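The class count lives in the YOLOv3 head. One way to set it without touching the stock file is a small override config (the file name and the value 1 below are made up for illustration; the post itself edits the config shown in its figure):

```python
# e.g. configs/yolo/yolov3_d53_mstrain-608_273e_mydata.py (hypothetical file name)
_base_ = './yolov3_d53_mstrain-608_273e_coco.py'
model = dict(bbox_head=dict(num_classes=1))   # replace 1 with your own number of classes
```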
- Run the training command:
- python tools/train.py configs/yolo/yolov3_d53_mstrain-608_273e_coco.py --work-dir work_dir/yolov3_d53_320_273e_coco
- It runs successfully!
Confetti~~~
Testing
- python tools/test.py configs/yolo/yolov3_d53_mstrain-608_273e_coco.py work_dir/yolov3_d53_mstrain-608_273e_coco/latest.pth --out result.pkl --eval bbox