yolov5
These are mainly my study notes on YOLOv5.
YOLOv5 is a one-stage object detection algorithm. It adds a number of new improvements on top of YOLOv4, giving it large gains in both speed and accuracy.
The official Yolov5 code provides four versions of the detection network: Yolov5s, Yolov5m, Yolov5l and Yolov5x.
Yolov5s is the smallest network; it is the fastest, but its AP is also the lowest.
Still, if the task is dominated by large objects and speed is the priority, it is a good choice.
The other three networks deepen and widen the network step by step on this basis; AP keeps rising, but the speed cost keeps growing as well.
The network configuration files provided in the Yolov5 code are in yaml format.
This write-up uses Yolov5s as the example; the other three models are derived from it by expanding the width and increasing the depth (via the depth/width multiples discussed below).
Overall architecture
(1) Input: Mosaic data augmentation, adaptive anchor box calculation, and adaptive image scaling; these are mainly training-stage augmentations.
(2) Backbone: borrows ideas from other models, namely the Focus structure and the CSP structure.
(3) Neck: FPN+PAN structure.
(4) Prediction: the output is similar to earlier YOLO versions; the main points are the GIOU_Loss bounding-box loss and the NMS applied to the predicted boxes (DIOU_nms in Yolov4, weighted NMS in Yolov5).
Input
Input image size: --img-size, default=[640, 640]
1) Mosaic data augmentation (used during training)
The input stage of Yolov5 still uses the same Mosaic data augmentation as Yolov4. Mosaic augmentation takes 4 images and stitches them into one large image using random scaling, random cropping and random placement, which enriches the dataset and at the same time greatly speeds up network training.
Its main advantages are:
- Richer dataset: 4 images are picked at random, randomly scaled and then randomly arranged and stitched together, which greatly enriches the detection dataset.
- Lower GPU requirement: one could argue that ordinary data augmentation can also do random scaling, but because a single mosaic image already carries the data of 4 images, the mini-batch does not need to be large, so a single GPU can already train to a good result.
Core code: datasets.load_mosaic(self, index)
This function is called from the custom data loader LoadImagesAndLabels(Dataset).__getitem__(self, index).
def load_mosaic(self, index):
    # loads images in a 4-mosaic (data-augmentation mode)
    labels4, segments4 = [], []
    s = self.img_size  # 640
    yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border]  # mosaic center x, y (random centre of the large canvas, important)
    indices = [index] + random.choices(self.indices, k=3)  # 3 additional image indices, added on top of the base image
    for i, index in enumerate(indices):
        # Load image
        img, _, (h, w) = load_image(self, index)

        # place img in img4
        if i == 0:  # top left
            img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8)  # base image with 4 tiles
            x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc  # xmin, ymin, xmax, ymax (large image)
            x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h  # xmin, ymin, xmax, ymax (small image)
        elif i == 1:  # top right
            x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
            x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
        elif i == 2:  # bottom left
            x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
            x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h)
        elif i == 3:  # bottom right
            x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
            x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)

        # paste the four tiles into the mosaic canvas
        img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b]  # img4[ymin:ymax, xmin:xmax]
        padw = x1a - x1b
        padh = y1a - y1b
        # print(img4.shape)  # (1280, 1280, 3)

        # Labels (accumulate the labels of the four images)
        labels, segments = self.labels[index].copy(), self.segments[index].copy()
        if labels.size:
            labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh)  # normalized xywh to pixel xyxy format
            segments = [xyn2xy(x, w, h, padw, padh) for x in segments]
        labels4.append(labels)
        segments4.extend(segments)

    labels4 = np.concatenate(labels4, 0)  # e.g. shape (8, 5): class, xyxy
    # print(segments4)
    for x in (labels4[:, 1:], *segments4):
        np.clip(x, 0, 2 * s, out=x)  # clip when using random_perspective()
    # img4, labels4 = replicate(img4, labels4)  # replicate

    # Augment
    img4, labels4 = random_perspective(img4, labels4, segments4,
                                       degrees=self.hyp['degrees'],
                                       translate=self.hyp['translate'],
                                       scale=self.hyp['scale'],
                                       shear=self.hyp['shear'],
                                       perspective=self.hyp['perspective'],
                                       border=self.mosaic_border)  # border to remove

    return img4, labels4
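For reference, the label conversion called inside load_mosaic works roughly as follows. This is a simplified numpy-only sketch of xywhn2xyxy (the version in utils/general.py also accepts torch tensors), and the example values are made up:

import numpy as np

def xywhn2xyxy_sketch(x, w=640, h=640, padw=0, padh=0):
    # normalized [x_center, y_center, w, h] -> pixel [x1, y1, x2, y2], shifted by the mosaic offsets padw/padh
    y = np.copy(x)
    y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw  # top-left x
    y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh  # top-left y
    y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw  # bottom-right x
    y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh  # bottom-right y
    return y

boxes = np.array([[0.5, 0.5, 0.2, 0.4]])  # one normalized box, illustrative
print(xywhn2xyxy_sketch(boxes, w=640, h=480, padw=100, padh=50))  # [[356. 194. 484. 386.]]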
2) Adaptive anchor box calculation (used during training)
For every dataset there is an initial set of anchor boxes with preset widths and heights.
Starting from these initial anchors, the network outputs predicted boxes, compares them with the ground-truth boxes, computes the difference between the two, and then updates the network parameters iteratively through back-propagation.
Initial yolov5s settings:
# anchors
anchors:
- [10,13, 16,30, 33,23] # P3/8
- [30,61, 62,45, 59,119] # P4/16
- [116,90, 156,198, 373,326] # P5/32
parser.add_argument('--noautoanchor', default=False, action='store_true', help='disable autoanchor check')  # action flag (switch)
# function in autoanchor.py
check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
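As a rough illustration of what the autoanchor check measures, the sketch below computes the ratio-based "best possible recall" between dataset box sizes and a set of anchors. The names wh, anchors and anchor_fitness are illustrative; the real check_anchors/kmean_anchors code in utils/autoanchor.py additionally rescales the labels, applies random jitter, and falls back to k-means plus genetic evolution when the recall is too low.

import numpy as np

def anchor_fitness(wh, anchors, thr=4.0):
    # wh: (n, 2) label widths/heights in pixels; anchors: (m, 2); thr corresponds to hyp['anchor_t']
    r = wh[:, None, :] / anchors[None, :, :]   # (n, m, 2) size ratios
    x = np.minimum(r, 1.0 / r).min(axis=2)     # (n, m) worst-dimension ratio per label/anchor pair
    best = x.max(axis=1)                       # best matching anchor for each label
    bpr = (best > 1.0 / thr).mean()            # best possible recall
    return bpr

# illustrative usage with made-up boxes and the default P3/8 anchors
wh = np.array([[12.0, 15.0], [40.0, 80.0], [300.0, 250.0]])
anchors = np.array([[10, 13], [16, 30], [33, 23]], dtype=float)
print(anchor_fitness(wh, anchors, thr=4.0))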
3) Adaptive image scaling (used only for test-time inference)
In object detection, input images have different widths and heights. The common approach is to resize() the original image
to a standard size; for images with a large aspect ratio the Yolov5 code improves on this.
The letterbox function in datasets.py was modified so that it adaptively adds the minimum amount of border padding to the original image.
The main steps are:
1. Compute the scale ratio: e.g. two scale factors, 0.52 and 0.69, are obtained, and the smaller one, 0.52, is chosen.
2. Compute the scaled size: the width and height of the original image are both multiplied by the smaller scale factor.
3. Compute the padding: the padding that would normally be needed is reduced by taking the remainder with numpy, np.mod(x, 32).
np.mod uses 32 because the Yolov5 network downsamples 5 times (2^5 = 32), so each padded side must be a multiple of 32.
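A quick numeric walk-through of these three steps, assuming for illustration an 800x600 input and a 416x416 target size (which is where the 0.52 and 0.69 factors above come from):

import numpy as np

h0, w0 = 600, 800            # original image (height, width), illustrative values
target = 416                 # target size

r = min(target / w0, target / h0)              # 416/800 = 0.52, 416/600 = 0.69 -> take 0.52
new_w, new_h = round(w0 * r), round(h0 * r)    # 416 x 312 after scaling
dh = np.mod(target - new_h, 32)                # 104 -> only 8 pixels of padding instead of 104
top, bottom = dh // 2, dh - dh // 2            # split the padding between top and bottom
print(r, (new_w, new_h), dh, (top, bottom))    # 0.52 (416, 312) 8 (4, 4)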
Note:
1. Training does not use the reduced-padding trick; the traditional full padding is used instead.
2. Only at test time, when running model inference, is the reduced padding applied, which speeds up detection inference.
datasets.letterbox(img, shape, auto=False)
LoadImagesAndLabels(Dataset).__getitem__(self, index) calls this function with auto=False (training path); the inference data loaders call it with auto=True to get the minimal padding described above.
def letterbox(img, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):
    # Resize and pad image while meeting stride-multiple constraints
    shape = img.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better test mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, stride), np.mod(dh, stride)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (new_shape[1], new_shape[0])
        ratio = new_shape[1] / shape[1], new_shape[0] / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return img, ratio, (dw, dh)
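A minimal usage sketch of the letterbox function above (the input array is a stand-in for an image read with cv2.imread):

import numpy as np

img = np.zeros((720, 1280, 3), dtype=np.uint8)   # stand-in for a real image
# inference path: auto=True keeps only the minimal, stride-aligned padding
img_infer, ratio, (dw, dh) = letterbox(img, new_shape=640, auto=True)
# training path: auto=False pads all the way up to the full 640x640 square
img_train, _, _ = letterbox(img, new_shape=640, auto=False)
print(img_infer.shape, img_train.shape)  # (384, 640, 3) vs (640, 640, 3)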
Backbone
Focus structure
Focus structure: the main idea of this structure is to slice the input image. Four interleaved sub-images are taken with a slice operation and concatenated along the channel dimension (e.g. a 640x640x3 input becomes 320x320x12) before the first convolution.
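A minimal sketch of that slice operation (a simplified stand-in for the Focus module in models/common.py, which uses the repo's Conv block with BN and SiLU instead of a bare nn.Conv2d):

import torch
import torch.nn as nn

class FocusSketch(nn.Module):
    # Slice the input into 4 interleaved sub-images, concat on channels, then convolve
    def __init__(self, c1=3, c2=64, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c1 * 4, c2, k, 1, k // 2)  # the real Focus uses Conv (= conv + BN + SiLU)

    def forward(self, x):  # (b, c, h, w) -> (b, 4c, h/2, w/2) -> (b, c2, h/2, w/2)
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)
        return self.conv(x)

print(FocusSketch()(torch.zeros(1, 3, 640, 640)).shape)  # torch.Size([1, 64, 320, 320])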
CSP structure
The CSP structure borrows the design idea of CSPNet. Yolov5 designs two kinds of CSP structures; taking the Yolov5s network as an example, the CSP1_X structure is applied in the Backbone, while the other, CSP2_X, is applied in the Neck.
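A rough sketch of the CSP idea (not the exact C3/BottleneckCSP code in models/common.py): the input is split into two 1x1-convolution branches, one branch runs through a stack of bottleneck blocks, the branches are concatenated, and a final 1x1 convolution fuses them. With residual shortcuts in the bottlenecks this roughly corresponds to the CSP1_X used in the Backbone, and without them to the CSP2_X used in the Neck.

import torch
import torch.nn as nn

class BottleneckSketch(nn.Module):
    def __init__(self, c, shortcut=True):
        super().__init__()
        self.cv1 = nn.Conv2d(c, c, 1)
        self.cv2 = nn.Conv2d(c, c, 3, 1, 1)
        self.add = shortcut

    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return x + y if self.add else y

class CSPSketch(nn.Module):
    # simplified CSP block: two 1x1 branches, n bottlenecks on one branch, concat, fuse
    def __init__(self, c1, c2, n=1, shortcut=True):
        super().__init__()
        c_ = c2 // 2  # hidden channels
        self.cv1 = nn.Conv2d(c1, c_, 1)
        self.cv2 = nn.Conv2d(c1, c_, 1)
        self.m = nn.Sequential(*(BottleneckSketch(c_, shortcut) for _ in range(n)))
        self.cv3 = nn.Conv2d(2 * c_, c2, 1)

    def forward(self, x):
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))

print(CSPSketch(64, 128, n=3)(torch.zeros(1, 64, 80, 80)).shape)  # torch.Size([1, 128, 80, 80])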
Neck
The current Neck of Yolov5 is the same as in Yolov4: both use the FPN+PAN structure.
It also adopts the CSP2 structure, borrowed from the CSPNet design, to strengthen the network's feature-fusion ability.
FPN+PAN: FPN, the feature pyramid network, builds a pyramid on top of the feature maps and handles the scale problem in object detection better. PAN borrows an innovation from the PANet algorithm in image segmentation; it is a bottom-up structure, adding two extra PAN stages on top of the FPN.
Output
1) Bounding box loss function
Yolov5 uses GIOU_Loss as the bounding box loss function.
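For reference, GIoU extends IoU with a penalty based on the smallest enclosing box: GIoU = IoU - (C - union) / C, where C is the area of the smallest box enclosing both boxes, and the loss is 1 - GIoU. A minimal sketch (a simplified stand-in, not the bbox_iou implementation in utils/general.py):

def giou(box1, box2):
    # boxes in xyxy format
    iw = max(0.0, min(box1[2], box2[2]) - max(box1[0], box2[0]))
    ih = max(0.0, min(box1[3], box2[3]) - max(box1[1], box2[1]))
    inter = iw * ih
    a1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    a2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union = a1 + a2 - inter
    iou = inter / union
    # smallest enclosing box
    cw = max(box1[2], box2[2]) - min(box1[0], box2[0])
    ch = max(box1[3], box2[3]) - min(box1[1], box2[1])
    c_area = cw * ch
    return iou - (c_area - union) / c_area

pred, gt = [0, 0, 100, 100], [50, 50, 150, 150]
g = giou(pred, gt)
print(g, 1 - g)  # GIoU value and the corresponding GIoU loss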
2) NMS (non-maximum suppression)
To filter the many candidate boxes, an NMS step is usually required. Yolov4 uses DIOU_nms on top of its DIOU_Loss, while Yolov5 still uses weighted NMS.
out = non_max_suppression(out, conf_thres, iou_thres, labels=lb, multi_label=True, agnostic=single_cls)
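As background for that call, here is a minimal sketch of plain greedy IoU-based NMS; the real non_max_suppression in utils/general.py additionally handles the confidence threshold, multi-label boxes, per-class offsets, and the weighted-box merging mentioned above.

import numpy as np

def nms_sketch(boxes, scores, iou_thres=0.45):
    # boxes: (n, 4) xyxy, scores: (n,); returns indices of kept boxes
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = boxes[order[1:]]
        xx1 = np.maximum(boxes[i, 0], rest[:, 0])
        yy1 = np.maximum(boxes[i, 1], rest[:, 1])
        xx2 = np.minimum(boxes[i, 2], rest[:, 2])
        yy2 = np.minimum(boxes[i, 3], rest[:, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thres]  # drop boxes overlapping the kept box too much
    return keep

boxes = np.array([[0, 0, 100, 100], [5, 5, 105, 105], [200, 200, 300, 300]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms_sketch(boxes, scores))  # [0, 2]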
Network structure
yolov5s model parameters
# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, C3, [1024, False]],  # 9
  ]
# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)
   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
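All four models share this layer list; yolov5s/m/l/x differ only in the depth_multiple and width_multiple values at the top of their yaml files, which parse_model uses to scale the number column and the channel arguments above. A rough sketch of that scaling rule, simplified from parse_model in models/yolo.py (the 0.33/0.50 values are the yolov5s multiples):

import math

def scale_depth(n, depth_multiple):
    # repeat count of C3 blocks: e.g. 9 -> 3 for yolov5s (depth_multiple=0.33)
    return max(round(n * depth_multiple), 1) if n > 1 else n

def scale_width(c, width_multiple, divisor=8):
    # output channels, rounded up to a multiple of 8: e.g. 1024 -> 512 for yolov5s
    return math.ceil(c * width_multiple / divisor) * divisor

print(scale_depth(9, 0.33), scale_width(1024, 0.50))  # 3 512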
# Core model
class Model(nn.Module):
    def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None):  # model, input channels, number of classes
        super(Model, self).__init__()
        if isinstance(cfg, dict):
            self.yaml = cfg  # model dict
        else:  # is *.yaml
            import yaml  # for torch hub
            self.yaml_file = Path(cfg).name
            with open(cfg) as f:
                self.yaml = yaml.safe_load(f)  # model dict

        # Define model
        ch = self.yaml['ch'] = self.yaml.get('ch', ch)  # input channels
        if nc and nc != self.yaml['nc']:
            logger.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
            self.yaml['nc'] = nc  # override yaml value
        if anchors:
            logger.info(f'Overriding model.yaml anchors with anchors={anchors}')
            self.yaml['anchors'] = round(anchors)  # override yaml value
        self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch])  # model, savelist (parse_model builds and logs every layer from the yaml)
        self.names = [str(i) for i in range(self.yaml['nc'])]  # default names
        self.inplace = self.yaml.get('inplace', True)
        # logger.info([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])

        # Build strides, anchors
        m = self.model[-1]  # Detect()
        if isinstance(m, Detect):
            s = 256  # 2x min stride
            m.inplace = self.inplace
            m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))])  # forward
            m.anchors /= m.stride.view(-1, 1, 1)
            check_anchor_order(m)
            self.stride = m.stride
            self._initialize_biases()  # only run once
            # logger.info('Strides: %s' % m.stride.tolist())

        # Init weights, biases (initialize_weights mainly sets nn.BatchNorm2d defaults)
        initialize_weights(self)
        self.info()
        logger.info('')
    def forward(self, x, augment=False, profile=False):
        if augment:
            return self.forward_augment(x)  # augmented inference, None
        else:
            return self.forward_once(x, profile)  # single-scale inference, train

    def forward_augment(self, x):
        img_size = x.shape[-2:]  # height, width
        s = [1, 0.83, 0.67]  # scales
        f = [None, 3, None]  # flips (2-ud, 3-lr)
        y = []  # outputs
        for si, fi in zip(s, f):
            xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
            yi = self.forward_once(xi)[0]  # forward
            # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1])  # save
            yi = self._descale_pred(yi, fi, si, img_size)
            y.append(yi)
        return torch.cat(y, 1), None  # augmented inference, train
    def forward_once(self, x, profile=False):
        # Core of the model: walk through the parsed module list and build up the computation graph
        y, dt = [], []  # outputs
        for m in self.model:
            # iterate over the different modules
            if m.f != -1:  # if not from previous layer
                x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers

            if profile:
                o = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 if thop else 0  # FLOPs
                t = time_synchronized()
                for _ in range(10):
                    _ = m(x)
                dt.append((time_synchronized() - t) * 100)
                if m == self.model[0]:
                    logger.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} {'module'}")
                logger.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}')

            x = m(x)  # run
            y.append(x if m.i in self.save else None)  # save output
            # each module's output is stored in the list y; m.f determines which saved outputs feed later layers

        if profile:
            logger.info('%.1fms total' % sum(dt))
        return x
    def _descale_pred(self, p, flips, scale, img_size):
        # de-scale predictions following augmented inference (inverse operation)
        if self.inplace:
            p[..., :4] /= scale  # de-scale
            if flips == 2:
                p[..., 1] = img_size[0] - p[..., 1]  # de-flip ud
            elif flips == 3:
                p[..., 0] = img_size[1] - p[..., 0]  # de-flip lr
        else:
            x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale  # de-scale
            if flips == 2:
                y = img_size[0] - y  # de-flip ud
            elif flips == 3:
                x = img_size[1] - x  # de-flip lr
            p = torch.cat((x, y, wh, p[..., 4:]), -1)
        return p
    def _initialize_biases(self, cf=None):  # initialize biases into Detect(), cf is class frequency
        # https://arxiv.org/abs/1708.02002 section 3.3
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
        m = self.model[-1]  # Detect() module
        for mi, s in zip(m.m, m.stride):  # from
            b = mi.bias.view(m.na, -1)  # conv.bias(255) to (3,85)
            b.data[:, 4] += math.log(8 / (640 / s) ** 2)  # obj (8 objects per 640 image)
            b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum())  # cls
            mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)

    def _print_biases(self):
        m = self.model[-1]  # Detect() module
        for mi in m.m:  # from
            b = mi.bias.detach().view(m.na, -1).T  # conv.bias(255) to (3,85)
            logger.info(
                ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))

    # def _print_weights(self):
    #     for m in self.model.modules():
    #         if type(m) is Bottleneck:
    #             logger.info('%10.3g' % (m.w.detach().sigmoid() * 2))  # shortcut weights
    def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers
        logger.info('Fusing layers... ')
        for m in self.model.modules():
            if type(m) is Conv and hasattr(m, 'bn'):
                m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
                delattr(m, 'bn')  # remove batchnorm
                m.forward = m.fuseforward  # update forward
        self.info()
        return self

    def nms(self, mode=True):  # add or remove NMS module
        present = type(self.model[-1]) is NMS  # last layer is NMS
        if mode and not present:
            logger.info('Adding NMS... ')
            m = NMS()  # module
            m.f = -1  # from
            m.i = self.model[-1].i + 1  # index
            self.model.add_module(name='%s' % m.i, module=m)  # add
            self.eval()
        elif not mode and present:
            logger.info('Removing NMS... ')
            self.model = self.model[:-1]  # remove
        return self

    def autoshape(self):  # add AutoShape module
        logger.info('Adding AutoShape... ')
        m = AutoShape(self)  # wrap model
        copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=())  # copy attributes
        return m

    def info(self, verbose=False, img_size=640):  # print model information
        model_info(self, verbose, img_size)
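A minimal usage sketch of this class, assuming the surrounding imports of models/yolo.py (parse_model, Detect, logger and so on) are available and 'yolov5s.yaml' is the config shown above:

import torch

# build the model from the yaml config and run a dummy forward pass
model = Model(cfg='yolov5s.yaml', ch=3, nc=80)
model.eval()
with torch.no_grad():
    pred = model(torch.zeros(1, 3, 640, 640))  # in eval mode Detect() returns (inference output, raw per-scale outputs)
print(model.stride)  # tensor([ 8., 16., 32.])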
This write-up drew on the posts of the following two authors.