Moving Object Detection (Pure Image Processing, No Neural Network Training)


  I happened upon a WeChat official account article on the design of a moving object detection system. It is an extremely simple, easy-to-implement form of object detection, because it requires neither training a neural network nor building a training set. The prerequisite is that the background must not change, so it is best suited to fixed-camera settings such as vehicle detection at an intersection or product inspection on a smart production line. The drawbacks are that some parameters have to be tuned for each new environment, and the contours it finds deviate slightly from the actual outlines.

  The overall implementation works roughly as follows. A video is first split into individual frames. Each pair of adjacent frames is converted to grayscale, and the two frames are subtracted to obtain the pixel-wise difference between them; the human eye may not notice these subtle differences, but after differencing, the image is binarized and then dilated, which reveals the rough outlines of the moving objects. Contours are then extracted, those whose position and area satisfy certain conditions are kept, and the regions enclosing these contours are drawn on the original frame. Performing this sequence of operations on every pair of adjacent frames and stitching the annotated images back into a video yields continuous detection of moving objects.
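As a minimal sketch of the frame-splitting step (the file name video.mp4 and the frames/ output directory are assumptions here; the full listing at the end reads its input frames from frames/):

import os
import cv2

# split the source video into individual frames; 'video.mp4' is a placeholder name
os.makedirs('frames', exist_ok=True)
cap = cv2.VideoCapture('video.mp4')

idx = 0
while True:
    ret, frame = cap.read()
    if not ret:  # no frames left
        break
    cv2.imwrite('frames/' + str(idx) + '.png', frame)
    idx += 1

cap.release()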

Take any two adjacent frames:

After grayscaling, difference the adjacent frames:
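For illustration, this step on one pair of frames might look like the sketch below; the two file names are placeholders, and the same cvtColor/absdiff calls appear in the full listing at the end.

import cv2

# read two adjacent frames (placeholder file names from the frames/ directory)
frameA = cv2.imread('frames/10.png')
frameB = cv2.imread('frames/11.png')

# convert both frames to grayscale
grayA = cv2.cvtColor(frameA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(frameB, cv2.COLOR_BGR2GRAY)

# absolute per-pixel difference: non-zero pixels mark where something moved
diff_image = cv2.absdiff(grayB, grayA)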

 

 

Binarization and dilation:
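Continuing the sketch from the previous step (it reuses diff_image), the difference image is binarized with a fixed threshold and then dilated so that fragmented difference pixels merge into solid blobs; the threshold of 30 and the 4x4 kernel are the same values used in the full listing.

import numpy as np
import cv2

# binarize: difference pixels above 30 become white (255), everything else black
ret, thresh = cv2.threshold(diff_image, 30, 255, cv2.THRESH_BINARY)

# dilate with a 4x4 kernel to join nearby blobs belonging to the same object
kernel = np.ones((4, 4), np.uint8)
dilated = cv2.dilate(thresh, kernel, iterations=1)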

 

Set up a detection zone so that a vehicle is detected only after it crosses the line.
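The detection zone amounts to coordinate and area checks on each contour's bounding box. A simplified sketch, continuing from dilated above (the 200/80 coordinates and the area threshold come from the full listing and would need retuning for a different scene):

import cv2

# extract contours from the dilated difference image
contours, hierarchy = cv2.findContours(dilated.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[-2:]

valid_cntrs = []
for cntr in contours:
    x, y, w, h = cv2.boundingRect(cntr)
    # keep contours inside the detection zone that are large enough to be a vehicle
    if x <= 200 and y >= 80 and cv2.contourArea(cntr) >= 25:
        valid_cntrs.append(cntr)

# these are the contours the full listing draws and counts on each frame
print(len(valid_cntrs))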



 

Overall result: the number at the top indicates how many vehicles have currently been detected after crossing the yellow line in the frame.

Summary: although frame differencing is simple to implement, it has its limitations. The camera and the background must stay relatively static with respect to each other; as soon as there is relative motion between the two, this approach no longer applies.

Code implementation:

import os
import re
import cv2 # opencv library
import numpy as np
from os.path import isfile, join
import matplotlib.pyplot as plt

# get file names of the frames
col_frames = os.listdir('frames/')

# sort file names
col_frames.sort(key=lambda f: int(re.sub(r'\D', '', f)))

# empty list to store the frames
col_images=[]

for i in col_frames:
    # read the frames
    img = cv2.imread('frames/'+i)
    # append the frames to the list
    col_images.append(img)



# kernel for image dilation
kernel = np.ones((4,4),np.uint8)

# font style
font = cv2.FONT_HERSHEY_SIMPLEX

# directory to save the output frames
pathIn = "contour_frames_3/"

for i in range(len(col_images)-1):

    # frame differencing
    grayA = cv2.cvtColor(col_images[i], cv2.COLOR_BGR2GRAY)
    grayB = cv2.cvtColor(col_images[i+1], cv2.COLOR_BGR2GRAY)
    diff_image = cv2.absdiff(grayB, grayA)

    # image thresholding
    ret, thresh = cv2.threshold(diff_image, 30, 255, cv2.THRESH_BINARY)

    # image dilation
    dilated = cv2.dilate(thresh,kernel,iterations = 1)

    # find contours ([-2:] keeps this compatible with both OpenCV 3.x and 4.x return values)
    contours, hierarchy = cv2.findContours(dilated.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[-2:]

    # shortlist contours appearing in the detection zone
    valid_cntrs = []
    for cntr in contours:
        x,y,w,h = cv2.boundingRect(cntr)
        if (x <= 200) & (y >= 80) & (cv2.contourArea(cntr) >= 25):
            if (y >= 90) & (cv2.contourArea(cntr) < 40):
                break
            valid_cntrs.append(cntr)

    # add contours to original frames
    dmy = col_images[i].copy()
    cv2.drawContours(dmy, valid_cntrs, -1, (127,200,0), 2)

    cv2.putText(dmy, "vehicles detected: " + str(len(valid_cntrs)), (55, 15), font, 0.6, (0, 180, 0), 2)
    cv2.line(dmy, (0, 80),(256,80),(100, 255, 255))
    #cv2.imshow("show",dmy)
    #cv2.waitKey(100)
    cv2.imwrite(pathIn+str(i)+'.png',dmy)

# specify video name
pathOut = 'vehicle_detection_v3.mp4'

# specify frames per second
fps = 14.0

frame_array = []
files = [f for f in os.listdir(pathIn) if isfile(join(pathIn, f))]
files.sort(key=lambda f: int(re.sub(r'\D', '', f)))

for i in range(len(files)):
    filename=pathIn + files[i]

    #read frames
    img = cv2.imread(filename)
    height, width, layers = img.shape
    size = (width,height)

    #inserting the frames into an image array
    frame_array.append(img)

# write the annotated frames to the output video (outside the reading loop)
out = cv2.VideoWriter(pathOut, cv2.VideoWriter_fourcc(*'DIVX'), fps, size)

for i in range(len(frame_array)):
    out.write(frame_array[i])

out.release()

 

