2020 Systems Integration Practice: Practice Assignment 7 (Group 5)

(1) Installing the OpenCV Library on the Raspberry Pi

Installing OpenCV 4.1.2 on a Raspberry Pi 4B

Installing OpenCV can be very time-consuming and requires many dependencies and prerequisites.

① Expand the filesystem

If you are working from a fresh Raspbian Stretch install, first expand the filesystem so that it includes all available space on the micro-SD card:

sudo raspi-config

Then select the "Advanced Options" menu item.

Next, select "Expand Filesystem".

Press Enter to confirm.

Select Finish.

Then reboot the Pi:

sudo reboot 

After rebooting, the filesystem should have been expanded to include all available space on the micro-SD card. You can verify that the disk was expanded by checking the output of:

df -h

The Raspbian filesystem has been expanded to cover the full 16GB micro-SD card.

However, even with the filesystem expanded, 43% of the 16GB card was already in use.

An easy win is to remove LibreOffice and the Wolfram Engine to free up some space on the Pi:

sudo apt-get purge wolfram-engine
sudo apt-get purge libreoffice*
sudo apt-get clean
sudo apt-get autoremove

Removing the Wolfram Engine and LibreOffice reclaims nearly 1GB!

② Install dependencies

# Update and upgrade any existing packages
sudo apt-get update && sudo apt-get upgrade
# Install developer tools, including CMake, which helps configure the OpenCV build
sudo apt-get install build-essential cmake pkg-config
# Image I/O packages that let us load various image file formats from disk (JPEG, PNG, TIFF, etc.)
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
# Video I/O packages. These libraries let us read various video file formats from disk and work with video streams directly
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
# OpenCV ships with a submodule named highgui, used to display images on screen and build basic GUIs. Compiling highgui requires the GTK development libraries
sudo apt-get install libgtk2.0-dev libgtk-3-dev
# Many OpenCV operations (namely matrix operations) can be further optimized with a few extra dependencies
sudo apt-get install libatlas-base-dev gfortran
# Install the Python 2.7 and Python 3 headers so we can compile OpenCV with Python bindings
sudo apt-get install python2.7-dev python3-dev

On a freshly installed OS, these versions of Python may already be up to date (the terminal output will show this).

③ Download the OpenCV source code

Now that the dependencies are installed, grab the 4.1.2 archive of OpenCV from the official OpenCV repository:

cd ~
wget -O opencv.zip https://github.com/Itseez/opencv/archive/4.1.2.zip
unzip opencv.zip

We want the full install of OpenCV (for example, with access to features such as SIFT and SURF), so we also need the opencv_contrib repository:

wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/4.1.2.zip
unzip opencv_contrib.zip

Note: make sure the opencv and opencv_contrib versions are identical.

If the version numbers do not match, you may run into compile-time or runtime errors.

④ Python 2.7 or Python 3

Before we start compiling OpenCV on the Raspberry Pi, we first need to install pip, the Python package manager:

wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo python3 get-pip.py

You may get a message saying that pip is already up to date when running these commands, but it is best not to skip this step.

Next, install virtualenv and virtualenvwrapper.

First, it is important to understand that a virtual environment is a special tool for keeping the dependencies required by different projects separate, by creating an isolated Python environment for each of them.

In short, it solves the "Project X depends on version 1.x, but Project Y needs 4.x" dilemma.

Install the Python virtual environment tools:

sudo pip install virtualenv virtualenvwrapper
sudo rm -rf ~/.cache/pip 

Configure ~/.profile by appending:

# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh

Apply the changes:

source ~/.profile

Create a virtual environment with Python 3:

mkvirtualenv cv -p python3

Once the virtual environment is created, all subsequent steps take place inside it. As the tutorial stresses, always check whether the prompt is prefixed with (cv); that is how you tell whether you are inside the virtual environment!
To re-enter the environment later, run:

source ~/.profile
workon cv

A reminder once more: all subsequent steps happen inside the virtual environment.
Install numpy:

pip install numpy
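Besides watching for the (cv) prefix in the prompt, you can confirm from inside Python which interpreter is active, a useful sanity check before committing to the long build (a minimal sketch; exact paths depend on your setup):

import sys

# Inside the cv environment both values should point under ~/.virtualenvs/cv;
# a /usr/bin path means `workon cv` was not run
print(sys.executable)
print(sys.prefix)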

⑤ Compile OpenCV

cd ~/opencv-4.1.2/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-4.1.2/modules \
    -D BUILD_EXAMPLES=ON ..

Configure the swap size before compiling

Before starting the build, increase the swap space. This lets OpenCV compile using all four cores of the Raspberry Pi without the build hanging due to memory exhaustion.

Increase the swap by setting CONF_SWAPSIZE=1024:

# sudo is needed to edit this file, even from inside the virtual environment
sudo nano /etc/dphys-swapfile  
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start
# Start the build (with luck it finishes in an hour or so; with bad luck and many pitfalls it can eat a whole day...)
# With the enlarged swap you can also try make -j4 to compile on all four cores
make

The build was time-consuming and full of twists and turns: we hit quite a few pitfalls (lots of ERRORs) and re-flashed the backup system several times (nearly lost our minds QAQ), but everything was solved one by one. Details of the problems and their solutions are in section (5).

⑥ Install OpenCV

sudo make install
sudo ldconfig

Check where OpenCV was installed, then link the cv2 bindings into the virtual environment:

ls -l /usr/local/lib/python3.7/site-packages/
cd ~/.virtualenvs/cv/lib/python3.7/site-packages/
ln -s /usr/local/lib/python3.7/site-packages/cv2 cv2

Verify the installation:

source ~/.profile 
workon cv
python
import cv2
cv2.__version__
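To double-check that the symlinked module really resolves from inside the virtual environment (and not from some other copy), a small sketch:

import cv2

print(cv2.__version__)  # expected: 4.1.2
print(cv2.__file__)     # should resolve through the cv virtualenv's site-packages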

For more details on compiling and installing OpenCV, see the reference tutorial.

(2) Controlling the Raspberry Pi Camera with OpenCV and Python

Install picamera:

source ~/.profile 
workon cv 
pip install "picamera[array]"

Photo capture test

Using the sample code from the tutorial, we verified taking a photo from Python. Increasing the sleep time in the sample gives the camera more warm-up and exposure time, which noticeably improves the photo.

Sample code:

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
 
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
rawCapture = PiRGBArray(camera)
 
# allow the camera to warmup
# changed from 0.1 to 5 here
time.sleep(5)
 
# grab an image from the camera
camera.capture(rawCapture, format="bgr")
image = rawCapture.array
 
# display the image on screen and wait for a keypress
cv2.imshow("Image", image)
cv2.waitKey(0)
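If you are working over SSH without a display attached, cv2.imshow will fail; here is a hedged variant of the same capture that writes the photo to disk instead (the output filename is our own choice):

from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

camera = PiCamera()
rawCapture = PiRGBArray(camera)
time.sleep(5)  # long warm-up for better exposure, as above
camera.capture(rawCapture, format="bgr")
cv2.imwrite("capture.jpg", rawCapture.array)  # save instead of displaying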


(3) Face Recognition with the Raspberry Pi Camera

Install the dependency libraries dlib and face_recognition.

At the command line, enter:

source ~/.profile 
workon cv 
pip install dlib
pip install face_recognition
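A quick import check (still inside the cv environment) confirms both libraries built correctly before moving on; dlib in particular takes a long time to compile on the Pi:

import dlib
import face_recognition

print(dlib.__version__)
print(face_recognition.__version__)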


Change into the directory containing the images to be loaded and the Python code.


① facerec_on_raspberry_pi.py

The sample code is as follows:

# This is a demo of running face recognition on a Raspberry Pi.
# This program will print out the names of anyone it recognizes to the console.
# To run this, you need a Raspberry Pi 2 (or greater) with face_recognition and
# the picamera[array] module installed.
# You can follow this installation instructions to get your RPi set up:
# https://gist.github.com/ageitgey/1ac8dbe8572f3f533df6269dab35df65

import face_recognition
import picamera
import numpy as np

# Get a reference to the Raspberry Pi camera.
# If this fails, make sure you have a camera connected to the RPi and that you
# enabled your camera in raspi-config and rebooted first.
camera = picamera.PiCamera()
camera.resolution = (320, 240)
output = np.empty((240, 320, 3), dtype=np.uint8)

# Load a sample picture and learn how to recognize it.
print("Loading known face image(s)")
image = face_recognition.load_image_file("test.jpg")
known_face_encoding = face_recognition.face_encodings(image)[0]

# Initialize some variables
face_locations = []
face_encodings = []

while True:

    print("Capturing image.")
    # Grab a single frame of video from the RPi camera as a numpy array
    camera.capture(output, format="rgb")

    # Find all the faces and face encodings in the current frame of video
    face_locations = face_recognition.face_locations(output)

    print("Found {} faces in image.".format(len(face_locations)))
    face_encodings = face_recognition.face_encodings(output, face_locations)

    # Loop over each face found in the frame to see if it's someone we know.
    for face_encoding in face_encodings:

        # See if the face is a match for the known face(s)
        match = face_recognition.compare_faces([known_face_encoding], face_encoding)
        name = "<Unknown Person>"

        if match[0]:
            name = "Trump"
        print("I see someone named {}!".format(name))

At first the camera was not pointed at the photo; once the photo was moved into the camera's field of view, you can see that recognition succeeds.

  • test.jpg is loaded to extract and save the known-face encoding (a sketch for caching this encoding follows the list)

  • Trump.jpg is used to test whether Trump is accurately recognized

  • Kim Jong Eun.jpg is used to test whether an Unknown Person is accurately reported
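Recomputing the known-face encoding at every start is slow on the Pi. One optional optimization (not part of the original demo; the cache filename is our own choice) is to compute the encoding once and cache it with numpy:

import os
import numpy as np
import face_recognition

ENCODING_CACHE = "test_encoding.npy"  # hypothetical cache file

if os.path.exists(ENCODING_CACHE):
    # Reuse the 128-d encoding saved by a previous run
    known_face_encoding = np.load(ENCODING_CACHE)
else:
    image = face_recognition.load_image_file("test.jpg")
    known_face_encoding = face_recognition.face_encodings(image)[0]
    np.save(ENCODING_CACHE, known_face_encoding)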


Testing test2.jpg and checking whether Trump is accurately recognized:


When the photo is switched to Kim Jong Eun.jpg, one face is detected and it is reported as Unknown Person.


② facerec_from_webcam_faster.py

The sample code is as follows:

import face_recognition
import cv2
import numpy as np

# This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the
# other example, but it includes some basic performance tweaks to make things run a lot faster:
#   1. Process each video frame at 1/4 resolution (though still display it at full resolution)
#   2. Only detect faces in every other frame of video.

# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.

# Get a reference to webcam #0 (the default one)
video_capture = cv2.VideoCapture(0)

# Load a sample picture and learn how to recognize it.
Trump_image = face_recognition.load_image_file("Trump.jpg")
Trump_face_encoding = face_recognition.face_encodings(Trump_image)[0]

# Load a second sample picture and learn how to recognize it.
Kim_Jong_Eun_image = face_recognition.load_image_file("Kim_Jong_Eun.jpg")
Kim_Jong_Eun_face_encoding = face_recognition.face_encodings(Kim_Jong_Eun_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    Trump_face_encoding,
    Kim_Jong_Eun_face_encoding
]

known_face_names = [
    "Trump",
    "Kim_Jong_Eun"
]



# Initialize some variables
face_locations = []
face_encodings = []
face_names = []

process_this_frame = True

while True:

    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Resize frame of video to 1/4 size for faster face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_small_frame = small_frame[:, :, ::-1]

    # Only process every other frame of video to save time
    if process_this_frame:

        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # # If a match was found in known_face_encodings, just use the first one.
            # if True in matches:
            #     first_match_index = matches.index(True)
            #     name = known_face_names[first_match_index]
            # Or instead, use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]

            face_names.append(name)

    process_this_frame = not process_this_frame

    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
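One caveat worth noting: the small_frame[:, :, ::-1] slice above creates a negative-stride view, which some newer dlib/face_recognition releases reject with an "Unsupported image type" error. If that happens, an explicit conversion sidesteps it:

# Drop-in replacement for: rgb_small_frame = small_frame[:, :, ::-1]
rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)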

Recognition verified successfully.


Because no feature encoding was ever extracted and saved for the person in the image on the left, that face is labeled unknown, so recognition works as intended.


(4) Advanced Task: Integrating with Microservices

① Install Docker

Download the installation script:

curl -fsSL https://get.docker.com -o get-docker.sh


Run the installation script (using the Aliyun mirror):

sh get-docker.sh --mirror Aliyun

Check the Docker version (e.g., with docker --version) to verify the installation succeeded.

Add the user to the docker group:

sudo usermod -aG docker pi 

Log out and back in for the group change to take effect:

exit 
ssh pi@raspberrypi

After logging back in, docker commands no longer need to be prefixed with sudo.

② Build a custom OpenCV image

Pull the base image:

docker pull sixsq/opencv-python


Create and run a container:

docker run -it sixsq/opencv-python /bin/bash

Inside the container, use pip3 to install "picamera[array]", dlib, and face_recognition:

pip3 install "picamera[array]" 
pip3 install dlib 
pip3 install face_recognition
exit

Commit the container as a new image. The Dockerfile below assumes the committed image was tagged opencv1 (e.g., docker commit <container-id> opencv1).

Customize the image:

  • Dockerfile
FROM opencv1
RUN mkdir /myapp
WORKDIR /myapp
COPY myapp .

Build the image:

docker build -t opencv2 .


List the images to confirm the build (e.g., with docker images).

③ Run the container and execute facerec_on_raspberry_pi.py

docker run -it --device=/dev/vchiq --device=/dev/video0 --name myopencv opencv2
python3 facerec_on_raspberry_pi.py


④ Optional: run the step (3) sample facerec_from_webcam_faster.py inside the OpenCV Docker container

Install Xming on the Windows machine.

Enable X11 in the SSH configuration for the Raspberry Pi session.


Check the value of the DISPLAY environment variable:

printenv

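Before launching the full demo, it may be worth verifying that X11 forwarding actually works with a minimal display test (a sketch; the window title and image are arbitrary):

import cv2
import numpy as np

# A small black test image; if forwarding works, a window pops up in Xming
img = np.zeros((120, 320, 3), dtype=np.uint8)
cv2.putText(img, "X11 OK", (90, 70), cv2.FONT_HERSHEY_DUPLEX, 1.0, (255, 255, 255), 1)
cv2.imshow("X11 test", img)
cv2.waitKey(0)
cv2.destroyAllWindows()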

Write run.sh:

#sudo apt-get install x11-xserver-utils
xhost +
docker run -it \
        --net=host \
        -v $HOME/.Xauthority:/root/.Xauthority \
        -e DISPLAY=:10.0  \
        -e QT_X11_NO_MITSHM=1 \
        --device=/dev/vchiq \
        --device=/dev/video0 \
        --name facerecgui \
        opencv2 \
        python3 facerec_from_webcam_faster.py

Open a terminal and run run.sh:

sh run.sh

You can see that the X-forwarded video window on Windows recognizes faces correctly.


Reference tutorial

(5) Problems Encountered and Solutions

Problem 1: Environment /Users/myuser/.virtualenvs/iron does not contain an activation script

While creating the Python 3 virtual environment with mkvirtualenv, an installation error occurred (the exact messages are quoted in the references below).

Uninstall virtualenv and virtualenvwrapper:

sudo pip3 uninstall virtualenv virtualenvwrapper

The installed virtualenvwrapper was version 5.0.0, but checking PyPI showed the release there was still 4.8.4.

Reinstall both, pinning the version:

sudo pip3 install virtualenv virtualenvwrapper=='4.8.4'

Then source ~/.bashrc, to which the following settings were appended:

VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
export PATH=/usr/local/bin:$PATH
export WORKON_HOME=~/.virtualenvs
source /usr/local/bin/virtualenvwrapper.sh

Re-create the virtual environment with Python 3:

mkvirtualenv cv -p python3

Reference tutorials:

Error: Environment /Users/myuser/.virtualenvs/iron does not contain activation script

ERROR: Environment '/home/pi/.virtualenvs/cv' does not contain an activate script

Problem 2: fatal error: features2d/test/test_detectors_regression.impl.hpp: No such file or directory

The header include paths are wrong. The fix is as follows:

Copy the following files from opencv-4.1.2/modules/features2d/test/:

  • test_descriptors_regression.impl.hpp
  • test_detectors_regression.impl.hpp
  • test_detectors_invariance.impl.hpp
  • test_descriptors_invariance.impl.hpp
  • test_invariance_utils.hpp

into opencv_contrib-4.1.2/modules/xfeatures2d/test/.

Also, in opencv_contrib-4.1.2/modules/xfeatures2d/test/test_features2d.cpp, change

#include "features2d/test/test_detectors_regression.impl.hpp"
#include "features2d/test/test_descriptors_regression.impl.hpp"

to:

#include "test_detectors_regression.impl.hpp"
#include "test_descriptors_regression.impl.hpp"

Similar problems may come up later in the build; handle them the same way, as in the sketch below.
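Since the same class of missing-header error can recur, here is a small sketch that automates the copy with shutil (assuming both source trees sit under the home directory, as in this walkthrough):

import shutil
from pathlib import Path

src = Path.home() / "opencv-4.1.2/modules/features2d/test"
dst = Path.home() / "opencv_contrib-4.1.2/modules/xfeatures2d/test"

# Copy the shared test helper headers into the contrib test directory
for pattern in ("*.impl.hpp", "test_invariance_utils.hpp"):
    for f in src.glob(pattern):
        shutil.copy(f, dst)
        print("copied", f.name)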

Reference blog:

Problems encountered compiling opencv4.1.0-gpu with contrib on Ubuntu 16.04

Problem 3: error: 'CODEC_FLAG_GLOBAL_HEADER' was not declared in this scope

Solution:

Add the following at the very top of /home/pi/opencv-4.1.2/modules/videoio/src/cap_ffmpeg_impl.hpp:

#define AV_CODEC_FLAG_GLOBAL_HEADER (1 << 22) 
#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER 
#define AVFMT_RAWPICTURE 0x0020 

Reference blog:

Ubuntu source build of opencv320 reports error: 'CODEC_FLAG_GLOBAL_HEADER' was not declared in this scope

Problem 4: fatal error: boostdesc_bgm.i: No such file or directory (#include "boostdesc_bgm.i")

Solution:

The following files are required:

  • boostdesc_bgm.i

  • boostdesc_bgm_bi.i

  • boostdesc_bgm_hd.i

  • boostdesc_lbgm.i

  • boostdesc_binboost_064.i

  • boostdesc_binboost_128.i

  • boostdesc_binboost_256.i

  • vgg_generated_120.i

  • vgg_generated_64.i

  • vgg_generated_80.i

  • vgg_generated_48.i

Download the archive from the web (extraction code: p50x), copy it into the opencv_contrib/modules/xfeatures2d/src/ directory, and extract it there. A quick completeness check is sketched below.
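Before re-running make, it may help to verify that all eleven files actually landed in the right place; a minimal check, assuming the source tree sits under the home directory:

from pathlib import Path

src_dir = Path.home() / "opencv_contrib-4.1.2/modules/xfeatures2d/src"
required = [
    "boostdesc_bgm.i", "boostdesc_bgm_bi.i", "boostdesc_bgm_hd.i",
    "boostdesc_lbgm.i", "boostdesc_binboost_064.i", "boostdesc_binboost_128.i",
    "boostdesc_binboost_256.i", "vgg_generated_120.i", "vgg_generated_64.i",
    "vgg_generated_80.i", "vgg_generated_48.i",
]
missing = [name for name in required if not (src_dir / name).exists()]
print("missing:", missing or "none")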

Reference blog:

Solution to the missing boostdesc_bgm.i file when installing OpenCV

(6) Division of Work and Summary

Group members

Student ID   Name          Responsibilities
071703428    Ye Mengqing   Research, hands-on work, and blog writing
031702444    Li Shangjia   Hands-on work, troubleshooting, and blog writing
181700134    Song Juan     Research and providing code

Online collaboration

① Used TeamViewer to remotely control the computer of the teammate who has the Raspberry Pi, for hands-on operation

② Used QQ voice calls and file transfer to share blog materials and communicate


③ Experiment summary

In this experiment we learned how to control the Raspberry Pi camera with OpenCV and Python, implemented face recognition with the Pi camera, and got face recognition running inside an OpenCV Docker container. Overall there was no problem we could not solve. The hard part was building the OpenCV library: we stepped in pitfalls big and small, re-flashed the environment several times (QAQ), and more than once the compile progress bar never reached 100%, which cost a great deal of time; altogether roughly 20 hours went into online collaboration, studying the theory, and hands-on practice. The other minor issues are all recorded above. This experiment showed us how genuinely useful and convenient the Raspberry Pi is, and we look forward to building something creative and fun next time!

