Summer Practice (3): Head Pose Estimation with PaddleHub


I. Foreword

Head pose estimation (built on face_landmark_localization, the face keypoint detection model published by PaddleHub and converted from https://github.com/lsy17096535/face-landmark) takes a 2D image and estimates three head-pose parameters: Pitch (nodding), Yaw (shaking the head) and Roll (tilting the head), so that the machine can interpret the pose of the person in the picture. (Pitch is negative looking up and positive looking down; Yaw is positive turning left and negative turning right; Roll is negative tilting left and positive tilting right; the values reported here are in degrees.)

II. Basic Approach

The face keypoints detected in the image are put in correspondence with a 3D face model; from the transformation between the 2D and 3D coordinates we solve for the Euler angles, which are the desired parameters.
In detail: given the 3D positions of the reference points in the world coordinate system, the 2D face keypoint positions returned by face_landmark_localization, and the camera parameters, we compute the rotation and translation (this practice assumes a camera without distortion). OpenCV provides cv2.solvePnP(), which directly recovers the extrinsics of the picture: rotation_vector (the rotation vector) and translation_vector (the translation vector). cv2.Rodrigues() converts the rotation vector into a rotation matrix; concatenating the rotation matrix with the translation vector gives a projection matrix, which cv2.decomposeProjectionMatrix() decomposes into cameraMatrix, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ and eulerAngles. To obtain the Euler angles we therefore only need the 7th return value. Finally, the parameters are drawn on the image.
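To make this chain of calls concrete, here is a minimal, self-contained sketch that is independent of PaddleHub: it projects a set of 3D points with a known pose and then recovers the Euler angles through solvePnP, Rodrigues and decomposeProjectionMatrix. All the numbers in it (the cube corners, the 640x480 intrinsics, the 10-degree ground-truth rotation) are made up for illustration; the printed angles should come back as roughly the ground truth, in degrees.

import cv2
import numpy as np

# Hypothetical 3D reference points: the 8 corners of a unit cube
object_points = np.array([[x, y, z] for x in (0.0, 1.0)
                                    for y in (0.0, 1.0)
                                    for z in (0.0, 1.0)], dtype=np.float64)

# Assumed pinhole intrinsics for a 640x480 image, no distortion
camera_matrix = np.array([[640.0, 0.0, 320.0],
                          [0.0, 640.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros((4, 1))

# Ground-truth pose: 10 degrees of rotation about x, translated 10 units along z
rvec_true = np.array([[np.deg2rad(10.0)], [0.0], [0.0]])
tvec_true = np.array([[0.0], [0.0], [10.0]])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true,
                                    camera_matrix, dist_coeffs)

# Recover the extrinsics from the 3D/2D correspondences
ok, rotation_vector, translation_vector = cv2.solvePnP(
    object_points, image_points, camera_matrix, dist_coeffs)

# Rotation vector -> rotation matrix -> [R | t] -> decompose -> Euler angles
rotation_matrix = cv2.Rodrigues(rotation_vector)[0]
projection_matrix = np.hstack((rotation_matrix, translation_vector))
euler_angles = cv2.decomposeProjectionMatrix(projection_matrix)[6]  # 7th output
print(euler_angles)  # approximately [[10.], [0.], [0.]]: degrees about x, y, z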

III. Experiment

1. Import the required modules and packages

import cv2
import numpy as np
import paddlehub as hub

2. Load the face keypoint detection model, define the 3D coordinates of the head keypoints, and define the 3D corner points of the box that will be projected onto the image

class HeadPost(object):
    def __init__(self):
        self.module = hub.Module(name="face_landmark_localization")
        # 3D coordinates of the head keypoints in the model (world) coordinate system
        self.model_points = np.array([
            [6.825897, 6.760612, 4.402142],
            [1.330353, 7.122144, 6.903745],
            [-1.330353, 7.122144, 6.903745],
            [-6.825897, 6.760612, 4.402142],
            [5.311432, 5.485328, 3.987654],
            [1.789930, 5.393625, 4.413414],
            [-1.789930, 5.393625, 4.413414],
            [-5.311432, 5.485328, 3.987654],
            [2.005628, 1.409845, 6.165652],
            [-2.005628, 1.409845, 6.165652],
            [2.774015, -2.080775, 5.048531],
            [-2.774015, -2.080775, 5.048531],
            [0.000000, -3.116408, 6.097667],
            [0.000000, -7.415691, 4.070434]
        ], dtype='float')
        # 3D corner points of the box that will be reprojected onto the image
        self.reprojectsrc = np.float32([
            [10.0, 10.0, 10.0],
            [10.0, -10.0, 10.0],
            [-10.0, 10.0, 10.0],
            [-10.0, -10.0, 10.0]])
        # pairs of corner indices to connect when drawing the box
        self.line_pairs = [
            [0, 2], [1, 3], [0, 1], [2, 3]]

3. Extract the point coordinates needed for pose estimation from the face_landmark_localization detection result

def get_image_points(self, face_landmark):
    # Select 14 of the detected landmarks, in the same order as self.model_points
    image_points = np.array([
        face_landmark[17], face_landmark[21],
        face_landmark[22], face_landmark[26],
        face_landmark[36], face_landmark[39],
        face_landmark[42], face_landmark[45],
        face_landmark[31], face_landmark[35],
        face_landmark[48], face_landmark[54],
        face_landmark[57], face_landmark[8]
    ], dtype='float')
    return image_points

4. Obtain the rotation vector and the translation vector

def get_pose_vector(self, image_points):
    # Camera intrinsics: approximate the focal length by the image width and
    # put the principal point at the image centre
    # (photo_size is the image shape, i.e. (height, width, channels))
    center = (self.photo_size[1] / 2, self.photo_size[0] / 2)
    focal_length = self.photo_size[1]
    camera_matrix = np.array([
        [focal_length, 0, center[0]],
        [0, focal_length, center[1]],
        [0, 0, 1]],
        dtype="float")
    # distortion coefficients (assume no lens distortion)
    dist_coeffs = np.zeros((4, 1))
    # cv2.solvePnP takes the matching 3D and 2D points plus the intrinsics
    # (camera_matrix, dist_coeffs) and recovers the extrinsics of the picture:
    # rotation_vector and translation_vector
    ret, rotation_vector, translation_vector = cv2.solvePnP(
        self.model_points, image_points, camera_matrix, dist_coeffs)
    # cv2.projectPoints maps the 3D box corners back onto the image using the
    # recovered pose, so the box can be drawn later
    reprojectdst, _ = cv2.projectPoints(
        self.reprojectsrc, rotation_vector, translation_vector,
        camera_matrix, dist_coeffs)
    return rotation_vector, translation_vector, camera_matrix, dist_coeffs, reprojectdst
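As a concrete illustration of this approximation (with a hypothetical 640x480 image, not a value from the original article): photo.shape is (height, width, channels), so the focal length is taken to be the image width and the principal point the image centre.

import numpy as np

photo_size = (480, 640, 3)                         # (height, width, channels)
focal_length = photo_size[1]                       # approximate f with the width: 640
center = (photo_size[1] / 2, photo_size[0] / 2)    # principal point: (320.0, 240.0)
camera_matrix = np.array([
    [focal_length, 0, center[0]],
    [0, focal_length, center[1]],
    [0, 0, 1]], dtype="float")
print(camera_matrix)   # fx = fy = 640 on the diagonal, principal point (320, 240)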

5. Compute the Euler angles

# Convert the rotation vector into Euler angles
def get_euler_angle(self, rotation_vector, translation_vector):
    # Rodrigues' formula converts between a rotation vector and a rotation matrix
    rvec_matrix = cv2.Rodrigues(rotation_vector)[0]
    # Stack [R | t] into a 3x4 projection matrix and decompose it;
    # the 7th output of decomposeProjectionMatrix is eulerAngles (in degrees)
    proj_matrix = np.hstack((rvec_matrix, translation_vector))
    euler_angles = cv2.decomposeProjectionMatrix(proj_matrix)[6]
    return euler_angles
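If radian-valued angles are preferred, a common alternative (my own sketch, not part of the original project) is to read the Euler angles straight off the rotation matrix with atan2, instead of going through decomposeProjectionMatrix, which reports degrees.

import math
import cv2
import numpy as np

def euler_from_rotation_vector(rotation_vector):
    """Pitch/yaw/roll in radians from a Rodrigues rotation vector (x-y-z convention)."""
    R = cv2.Rodrigues(np.asarray(rotation_vector, dtype=np.float64))[0]
    sy = math.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)   # assumes sy is not close to zero (no gimbal lock)
    pitch = math.atan2(R[2, 1], R[2, 2])          # rotation about x
    yaw = math.atan2(-R[2, 0], sy)                # rotation about y
    roll = math.atan2(R[1, 0], R[0, 0])           # rotation about z
    return pitch, yaw, roll

Note that the order of elementary rotations assumed here may differ from the convention used internally by decomposeProjectionMatrix, so the two methods only need to agree for small angles.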

6. Show the parameters and the projected box on the image

def pose_euler_angle(self, photo):
    self.photo_size = photo.shape
    # Detect the face landmarks with PaddleHub
    res = self.module.keypoint_detection(images=[photo], use_gpu=False)
    face_landmark = res[0]['data'][0]
    image_points = self.get_image_points(face_landmark)
    rotation_vector, translation_vector, camera_matrix, dist_coeffs, reprojectdst = self.get_pose_vector(image_points)
    euler_angle = self.get_euler_angle(rotation_vector, translation_vector)
    # Draw the projected box (convert the points to integer pixel coordinates)
    reprojectdst = tuple(map(tuple, reprojectdst.reshape(4, 2).astype(int).tolist()))
    for start, end in self.line_pairs:
        cv2.line(photo, reprojectdst[start], reprojectdst[end], (0, 0, 255))
    # Mark the 14 face keypoints
    for (x, y) in image_points:
        cv2.circle(photo, (int(x), int(y)), 2, (0, 0, 255), -1)
    # Show the three parameters
    cv2.putText(photo, "pitch: " + "{:5.2f}".format(euler_angle[0, 0]), (15, int(self.photo_size[0] / 2 - 30)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    cv2.putText(photo, "yaw: " + "{:6.2f}".format(euler_angle[1, 0]), (15, int(self.photo_size[0] / 2)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    cv2.putText(photo, "roll: " + "{:6.2f}".format(euler_angle[2, 0]), (15, int(self.photo_size[0] / 2 + 30)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    cv2.imshow('headpost', photo)
    cv2.waitKey(0)

7. Feed in an image

HeadPost().pose_euler_angle(photo=cv2.imread('hbi.jpg'))

IV. Problems Encountered

1. An error occurred on import paddlehub as hub; the traceback is reproduced below

File "D:/python代碼/pycharm代碼/頭部姿態估計.py", line 3, in <module>
    import paddlehub as hub
  File "C:\Users\86183\AppData\Roaming\Python\Python37\site-packages\paddlehub\__init__.py", line 12, in <module>
    from . import module
  File "C:\Users\86183\AppData\Roaming\Python\Python37\site-packages\paddlehub\module\__init__.py", line 16, in <module>
    from . import module
  File "C:\Users\86183\AppData\Roaming\Python\Python37\site-packages\paddlehub\module\module.py", line 31, in <module>
    from paddlehub.common import utils
  File "C:\Users\86183\AppData\Roaming\Python\Python37\site-packages\paddlehub\common\__init__.py", line 16, in <module>
    from . import utils
  File "C:\Users\86183\AppData\Roaming\Python\Python37\site-packages\paddlehub\common\utils.py", line 33, in <module>
    from paddlehub.common.logger import logger
  File "C:\Users\86183\AppData\Roaming\Python\Python37\site-packages\paddlehub\common\logger.py", line 155, in <module>
    logger = Logger()
  File "C:\Users\86183\AppData\Roaming\Python\Python37\site-packages\paddlehub\common\logger.py", line 67, in __init__
    level = json.load(fp).get("log_level", "DEBUG")
  File "D:\Anaconda\lib\json\__init__.py", line 296, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "D:\Anaconda\lib\json\__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "D:\Anaconda\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "D:\Anaconda\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Uninstalling and reinstalling PaddleHub did not fix it. I finally found the solution in the GitHub community:
copying the following content into .paddlehub/conf/config.json solved the problem.

{
    "server_url": [
        "http://paddlepaddle.org.cn/paddlehub"
    ],
    "resource_storage_server_url": "https://bj.bcebos.com/paddlehub-data/",
    "debug": false,
    "log_level": "DEBUG"
}

2. When showing the final result, only a blank window appeared. It turned out that cv2.imshow() returns immediately, so the image flashed by too quickly to see; adding cv2.waitKey(0) after it solved the problem (the argument 0 means wait indefinitely for a key press).
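A minimal illustration of this display pattern, using a blank test image rather than the project's output:

import cv2
import numpy as np

# a blank test image, just to demonstrate the imshow/waitKey pattern
img = np.zeros((200, 200, 3), dtype=np.uint8)
cv2.imshow('demo', img)
cv2.waitKey(0)             # 0 = block until a key is pressed, so the window stays open
cv2.destroyAllWindows()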

3. At first I used the 19 face keypoints from the head-pose nodding/shaking estimation project on AI Studio, but the Roll value it produced was abnormal, so I switched to the commonly used 14 face keypoints, which give a better result. The comparison pictures are shown below: 19 face keypoints (top) and 14 face keypoints (bottom). (Pitch is negative up and positive down, Yaw is positive left and negative right, Roll is negative left and positive right; values in degrees.)

V. Final Result

VI. Summary

Head pose estimation with PaddleHub was not an easy topic for me. After learning how the different coordinate systems are converted into one another, how the rotation and translation are recovered, and how the Euler angles are derived, I appreciate even more how important mathematics is for studying artificial intelligence. I am also still not fluent with the common packages and modules, so next I will study Python in more depth and try other projects with PaddleHub to keep improving.

References:
https://blog.csdn.net/cdknight_happy/article/details/79975060
https://zhuanlan.zhihu.com/p/82064640
https://www.sohu.com/a/278664242_100007727
https://aistudio.baidu.com/aistudio/projectdetail/673271

