Using the STB Dataset




一. Dataset Overview

  • Data description

The STB dataset comes from this paper: A hand pose tracking benchmark from stereo matching.

Dataset contents:

Our stereo hand pose benchmark contains sequences with 6 different backgrounds and every background has two sequences with counting and random poses. Every sequence has 1500 frames, so there are totally 18000 frames in our benchmark. Stereo and depth images were captured from a Point Grey Bumblebee2 stereo camera and an Intel Real Sense F200 active depth camera simultaneously.

So there is one stereo pair (left/right RGB cameras) and one depth camera (color RGB + depth). The way papers typically use it:

STB is a real-world dataset containing 18,000 images with the ground truth of 21 3D hand joint locations and corresponding depth images. We split the dataset into 15,000 training samples and 3,000 test samples.

How exactly to split it is up to you.
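The 15,000/3,000 split is a convention rather than something the dataset fixes. As a minimal sketch (the `B1`...`B6` sequence-naming scheme and the choice of holding out the first background for testing are assumptions here, taken from common practice rather than from the dataset's own documentation):

```python
# Sketch of a common STB split: hold out one background (here B1) for testing.
# Sequence names and the held-out background are assumptions, not dataset facts.
backgrounds = [f"B{i}" for i in range(1, 7)]  # 6 backgrounds
sequences = [b + pose for b in backgrounds for pose in ("Counting", "Random")]

test_seqs = [s for s in sequences if s.startswith("B1")]
train_seqs = [s for s in sequences if not s.startswith("B1")]

frames_per_seq = 1500
print(len(train_seqs) * frames_per_seq)  # 15000 training frames
print(len(test_seqs) * frames_per_seq)   # 3000 test frames
```

With 12 sequences of 1,500 frames each, holding out the two sequences of one background yields exactly the 15,000/3,000 proportions quoted above.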

Many works use this dataset:

Learning to Estimate 3D Hand Pose from Single RGB Images

3D Hand Shape and Pose Estimation from a Single RGB Image

...and many more papers use this data. I have been working on 2D and 3D hand pose estimation recently and will cover this in detail in a future survey.

二. Using the Dataset

  • Stereo camera

Point Grey Bumblebee2 stereo camera:
base line = 120.054
fx = 822.79041
fy = 822.79041
tx = 318.47345
ty = 250.31296

  • Further explanation
    • Parameters provided by the dataset: camera intrinsics and the baseline.
    • Note 1: the intrinsics are the same for the left and right cameras, so a single set of parameters is used.
    • Note 2: the baseline is the distance between the two cameras, in mm. Since the hand is roughly 5 m away while the cameras are only 120 mm apart (0.1 ≪ 5), the rotation matrix between the two cameras is ignored here.
    • Plotting the resulting 2D points shows a noticeable deviation, possibly caused by ignoring the rotation and by annotation inaccuracies.
# The left and right cameras differ only by the baseline translation; the rotation is ignored
import numpy as np

fx = 822.79041
fy = 822.79041
tx = 318.47345
ty = 250.31296
base = 120.054

# Homogeneous [R|t] matrices make the multiplication convenient.
# dtype=float, otherwise assigning the fractional baseline below would be truncated to an int.
R_l = np.asarray([
  [1, 0, 0, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0]], dtype=float)
R_r = R_l.copy()
R_r[0, 3] = -base  # the baseline acts as the translation

# Intrinsic matrix
K = np.asarray([
  [fx, 0, tx],
  [0, fy, ty],
  [0, 0, 1]])

# World-frame points: a 4x21 matrix of homogeneous [x, y, z, 1] columns
points = XXX

# Translation + intrinsics
left_point = np.dot(np.dot(K, R_l), points)
right_point = np.dot(np.dot(K, R_r), points)

# Divide out the scale z; round before casting instead of truncating with np.uint
image_cood = left_point / left_point[-1, ...]
image_left = np.round(image_cood[:2, ...].T).astype(np.int32)
image_cood = right_point / right_point[-1, ...]
image_right = np.round(image_cood[:2, ...].T).astype(np.int32)
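Because the two cameras differ only by the baseline translation, the horizontal disparity between the left and right projections should equal fx * base / z. A minimal self-contained sanity check with a made-up 3D point (the point coordinates are arbitrary, not from the dataset):

```python
import numpy as np

fx, fy = 822.79041, 822.79041
tx, ty = 318.47345, 250.31296
base = 120.054

K = np.asarray([[fx, 0, tx], [0, fy, ty], [0, 0, 1]], dtype=float)
R_l = np.hstack([np.eye(3), np.zeros((3, 1))])  # left camera: identity pose
R_r = R_l.copy()
R_r[0, 3] = -base                               # right camera: shifted by the baseline

p = np.array([[50.0], [-30.0], [600.0], [1.0]])  # arbitrary point 600 mm in front

left = np.dot(np.dot(K, R_l), p)
right = np.dot(np.dot(K, R_r), p)
x_left = left[0, 0] / left[2, 0]
x_right = right[0, 0] / right[2, 0]

disparity = x_left - x_right
print(disparity, fx * base / 600.0)  # the two values should match
```

If the plotted 2D points were badly off even with this relation holding, that would point to annotation error or the ignored rotation rather than a projection bug.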
  • Depth camera

Intel Real Sense F200 active depth camera:
fx color = 607.92271
fy color = 607.88192
tx color = 314.78337
ty color = 236.42484
fx depth = 475.62768
fy depth = 474.77709
tx depth = 336.41179
ty depth = 238.77962
rotation vector = [0.00531 -0.01196 0.00301] (use Rodrigues' rotation formula to transform it into a rotation matrix)
translation vector = [-24.0381 -0.4563 -1.2326] (the rotation and translation vectors transform coordinates relative to the color camera into those relative to the depth camera)

  • Further explanation
    • Parameters provided by the dataset: color-camera intrinsics, depth-camera intrinsics, and one set of extrinsics.
    • Note 1: the rotation is given as a vector; convert it into a rotation matrix with Rodrigues' formula (see, e.g., "14 Lectures on Visual SLAM").
    • Note 2: officially the extrinsics are described as the color-to-depth transform. I tried both interpretations and both failed at first: 1) world frame = depth camera, transforming to the color camera; 2) world frame = color camera, transforming to the depth camera.
    • Note 3: following the GCNHand3D paper, the world frame is indeed the depth camera, and the extrinsics are, as officially stated, the color-to-depth transform. You can also think of it as a left-handed-coordinate convention, the opposite of the usual right-handed one. Right-handed: \(A=R*B\); left-handed: \(A=inv(R)*B\). The color-to-depth extrinsic is \(R\), so depth-to-color is \(inv(R)\); the left/right-handed analogy gives the same result.
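Rodrigues' formula itself is short enough to sketch directly in NumPy; this is a generic implementation of the same conversion that cv2.Rodrigues performs, not code shipped with the dataset:

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix via Rodrigues' formula."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)      # rotation angle = vector magnitude
    if theta < 1e-12:
        return np.eye(3)              # zero rotation
    k = rvec / theta                  # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])  # cross-product (skew-symmetric) matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * np.dot(K, K)

R = rodrigues([0.00531, -0.01196, 0.00301])
print(np.round(R, 5))
# R.T (= inv(R) for a rotation) is the matrix used for the depth -> color direction
```

The transpose of this `R`, with the translation negated, should reproduce (up to rounding) the hardcoded `matrix_extrinsic` used in the code below.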
import numpy as np

fx = [822.79041, 607.92271, 475.62768]
fy = [822.79041, 607.88192, 474.77709]
tx = [318.47345, 314.78337, 336.41179]
ty = [250.31296, 236.42484, 238.77962]

# 0: Point Grey Bumblebee2 stereo camera
# 1: Intel Real Sense F200 active depth camera COLOR
# 2: Intel Real Sense F200 active depth camera DEPTH
def intrinsic(index):
    # 3x4 intrinsic matrix so it multiplies homogeneous 4-vectors directly
    return np.asarray([[fx[index], 0, tx[index], 0],
                       [0, fy[index], ty[index], 0],
                       [0, 0, 1, 0]])

# matrix_R, _ = cv2.Rodrigues((0.00531, -0.01196, 0.00301))  # rotation matrix from the rotation vector
# matrix_R_inv = np.linalg.inv(matrix_R)
# matrix_extrinsic combines matrix_R_inv with the translation
# NOTE: the translation components must be negated
matrix_extrinsic = np.asarray([[0.99992395, 0.002904166, 0.01195165, +24.0381],
                               [-0.00304, 0.99998137, 0.00532784, +0.4563],
                               [-0.01196763, -0.00529184, 0.99991438, +1.2326],
                               [0, 0, 0, 1]])

# World-frame points (world frame = depth camera): a 4x21 matrix of homogeneous [x, y, z, 1] columns
points = XXX

# Projection into the depth image: depth intrinsics only
image_depth = np.dot(intrinsic(2), points)
# Projection into the color image: transform depth -> color first, then apply color intrinsics
image_color = np.dot(intrinsic(1), np.dot(matrix_extrinsic, points))

# 3D to 2D: pick one of the projections, then divide out z
image_cood = image_color  # or image_depth
image_cood = image_cood / image_cood[-1, ...]
image_cood = np.round(image_cood[:2, ...].T).astype(np.int32)
  • Caveats
  1. The STB dataset uses mm as its unit.
  2. The depth maps feel unreliable: distance = R + 255*G, yet in some images the hand pixels look the same as the background, and the noise is heavy. Few papers use STB for training and evaluation, and even fewer use its depth maps.
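The depth encoding in point 2 can be sketched as follows. The 2x2 array is a made-up stand-in for a real depth frame; an actual frame would come from `cv2.imread`, which loads channels in BGR order, so channel 2 is R and channel 1 is G:

```python
import numpy as np

# Made-up 2x2 stand-in for a depth frame (real frames: cv2.imread(path), BGR order)
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (0, 2, 90)  # B, G, R  ->  depth = 90 + 255*2 = 600 mm

# distance = R + 255*G, in mm
R_ch = frame[..., 2].astype(np.float32)
G_ch = frame[..., 1].astype(np.float32)
depth_mm = R_ch + 255.0 * G_ch

print(depth_mm[0, 0])  # 600.0
```

Casting to float before combining the channels avoids uint8 overflow; pixels left at zero decode to 0 mm, which is one reason hand and background pixels can be indistinguishable in noisy frames.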

三. References

icip17_stereo_hand_pose_dataset

Rotation direction

More references will be listed in the upcoming survey.

