An attempt at calibrating the Kinect 2.0 cameras
1. Color camera calibration
$z_{rgb}\,(u_{rgb}, v_{rgb}, 1)^{T} = W_{rgb}\,(x_{rgb}, y_{rgb}, z_{rgb})^{T}$
Calibration results after optimization (with uncertainties):
Focal Length:    fc = [ 1094.03583   1087.37528 ] +/- [ 55.02816   51.42175 ]
Principal point: cc = [ 942.00992   530.35240 ] +/- [ 13.00131   31.27892 ]
Skew:            alpha_c = [ 0.00000 ] +/- [ 0.00000 ]  => angle of pixel axes = 90.00000 +/- 0.00000 degrees
Distortion:      kc = [ 0.06857   -0.10542   0.00233   0.00092   0.00000 ] +/- [ 0.02206   0.02884   0.00379   0.00492   0.00000 ]
Pixel error:     err = [ 0.49343   0.67737 ]
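As a numerical sanity check of the projection equation above, the following minimal sketch (Python/NumPy) builds the intrinsic matrix from the optimized fc and cc values and projects a made-up 3D point. Lens distortion is ignored here, and the test point is purely illustrative.

```python
import numpy as np

# RGB intrinsics taken from the calibration output above (distortion ignored)
K_rgb = np.array([[1094.03583, 0.0, 942.00992],
                  [0.0, 1087.37528, 530.35240],
                  [0.0, 0.0, 1.0]])

def project(K, point_3d):
    """Project a 3D point (camera coordinates, metres) to pixel coordinates."""
    p = K @ point_3d      # homogeneous pixel coordinates, scaled by the depth z
    return p[:2] / p[2]   # divide by z to obtain (u, v)

# Hypothetical point 0.5 m in front of the camera, slightly off-axis
print(project(K_rgb, np.array([0.1, -0.05, 0.5])))   # -> roughly (1160.8, 421.6)
```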
2. IR camera calibration
$z_{ir}\,(u_{ir}, v_{ir}, 1)^{T} = W_{ir}\,(x_{ir}, y_{ir}, z_{ir})^{T}$
Calibration results after optimization (with uncertainties):
Focal Length:    fc = [ 379.40726   378.54472 ] +/- [ 40.73354   34.75290 ]
Principal point: cc = [ 263.73696   201.72450 ] +/- [ 9.17740   30.29723 ]
Skew:            alpha_c = [ 0.00000 ] +/- [ 0.00000 ]  => angle of pixel axes = 90.00000 +/- 0.00000 degrees
Distortion:      kc = [ 0.03377   -0.04195   0.00519   0.00734   0.00000 ] +/- [ 0.07368   0.25678   0.01111   0.00965   0.00000 ]
Pixel error:     err = [ 0.88997   0.92779 ]
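Both outputs also report distortion coefficients kc = [k1 k2 p1 p2 k3] in the toolbox's plumb-bob model. The sketch below only illustrates what those numbers mean: it applies the radial (k1, k2) and tangential (p1, p2) terms to a normalized image point. The helper function and the test point are my own, not part of the toolbox.

```python
def distort(xn, yn, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) plumb-bob distortion
    to a normalized image point (x/z, y/z)."""
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return xd, yd

# IR coefficients from the output above; the normalized test point is made up
print(distort(0.2, -0.1, 0.03377, -0.04195, 0.00519, 0.00734))
```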
From the two equations above, the correspondence between pixels in the two images can be derived. First, the RGB image is resampled to the same size as the depth image.
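The first step of that correspondence is to back-project each depth pixel into a 3D point in the IR/depth camera frame. A minimal sketch, assuming the IR intrinsics reported above, depth values in millimetres, and no distortion:

```python
import numpy as np

# IR intrinsics from the calibration output above
fx_ir, fy_ir = 379.40726, 378.54472
cx_ir, cy_ir = 263.73696, 201.72450

def backproject(u, v, depth_mm):
    """Lift a depth pixel (u, v) with depth in millimetres to a 3D point
    (metres) in the IR camera frame by inverting the pinhole projection."""
    z = depth_mm / 1000.0
    x = (u - cx_ir) * z / fx_ir
    y = (v - cy_ir) * z / fy_ir
    return np.array([x, y, z])

print(backproject(256, 212, 800))   # pixel near the principal point, 0.8 m away
```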
3. Computing the relative pose between the two cameras
In the two calibrations above, each camera uses its own optical center as the origin of the world coordinate system. To relate the two cameras, a single unified world frame is needed; here the depth (IR) camera center is taken as the world origin.
The RGB camera coordinates are then related to this origin as follows:
$(x_{w}, y_{w}, z_{w})^{T} = (x_{ir}, y_{ir}, z_{ir})^{T} = R\,(x_{rgb}, y_{rgb}, z_{rgb})^{T} + T$
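Combining the three relations, each depth pixel can be mapped to its RGB pixel: back-project with $W_{ir}$, transform with R and T into the RGB frame, then project with $W_{rgb}$. The sketch below uses hypothetical R and T values (identity rotation, ~52 mm baseline) purely as placeholders; the real extrinsics must come from the stereo calibration, and distortion is again ignored.

```python
import numpy as np

K_ir = np.array([[379.40726, 0.0, 263.73696],
                 [0.0, 378.54472, 201.72450],
                 [0.0, 0.0, 1.0]])
K_rgb = np.array([[1094.03583, 0.0, 942.00992],
                  [0.0, 1087.37528, 530.35240],
                  [0.0, 0.0, 1.0]])

# Hypothetical extrinsics: RGB camera roughly 52 mm to the side of the IR
# camera, no rotation.  The real R, T must come from stereo calibration.
R = np.eye(3)
T = np.array([0.052, 0.0, 0.0])

def depth_pixel_to_rgb_pixel(u_ir, v_ir, depth_mm):
    """Map one depth-image pixel to its corresponding RGB-image pixel."""
    z = depth_mm / 1000.0
    # Back-project into the IR (world) frame
    p_ir = z * np.linalg.inv(K_ir) @ np.array([u_ir, v_ir, 1.0])
    # Move into the RGB camera frame: p_ir = R @ p_rgb + T  =>  p_rgb = R^T (p_ir - T)
    p_rgb = R.T @ (p_ir - T)
    # Project with the RGB intrinsics
    uv = K_rgb @ p_rgb
    return uv[:2] / uv[2]

print(depth_pixel_to_rgb_pixel(256, 212, 800))
```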
Addendum: on the registration of the two cameras, the method described in many current blog posts, which computes the mapping directly from the color and IR extrinsics, does not hold up.
Because the two images have different resolutions, the extrinsics are not expressed in a common framework. This should really be treated as a stereo matching problem.
A Quantitative Comparison of Calibration Methods for RGB-D Sensors Using Different Technologies
https://github.com/rgbdemo/rgbdemo/blob/master/calibration/calibrate_kinect.cpp
Toolkits:
MATLAB built-in toolbox:
http://www.ilovematlab.cn/thread-267670-1-1.html
http://www.cnblogs.com/li-yao7758258/p/5929145.html
Other toolboxes:
Zhang's calibration method: the Camera Calibration Toolbox for Matlab. The standard version of this toolbox works fine, but the other (memory-efficient) version that does not load all images into memory seems to have a bug.
http://www.vision.caltech.edu/bouguetj/calib_doc/index.html
http://blog.csdn.net/felix86/article/details/38401447
Kinect 2.0 SDK study notes (4): aligning the depth image with the color image
ORB-SLAM2 basic theory (1): https://blog.csdn.net/qq_18661939/article/details/51829573
