Reposted from http://wiki.opencv.org.cn/index.php/%E6%91%84%E5%83%8F%E5%A4%B4%E6%A0%87%E5%AE%9A
Camera Calibration
Introduction to the calibration principle
- Pinhole camera model: see Cv照相機定標和三維重建#針孔相機模型和變形 (camera calibration and 3D reconstruction: pinhole camera model and distortion)
Calibration program 1 (the sample program bundled with OpenCV)
Overview
You can use the camera calibration sample program that ships with OpenCV directly; it is located at \OpenCV\samples\c\calibration.cpp. The program can take its input directly from a USB camera, from an AVI file, or from images already stored on the computer.
Usage
Compile and run the program. If no command-line arguments are given, it prints a usage message telling you which arguments to supply. For example, if your binary is calibration.exe (taking Windows as an example), you can invoke it with a command line such as the following:
calibration -w 6 -h 8 -s 2 -n 10 -o camera.yml -op -oe [<list_of_views.txt>]
Command line and parameter reference
Usage: calibration
     -w <board_width>          # number of inner corners along one dimension of the board
     -h <board_height>         # number of inner corners along the other dimension
     [-n <number_of_frames>]   # number of frames used for calibration
                               # (if not specified, it will be set to the number
                               #  of board views actually available)
     [-d <delay>]              # a minimum delay in ms between subsequent attempts to capture a next view
                               # (used only for video capturing)
     [-s <square_size>]        # square size in some user-defined units (1 by default)
     [-o <out_camera_params>]  # the output filename for intrinsic [and extrinsic] parameters
     [-op]                     # write detected feature points
     [-oe]                     # write extrinsic parameters
     [-zt]                     # assume zero tangential distortion
     [-a <aspect_ratio>]       # fix aspect ratio (fx/fy)
     [-p]                      # fix the principal point at the center
     [-v]                      # flip the captured images around the horizontal axis
     [input_data]              # input data, one of the following:
                               #  - a text file containing a list of board images
                               #  - the name of a video file showing the board
                               # if input_data is not specified, a live view from the camera is used
In the example board image (not reproduced here), there are 9 inner corners horizontally and 6 vertically, so the corresponding command-line parameters would be: -w 9 -h 6.
- Repeated use shows that when the -p option is omitted, the computed results have larger errors, mainly in the estimates of u0 and v0 (the principal point). It is therefore recommended to always pass -p.
list_of_views.txt
This text file lists the images on your computer that are to be used for calibration.
 view000.png
 view001.png
 #view002.png
 view003.png
 view010.png
 one_extra_view.jpg
In the example above, the image whose name is prefixed with a hash sign is ignored.
- On Windows there is a convenient way to generate this text file. In a CMD window, enter the following command (assuming all jpg files in the current directory are to be used for calibration and the output file is a.txt):
dir *.jpg /B >> a.txt
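For reference, here is a minimal C++ sketch of how such a list file might be parsed, one image name per line, skipping names prefixed with '#'. The parsing logic and the file name a.txt follow the example above; this is not the sample program's actual reader.

    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    // Read an image list file, ignoring empty lines and names prefixed with '#'.
    std::vector<std::string> readImageList(const std::string& filename)
    {
        std::vector<std::string> names;
        std::ifstream in(filename.c_str());
        std::string line;
        while (std::getline(in, line))
        {
            if (line.empty() || line[0] == '#')
                continue;                     // commented-out view, skip it
            names.push_back(line);
        }
        return names;
    }

    int main()
    {
        std::vector<std::string> views = readImageList("a.txt");
        for (size_t i = 0; i < views.size(); i++)
            std::cout << views[i] << std::endl;
        return 0;
    }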
When the input is a camera or an AVI file
"When the live video from camera is used as input, the following hot-keys may be used:\n" " <ESC>, 'q' - quit the program\n" " 'g' - start capturing images\n" " 'u' - switch undistortion on/off\n";
Code
Please copy the relevant code directly from calibration.cpp.
Calibration program 2
OpenCV does not provide a complete example for this, so I put one together myself and post it here for the record.
- First make a calibration pattern and print it on A4 paper. Decide on the working distance, then choose the number of squares on the chessboard, e.g. 8x6; the pattern I made (image not reproduced here) was 8x8.
- Then use cvFindChessboardCorners to find the 2D positions of the chessboard corners in the camera image. cvFindChessboardCorners is not very stable and sometimes fails to work; image enhancement preprocessing may be needed.
- Compute the real-world distances, which should be 3D distances. I set the square size to 21.6 mm, i.e. roughly two centimeters on the A4 paper.
- Then use cvCalibrateCamera2 to compute the intrinsic parameters,
- Finally use cvUndistort2 to correct the image distortion.
The results are as follows (result images not reproduced here):
Code
For how each of these functions is used, see Cv照相機定標和三維重建#照相機定標.
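The original page embedded the code and result images here, and they are not reproduced above. As a substitute, below is a minimal sketch (not the author's original code) of the five steps just listed, using the legacy C API functions named in them. Only the 21.6 mm square size comes from the text; the board dimensions (an 8x8-square board has 7x7 inner corners), file names, and view count are illustrative assumptions.

    #include <opencv/cv.h>
    #include <opencv/highgui.h>
    #include <stdio.h>

    #define N_VIEWS   10                /* number of calibration images (illustrative) */
    #define BOARD_W   7                 /* an 8x8-square board has 7x7 inner corners   */
    #define BOARD_H   7
    #define N_CORNERS (BOARD_W * BOARD_H)
    #define SQUARE_MM 21.6f             /* square size from the text: 21.6 mm          */

    int main(void)
    {
        CvMat* object_points = cvCreateMat(N_VIEWS * N_CORNERS, 3, CV_32FC1);
        CvMat* image_points  = cvCreateMat(N_VIEWS * N_CORNERS, 2, CV_32FC1);
        CvMat* point_counts  = cvCreateMat(N_VIEWS, 1, CV_32SC1);
        CvMat* camera_matrix = cvCreateMat(3, 3, CV_32FC1);
        CvMat* dist_coeffs   = cvCreateMat(5, 1, CV_32FC1);
        CvSize board_size    = cvSize(BOARD_W, BOARD_H);
        CvSize image_size    = cvSize(0, 0);
        int view, i, good_views = 0;

        for (view = 0; view < N_VIEWS; view++)
        {
            char name[64];
            CvPoint2D32f corners[N_CORNERS];
            int corner_count = 0;

            sprintf(name, "view%03d.png", view);      /* hypothetical file names */
            IplImage* img = cvLoadImage(name, CV_LOAD_IMAGE_COLOR);
            if (!img)
                continue;
            image_size = cvGetSize(img);

            /* step 2: locate the 2D corner positions (may fail on poor images) */
            int found = cvFindChessboardCorners(img, board_size, corners, &corner_count,
                            CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_NORMALIZE_IMAGE);
            if (found && corner_count == N_CORNERS)
            {
                IplImage* gray = cvCreateImage(image_size, IPL_DEPTH_8U, 1);
                cvCvtColor(img, gray, CV_BGR2GRAY);
                cvFindCornerSubPix(gray, corners, corner_count, cvSize(11, 11), cvSize(-1, -1),
                    cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
                cvReleaseImage(&gray);

                for (i = 0; i < N_CORNERS; i++)
                {
                    int row = good_views * N_CORNERS + i;
                    /* step 3: 3D model points - a flat grid in the Z=0 plane,
                       spaced SQUARE_MM apart (the real 3D distances) */
                    CV_MAT_ELEM(*object_points, float, row, 0) = (i % BOARD_W) * SQUARE_MM;
                    CV_MAT_ELEM(*object_points, float, row, 1) = (i / BOARD_W) * SQUARE_MM;
                    CV_MAT_ELEM(*object_points, float, row, 2) = 0.0f;
                    CV_MAT_ELEM(*image_points,  float, row, 0) = corners[i].x;
                    CV_MAT_ELEM(*image_points,  float, row, 1) = corners[i].y;
                }
                CV_MAT_ELEM(*point_counts, int, good_views, 0) = N_CORNERS;
                good_views++;
            }
            cvReleaseImage(&img);
        }

        if (good_views < 3)
        {
            fprintf(stderr, "not enough usable views\n");
            return 1;
        }

        /* step 4: compute the intrinsics from the views that worked */
        CvMat obj_sub, img_sub, cnt_sub;
        cvGetRows(object_points, &obj_sub, 0, good_views * N_CORNERS, 1);
        cvGetRows(image_points,  &img_sub, 0, good_views * N_CORNERS, 1);
        cvGetRows(point_counts,  &cnt_sub, 0, good_views, 1);
        cvCalibrateCamera2(&obj_sub, &img_sub, &cnt_sub, image_size,
                           camera_matrix, dist_coeffs, NULL, NULL, 0);

        /* step 5: undistort one image as a visual check */
        IplImage* test = cvLoadImage("view000.png", CV_LOAD_IMAGE_COLOR);
        if (test)
        {
            IplImage* rect = cvCloneImage(test);
            cvUndistort2(test, rect, camera_matrix, dist_coeffs);
            cvSaveImage("undistorted.png", rect);
        }
        return 0;
    }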
Every camera has its own unique parameters, such as the focal length, the principal point, and the lens distortion model. The process of finding these intrinsic parameters is called camera calibration. Calibration matters for augmented-reality applications because it models both the perspective transformation and the lens distortion in the output image; to give users the best experience, augmented objects should be rendered with the same perspective projection. Calibrating a camera requires a special pattern image, such as a chessboard or black circles on a white background. The camera to be calibrated takes 10-15 photos of this pattern from different angles, and a calibration algorithm then finds the optimal intrinsic camera parameters and distortion vector.
Show the distortion removal for the images too. When you work with an image list it is not possible to remove the distortion inside the loop, so you must do it after the loop. Taking advantage of this, I'll now expand the undistort function, which in fact first calls initUndistortRectifyMap to find the transformation matrices and then performs the transformation using the remap function. Because the map calculation needs to be done only once after a successful calibration, using this expanded form may speed up your application:
    if( s.inputType == Settings::IMAGE_LIST && s.showUndistorsed )
    {
        Mat view, rview, map1, map2;
        initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
            getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
            imageSize, CV_16SC2, map1, map2);

        for(int i = 0; i < (int)s.imageList.size(); i++ )
        {
            view = imread(s.imageList[i], 1);
            if(view.empty())
                continue;
            remap(view, rview, map1, map2, INTER_LINEAR);
            imshow("Image View", rview);
            char c = waitKey();
            if( c == ESC_KEY || c == 'q' || c == 'Q' )
                break;
        }
    }
Notes on accuracy
http://stackoverflow.com/questions/12794876/how-to-verify-the-correctness-of-calibration-of-a-webcam
Hmm, are you looking for "handsome" or "accurate"?
Camera calibration is one of the very few subjects in computer vision where accuracy can be directly quantified in physical terms, and verified by a physical experiment. And the usual lesson is that (a) your numbers are just as good as the effort (and money) you put into them, and (b) real accuracy (as opposed to imagined one) is expensive, so you should figure out in advance what your application really requires in the way of precision.
If you look up the geometrical specs of even very cheap lens/ccd combos (in the megapixel range and above), it becomes readily apparent that sub-sub-mm calibration accuracies are theoretically achievable within a table-top volume of space. Just work out (from the spec sheet of your camera's sensor) the solid angle spanned by one pixel - you'll be dazzled by the spatial resolution you have within reach of your wallet. However, actually achieving REPEATABLY something near that theoretical accuracy takes work.
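To make the "solid angle spanned by one pixel" estimate concrete, here is a small worked example; the numbers (3.75 um pixel pitch, 4 mm focal length, 1 m working distance) are illustrative assumptions for a cheap webcam-class sensor, not figures from the answer.

    #include <cstdio>

    int main()
    {
        const double pixel_pitch_m  = 3.75e-6;  // 3.75 um pixel pitch (assumed)
        const double focal_length_m = 4.0e-3;   // 4 mm lens (assumed)
        const double working_dist_m = 1.0;      // 1 m to the target (assumed)

        // Small-angle approximation: angle subtended by one pixel, in radians
        double angle_per_pixel = pixel_pitch_m / focal_length_m;

        // Footprint of one pixel at the working distance
        double footprint_m = angle_per_pixel * working_dist_m;

        printf("angle per pixel : %.3f mrad\n", angle_per_pixel * 1e3);  // ~0.938 mrad
        printf("pixel footprint : %.3f mm at %.1f m\n", footprint_m * 1e3, working_dist_m);
        printf("0.1 px corners  : ~%.3f mm\n", footprint_m * 1e3 * 0.1); // sub-0.1 mm in theory
        return 0;
    }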
Here are some recommendations (from personal experience) for getting a good calibration experience with home-grown equipment.
- If your method uses a flat target ("checkerboard" or similar), manufacture a good one. Choose a very flat backing (for the size you mention, window glass 5 mm thick or more is excellent, though obviously fragile). Verify its flatness against another edge (or, better, a laser beam). Print the pattern on thick-stock paper that won't stretch too easily. Lay it after printing on the backing before gluing and verify that the square sides are indeed very nearly orthogonal. Cheap ink-jet or laser printers are not designed for rigorous geometrical accuracy, do not trust them blindly. Best practice is to use a professional print shop (even a Kinko's will do a much better job than most home printers). Then attach the pattern very carefully to the backing, using spray-on glue and slowly wiping with soft cloth to avoid bubbles and stretching. Wait for a day or longer for the glue to cure and the glue-paper stress to reach its long-term steady state. Finally measure the corner positions with a good caliper and a magnifier. You may get away with one single number for the "average" square size, but it must be an average of actual measurements, not of hopes-n-prayers. Best practice is to actually use a table of measured positions.
- Watch your temperature and humidity changes: paper adsorbs water from the air, the backing dilates and contracts. It is amazing how many articles you can find that report sub-millimeter calibration accuracies without quoting the environment conditions (or the target response to them). Needless to say, they are mostly crap. The lower temperature dilation coefficient of glass compared to common sheet metal is another reason for preferring the former as a backing.
- Needless to say, you must disable the auto-focus feature of your camera, if it has one: focusing physically moves one or more pieces of glass inside your lens, thus changing (slightly) the field of view and (usually by a lot) the lens distortion and the principal point.
- Place the camera on a stable mount that won't vibrate easily. Focus (and f-stop the lens, if it has an iris) as is needed for the application (not the calibration - the calibration procedure and target must be designed for the app's needs, not the other way around). Do not even think of touching camera or lens afterwards. If at all possible, avoid "complex" lenses - e.g. zoom lenses or very wide angle ones. Fisheye or anamorphic lenses require models much more complex than stock OpenCV makes available.
- Take lots of measurements and pictures. You want hundreds of measurements (corners) per image, and tens of images. Where data is concerned, the more the merrier. A 10x10 checkerboard is the absolute minimum I would consider. I normally worked at 20x20.
- Span the calibration volume when taking pictures. Ideally you want your measurements to be uniformly distributed in the volume of space you will be working with. Most importantly, make sure to angle the target significantly with respect to the focal axis in some of the pictures - to calibrate the focal length you need to "see" some real perspective foreshortening. For best results use a repeatable mechanical jig to move the target. A good one is a one-axis turntable, which will give you an excellent prior model for the motion of the target.
- Minimize vibrations and associated motion blur when taking photos.
- Use good lighting. Really. It's amazing how often I see people realize late in the game that you need photons to calibrate any camera :-) Use diffuse ambient lighting, and bounce it off white cards on both sides of the field of view.
- Watch what your corner extraction code is doing. Draw the detected corner positions on top of the images (in Matlab or Octave, for example), and judge their quality. Removing outliers early using tight thresholds is better than trusting the robustifier in your bundle adjustment code. (A sketch of this overlay check in OpenCV follows after this list.)
- Constrain your model if you can. For example, don't try to estimate the principal point if you don't have a good reason to believe that your lens is significantly off-center w.r.t. the image, just fix it at the image center on your first attempt. The principal point location is usually poorly observed, because it is inherently confused with the center of the nonlinear distortion and by the component parallel to the image plane of the target-to-camera's translation. Getting it right requires a carefully designed procedure that yields three or more independent vanishing points of the scene and a very good bracketing of the nonlinear distortion. Similarly, unless you have reason to suspect that the lens focal axis is really tilted w.r.t. the sensor plane, fix at zero the (1,2) component of the camera matrix. Generally speaking, use the simplest model that satisfies your measurements and your application needs (that's Ockham's razor for you). (The corresponding OpenCV calibration flags are sketched after this list.)
- When you have a calibration solution from your optimizer with low enough RMS error (a few tenths of a pixel, typically, see the other answer below), plot the XY pattern of the residual errors (predicted_xy - measured_xy for each corner in all images) and see if it's a round-ish cloud centered at (0, 0). "Clumps" of outliers or non-roundness of the cloud of residuals are screaming alarm bells that something is very wrong - most likely outliers, or an inappropriate lens distortion model. (A sketch of computing these residuals follows after this list.)
- Take extra images to verify the accuracy of the solution - use them to verify that the lens distortion is actually removed, and that the planar homography predicted by the calibrated model actually matches the one recovered from the measured corners.
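The overlay check from the corner-extraction point above is nearly a one-liner in OpenCV; a minimal sketch, with the image name and board size (as with -w 9 -h 6) as placeholder assumptions:

    #include <opencv2/opencv.hpp>
    #include <vector>

    using namespace cv;

    int main()
    {
        Mat img = imread("view000.png");   // placeholder image name
        if (img.empty())
            return 1;
        Size boardSize(9, 6);              // inner corner counts, as with -w 9 -h 6

        std::vector<Point2f> corners;
        bool found = findChessboardCorners(img, boardSize, corners);
        // Overlay the detected corners so their ordering and placement
        // can be judged by eye before trusting them in the optimizer.
        drawChessboardCorners(img, boardSize, corners, found);
        imshow("detected corners", img);
        waitKey();
        return 0;
    }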
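The model constraints from the "constrain your model" point map directly onto cv::calibrateCamera flags; a minimal sketch, assuming the usual per-view 3D/2D correspondences have already been collected:

    #include <opencv2/opencv.hpp>
    #include <vector>

    using namespace cv;

    // Calibrate with a constrained model: principal point fixed at the image
    // center, zero tangential distortion, and a fixed fx/fy aspect ratio.
    double calibrateConstrained(const std::vector<std::vector<Point3f> >& objectPoints,
                                const std::vector<std::vector<Point2f> >& imagePoints,
                                Size imageSize, Mat& cameraMatrix, Mat& distCoeffs)
    {
        std::vector<Mat> rvecs, tvecs;

        // CALIB_FIX_ASPECT_RATIO keeps the initial fx/fy ratio, so the camera
        // matrix must be seeded (the identity gives fx/fy = 1).
        cameraMatrix = Mat::eye(3, 3, CV_64F);
        distCoeffs   = Mat::zeros(5, 1, CV_64F);

        int flags = CALIB_FIX_PRINCIPAL_POINT   // keep (u0, v0) at the image center
                  | CALIB_ZERO_TANGENT_DIST     // p1 = p2 = 0
                  | CALIB_FIX_ASPECT_RATIO;     // keep fx/fy fixed

        return calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs, flags);
    }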
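And the residual cloud from the point before last can be produced with cv::projectPoints; a minimal sketch that dumps one residual per line, e.g. for plotting in Octave (variable names assume the usual calibrateCamera inputs and outputs):

    #include <opencv2/opencv.hpp>
    #include <cstdio>
    #include <vector>

    using namespace cv;

    // Print per-corner residuals (predicted_xy - measured_xy) for all views;
    // redirect the output to a file and plot it as a 2D scatter.
    void dumpResiduals(const std::vector<std::vector<Point3f> >& objectPoints,
                       const std::vector<std::vector<Point2f> >& imagePoints,
                       const std::vector<Mat>& rvecs, const std::vector<Mat>& tvecs,
                       const Mat& cameraMatrix, const Mat& distCoeffs)
    {
        for (size_t v = 0; v < objectPoints.size(); v++)
        {
            std::vector<Point2f> projected;
            projectPoints(objectPoints[v], rvecs[v], tvecs[v],
                          cameraMatrix, distCoeffs, projected);
            for (size_t i = 0; i < projected.size(); i++)
            {
                Point2f r = projected[i] - imagePoints[v][i];
                printf("%f %f\n", r.x, r.y);
            }
        }
    }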
There is a problem with your camera calibration: cv::calibrateCamera() returns the root mean square (RMS) reprojection error [1], which should be between 0.1 and 1.0 pixels in a good calibration. For a point of reference, I get approximately 0.25 px RMS error using my custom stereo camera made of two hardware-synchronized Playstation Eye cameras running at the 640 x 480 resolution.
Are you sure that the pixel coordinates returned by cv::findChessboardCorners() are in the same order as those in obj? If the axes were flipped, you would get symptoms similar to those that you are describing.
[1]: OpenCV calculates the reprojection error by projecting the three-dimensional chessboard points into the image using the final set of calibration parameters and comparing the projected positions with the detected corner positions. An RMS error of 300 means that, on average, each of these projected points is 300 px away from its actual position.
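For reference, the RMS figure described in [1] can be reproduced by hand roughly as follows; a minimal sketch, with variable names assuming the usual calibrateCamera inputs and outputs:

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    using namespace cv;

    // Recompute the RMS reprojection error: project the 3D chessboard points
    // with the final calibration parameters and compare with detected corners.
    double reprojectionRMS(const std::vector<std::vector<Point3f> >& objectPoints,
                           const std::vector<std::vector<Point2f> >& imagePoints,
                           const std::vector<Mat>& rvecs, const std::vector<Mat>& tvecs,
                           const Mat& cameraMatrix, const Mat& distCoeffs)
    {
        double totalSqErr = 0;
        size_t totalPoints = 0;
        for (size_t v = 0; v < objectPoints.size(); v++)
        {
            std::vector<Point2f> projected;
            projectPoints(objectPoints[v], rvecs[v], tvecs[v],
                          cameraMatrix, distCoeffs, projected);
            double err = norm(imagePoints[v], projected, NORM_L2); // sqrt of sum of squares
            totalSqErr  += err * err;
            totalPoints += projected.size();
        }
        return std::sqrt(totalSqErr / totalPoints);
    }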
================ My own usage workflow, summarized from reading documentation and other references ================
Notes on calibrating a camera by hand
Method 1:
Tools: one camera; one printed black-and-white chessboard sheet, with a known number of inner corners in each direction and a known square size;
Calibration procedure:
1. Open a command-line window and change to the directory D:\workSpace\cameraCalibation\x64\Debug\
2. Run the command: calibration -w=10 -h=7 -s=1 -o=camera.yml -op -oe -p
A brief explanation of the parameters:
-w: number of inner corners along the width of the pattern;
-h: number of inner corners along the height of the pattern;
-s: the size (side length) of one square, in user-defined units - not its area;
-o: write the camera's intrinsic parameters to the given file;
-p: fix the principal point at the image center;
-n: number of images used to compute the camera parameters; 10-20 is usually enough;
-op: write the detected feature points to the output file;
-oe: write the extrinsic parameters to the output file;
Note: the -op and -oe outputs are not used during undistortion;
3. Press 'g' to start capturing images; while capturing, rotate the board and translate it over some distance;
4. When it finishes, the program reports an RMS value representing the accuracy of the calibration: the closer to 0 the better. A value between 0 and 1.0 is generally good enough; repeat the steps above and adjust the setup to make this value as small as possible;
5. A file camera.yml is generated in that directory, recording the camera parameters and the accuracy value. This file is then used to undistort images so that pixels land at more accurate positions (a sketch of reading it back follows below).
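To use the generated camera.yml for undistortion, the parameters can be read back with cv::FileStorage. A minimal sketch; the node names camera_matrix and distortion_coefficients are what the OpenCV sample typically writes, but verify them against your own file, and the image name is a placeholder:

    #include <opencv2/opencv.hpp>

    using namespace cv;

    int main()
    {
        // Load the intrinsics saved by the calibration run
        FileStorage fs("camera.yml", FileStorage::READ);
        Mat cameraMatrix, distCoeffs;
        fs["camera_matrix"]           >> cameraMatrix;   // node names assumed
        fs["distortion_coefficients"] >> distCoeffs;
        fs.release();

        // Undistort a test image with the loaded parameters
        Mat img = imread("test.jpg");                    // placeholder image name
        if (img.empty() || cameraMatrix.empty())
            return 1;
        Mat undistorted;
        undistort(img, undistorted, cameraMatrix, distCoeffs);
        imwrite("test_undistorted.jpg", undistorted);
        return 0;
    }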