The conversion from a depth image to a laser scan is implemented in the ROS package depthimage_to_laserscan; this post walks through how the computation works. Converting a point cloud to laser data follows the same idea: only the points that fall within a given height band are projected onto the scan plane.
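For reference, a minimal sketch of that point-cloud variant might look like the following. It is not code from any package: the Point struct, the height band [z_min, z_max], and the scan parameters are illustrative assumptions, and the frame is taken as x forward, y left, z up.

```cpp
#include <cmath>
#include <limits>
#include <vector>

struct Point { double x, y, z; };  // point in the sensor frame, metres (x forward, y left, z up)

// Project every point whose height lies in [z_min, z_max] onto a planar scan.
// angle_min / angle_max / num_beams describe the virtual laser; all values are assumptions.
std::vector<double> cloudToScan(const std::vector<Point>& cloud,
                                double z_min, double z_max,
                                double angle_min, double angle_max, int num_beams)
{
  std::vector<double> ranges(num_beams, std::numeric_limits<double>::infinity());
  const double increment = (angle_max - angle_min) / num_beams;

  for (const Point& p : cloud) {
    if (p.z < z_min || p.z > z_max) continue;          // keep only the height band
    double angle = std::atan2(p.y, p.x);               // bearing of the point
    double range = std::hypot(p.x, p.y);               // planar distance to the sensor
    int index = static_cast<int>((angle - angle_min) / increment);
    if (index < 0 || index >= num_beams) continue;     // outside the scan's field of view
    if (range < ranges[index]) ranges[index] = range;  // keep the closest hit per beam
  }
  return ranges;
}
```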
1. Principle of the depth-image-to-laser-scan conversion
The principle is illustrated in Figure (1). For any given depth-image point \(m(u,v,z)\), the conversion to a laser reading proceeds in three steps:
a. Transform the depth-image point \(m(u,v,z)\) into the point \(M(x,y,z)\) in the depth-camera coordinate frame; for the details, see "the principle of converting a depth image to a point cloud".
b. Compute the angle \(AOC\) between the lines \(AO\) and \(CO\):
\(\theta = \arctan(x/z)\)
c. Map the angle \(AOC\) into the corresponding laser bin. Given the laser's minimum and maximum angles \([\alpha,\beta]\) and \(N\) beams in total, the scan can be represented by an array \(laser[N]\). The index \(n\) at which the point \(M\) lands in \(laser\) is then:
\(n=(\theta - \alpha)/((\beta - \alpha)/N) =N(\theta - \alpha)/(\beta - \alpha)\)
The value of \(laser[n]\) is the distance \(r\) from \(C\), the projection of \(M\) onto the scan (x–z) plane, to the camera's optical centre \(O\), i.e.
\(laser[n] = r=OC=\sqrt{z^{2}+x^{2}}\)
Figure (1)
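To make the three steps concrete, here is a small self-contained sketch of the per-pixel computation. The intrinsics fx, cx, the pixel, and the scan parameters below are made-up values for illustration only.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
  // Illustrative camera intrinsics and scan parameters (assumed values).
  const double fx = 570.0, cx = 320.0;          // focal length and principal point (pixels)
  const double alpha = -0.5, beta = 0.5;        // angle_min / angle_max [rad]
  const int    N = 640;                         // number of laser beams

  // Step a: back-project a depth pixel m(u, v, z) into the camera frame.
  const double u = 400.0, z = 2.0;              // pixel column and depth in metres
  const double x = (u - cx) * z / fx;           // lateral offset in the camera frame

  // Step b: bearing of the point, theta = atan(x / z).
  const double theta = std::atan2(x, z);

  // Step c: bin index and range.
  const int    n = static_cast<int>((theta - alpha) / ((beta - alpha) / N));
  const double r = std::sqrt(x * x + z * z);    // laser[n] = r = OC

  std::printf("x = %.3f m, theta = %.4f rad, n = %d, r = %.3f m\n", x, theta, n, r);
  return 0;
}
```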
2. Code
Below is the implementation with explanatory comments added (see also the original source code):
```cpp
  /**
   * Converts the depth image to a laserscan using the DepthTraits to assist.
   *
   * This uses a method to inverse project each pixel into a LaserScan angular increment. This method first projects the pixel
   * forward into Cartesian coordinates, then calculates the range and angle for this point. When multiple points correspond to
   * a specific angular measurement, then the shortest range is used.
   *
   * @param depth_msg The UInt16 or Float32 encoded depth message.
   * @param cam_model The image_geometry camera model for this image.
   * @param scan_msg The output LaserScan.
   * @param scan_height The number of vertical pixels to feed into each angular_measurement.
   */
  template<typename T>
  void convert(const sensor_msgs::ImageConstPtr& depth_msg, const image_geometry::PinholeCameraModel& cam_model,
               const sensor_msgs::LaserScanPtr& scan_msg, const int& scan_height) const{
    // Use correct principal point from calibration
    float center_x = cam_model.cx();  // principal point x
    float center_y = cam_model.cy();  // principal point y

    // Combine unit conversion (if necessary) with scaling by focal length for computing (X,Y)
    double unit_scaling = depthimage_to_laserscan::DepthTraits<T>::toMeters( T(1) );
    float constant_x = unit_scaling / cam_model.fx();
    float constant_y = unit_scaling / cam_model.fy();

    const T* depth_row = reinterpret_cast<const T*>(&depth_msg->data[0]);
    int row_step = depth_msg->step / sizeof(T);

    int offset = (int)(cam_model.cy() - scan_height/2);
    depth_row += offset*row_step; // Offset to center of image

    for(int v = offset; v < offset+scan_height_; v++, depth_row += row_step){
      for (int u = 0; u < (int)depth_msg->width; u++) // Loop over each pixel in row
      {
        T depth = depth_row[u];

        double r = depth; // Assign to pass through NaNs and Infs
        // Angle AOC: atan2(x, z). The depth appears in both numerator and denominator, so it divides out.
        double th = -atan2((double)(u - center_x) * constant_x, unit_scaling);
        // Index of the laser bin this angle falls into.
        int index = (th - scan_msg->angle_min) / scan_msg->angle_increment;

        if (depthimage_to_laserscan::DepthTraits<T>::valid(depth)){ // Not NaN or Inf
          // Calculate in XYZ
          double x = (u - center_x) * depth * constant_x;
          double z = depthimage_to_laserscan::DepthTraits<T>::toMeters(depth);

          // Calculate actual distance (the range reported by the laser)
          r = sqrt(pow(x, 2.0) + pow(z, 2.0));
        }

        // Determine if this point should be used (within the configured range limits and closer than the stored value).
        if(use_point(r, scan_msg->ranges[index], scan_msg->range_min, scan_msg->range_max)){
          scan_msg->ranges[index] = r;
        }
      }
    }
  }

  image_geometry::PinholeCameraModel cam_model_; ///< image_geometry helper class for managing sensor_msgs/CameraInfo messages.

  float scan_time_;   ///< Stores the time between scans.
  float range_min_;   ///< Stores the current minimum range to use.
  float range_max_;   ///< Stores the current maximum range to use.
  int scan_height_;   ///< Number of pixel rows to use when producing a laserscan from an area.
  std::string output_frame_id_; ///< Output frame_id for each laserscan. This is likely NOT the camera's frame_id.
};
```
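The use_point() helper called above is not reproduced here; in the package it decides whether a newly computed range should overwrite the value already stored in that bin. A simplified sketch of that behaviour, preferring finite readings inside [range_min, range_max] and keeping the closest one, might look as follows; it is an approximation, not the package's exact implementation.

```cpp
#include <cmath>

// Simplified stand-in for the package's use_point(): accept the new range if it is
// a finite value inside the configured limits and closer than what is stored so far.
// (The real helper handles the NaN/Inf bookkeeping for empty bins in more detail.)
bool use_point(float new_value, float old_value, float range_min, float range_max)
{
  bool new_valid = std::isfinite(new_value) && new_value >= range_min && new_value <= range_max;
  bool old_valid = std::isfinite(old_value) && old_value >= range_min && old_value <= range_max;

  if (!new_valid) return false;   // never overwrite with an invalid reading
  if (!old_valid) return true;    // any valid reading beats an empty/invalid bin
  return new_value < old_value;   // otherwise keep the shortest range per bin
}
```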