This article is reposted from my blog.
1. Introduction
A camera projects a three-dimensional scene or object onto a two-dimensional plane. This dimensionality reduction usually loses information, and reconstruction is the task of recovering the original 3D scene or object from many of the captured 2D images. The overall pipeline is:
- Capture a set of images from multiple viewpoints, or extract frames from a video; this image sequence is the input to the whole system
- Across the multi-view images, extract sparse feature points from texture (these form a point cloud) and use them to estimate the camera positions and parameters
- Once the camera parameters are estimated and the feature points are matched, a denser point cloud can be computed
- Reconstruct the object surface from these points and apply texture mapping, which recovers the 3D scene or object
In short: image acquisition -> feature matching -> camera parameter estimation -> sparse point cloud -> depth estimation -> dense point cloud -> surface reconstruction -> texture mapping
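The information loss in the first step can be seen from a toy pinhole-projection sketch (illustrative Python only, not OpenSfM code): a 3D point (X, Y, Z) in camera coordinates lands at (f·X/Z, f·Y/Z) on the image plane, so the depth along the viewing ray is discarded and has to be recovered from multiple views.

```python
# Toy pinhole camera: project a 3D point given in camera coordinates
# onto the image plane. The depth Z is divided out, so it cannot be
# recovered from a single image -- this is exactly the information
# that multi-view reconstruction restores.
def project(point3d, focal=1.0):
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

# Two different 3D points on the same viewing ray project to the
# same 2D location:
print(project((1.0, 2.0, 4.0)))  # (0.25, 0.5)
print(project((2.0, 4.0, 8.0)))  # (0.25, 0.5)
```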
2. Downloading OpenSfM
2.1 Clone the original OpenSfM GitHub repository
- Visit the OpenSfM project page for the installation steps:
git clone --recursive https://github.com/mapillary/OpenSfM
If the download is slow, you can route git through a proxy: git config --global https.https://github.com.proxy socks5://127.0.0.1:1080
Note that only a recursive clone fetches the contents of the OpenSfM/opensfm/src/third_party/pybind11 folder; otherwise you must download the pybind11 zip yourself and extract it to that location:
rmdir pybind11/ && git clone https://github.com/pybind/pybind11.git
- Alternatively, download the OpenSfM release v0.5.1, extract it, then enter the pybind11 folder and download the pybind11 zip there
Note
It is best to use OpenSfM v0.5.1 together with pybind/pybind11 v2.2.4
2.2 Installing dependencies
Install the dependencies with the following command:
sudo apt-get install build-essential cmake libatlas-base-dev libgoogle-glog-dev \
libopencv-dev libsuitesparse-dev python3-pip python3-dev python3-numpy python3-opencv \
python3-pyproj python3-scipy python3-yaml libeigen3-dev
Install opengv following the official instructions; the concrete steps are below (PYTHON_INSTALL_DIR is the directory to install into):
mkdir source && cd source/
git clone --recurse-submodules -j8 https://github.com/laurentkneip/opengv.git
cd opengv && mkdir build && cd build
cmake .. -DBUILD_TESTS=OFF -DBUILD_PYTHON=ON -DPYBIND11_PYTHON_VERSION=3.6 -DPYTHON_INSTALL_DIR=/usr/local/lib/python3.6/dist-packages/
sudo make install
Install ceres; you can follow these steps:
cd ../../
curl -L http://ceres-solver.org/ceres-solver-1.14.0.tar.gz | tar xz
cd ./ceres-solver-1.14.0 && mkdir build-code && cd build-code
cmake .. -DCMAKE_C_FLAGS=-fPIC -DCMAKE_CXX_FLAGS=-fPIC -DBUILD_EXAMPLES=OFF -DBUILD_TESTING=OFF
sudo make -j4 install
Install the pip dependencies, then build the opensfm library:
cd ../../../ && pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
python3 setup.py build
At this point opensfm is installed.
3. Testing
From the OpenSfM root directory, run:
bin/opensfm_run_all data/berlin
python3 -m http.server
Click into the viewer folder, open reconstruction.html, and then select the file generated by the commands above, data/berlin/reconstruction.meshed.json; alternatively, find the merged.ply file under the undistorted folder and open it.
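The reconstruction file is plain JSON, so it can also be inspected without the viewer. A minimal sketch, assuming the 0.5.x schema (a top-level list of reconstructions, each holding `cameras`, `shots` and `points`); the sample data inlined here is made up for illustration:

```python
import json

# Made-up sample mimicking the reconstruction.json layout: a list of
# reconstructions, each with cameras, shots (images) and 3D points.
sample = """
[
  {
    "cameras": {"cam1": {"projection_type": "perspective"}},
    "shots": {"img1.jpg": {}, "img2.jpg": {}},
    "points": {
      "0": {"coordinates": [0.1, 0.2, 1.5], "color": [200, 180, 160]},
      "1": {"coordinates": [0.4, -0.1, 2.0], "color": [90, 90, 90]}
    }
  }
]
"""
reconstructions = json.loads(sample)
for i, rec in enumerate(reconstructions):
    print(f"reconstruction {i}: {len(rec['shots'])} shots, "
          f"{len(rec['points'])} points")
```

For the real output you would pass data/berlin/reconstruction.meshed.json to json.load instead of the inline sample.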
If you want to use SIFT feature extraction, run pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-contrib-python==3.4.2.16 (the opencv-python version itself does not need to change)
4. Notes
If your PATH environment variable prefers a Python virtual environment (i.e. which python3 prints a path inside that virtualenv) and you still want to set up opensfm against the system Python: follow the official installation instructions exactly, but replace python3 with /usr/bin/python3 and pip3 with /usr/bin/pip3 (if your machine's PATH has been modified), i.e.:
# Clone the full OpenSfM repository (commit 2317fbb), including pybind11 and the other submodules
git clone --recursive https://github.com/mapillary/OpenSfM opensfm
# Enter the opensfm root directory
cd opensfm
# Update the submodules again to make sure they are current
git submodule update --init --recursive
# Refresh the package lists
sudo apt-get update
# Install the dependency packages
sudo apt-get install -y \
build-essential vim curl cmake git \
libatlas-base-dev libeigen3-dev \
libgoogle-glog-dev libopencv-dev libsuitesparse-dev \
python3-dev python3-numpy python3-opencv python3-pip \
python3-pyproj python3-scipy python3-yaml
# --------- Build and install ceres ---------
# Create a temporary directory
mkdir source && cd source
# Download and extract ceres v1.14
curl -L http://ceres-solver.org/ceres-solver-1.14.0.tar.gz | tar xz
# Create the build folder
cd ceres-solver-1.14.0 && mkdir build && cd build
# cmake
cmake .. -DCMAKE_C_FLAGS=-fPIC -DCMAKE_CXX_FLAGS=-fPIC -DBUILD_EXAMPLES=OFF -DBUILD_TESTING=OFF
# Build and install with 48 threads
sudo make -j48 install
# ---------- Build and install opengv -------
# Go back to the source folder
cd ../../
# Download opengv
git clone https://github.com/paulinus/opengv.git
# Update the submodules to make sure the code is current
cd opengv && git submodule update --init --recursive
# Create the build folder
mkdir build && cd build
# cmake
cmake .. -DBUILD_TESTS=OFF \
-DBUILD_PYTHON=ON \
-DPYBIND11_PYTHON_VERSION=3.6 \
-DPYTHON_INSTALL_DIR=/usr/local/lib/python3.6/dist-packages/
# Build and install with 48 threads
sudo make -j48 install
# Install the Python libraries OpenSfM needs
/usr/bin/pip3 install \
exifread==2.1.2 gpxpy==1.1.2 networkx==1.11 \
numpy pyproj==1.9.5.1 pytest==3.0.7 \
python-dateutil==2.6.0 PyYAML==3.12 \
scipy xmltodict==0.10.2 \
loky repoze.lru
# ---------- Build opensfm ----------
/usr/bin/python3 setup.py build
# Install a specific opencv-contrib version so the SIFT feature extractor is available
/usr/bin/pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-contrib-python==3.4.2.16
After installation, when running OpenSfM first adjust PATH (export PATH) so that /usr/bin comes first, ensuring that python3 invokes /usr/bin/python3.
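Why the PATH order matters can be demonstrated with a small self-contained sketch using Python's `shutil.which`; the two directories below are temporary stand-ins for a virtualenv's bin/ and /usr/bin, not real system paths:

```python
import os
import shutil
import tempfile

# Create two fake "python3" executables in two temporary directories,
# standing in for a virtualenv's bin/ and the system /usr/bin.
tmp = tempfile.mkdtemp()
venv_bin = os.path.join(tmp, "venv", "bin")
sys_bin = os.path.join(tmp, "usr", "bin")
for d in (venv_bin, sys_bin):
    os.makedirs(d)
    exe = os.path.join(d, "python3")
    with open(exe, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(exe, 0o755)  # must be executable for which() to find it

# With the virtualenv directory first, `which python3` resolves there:
os.environ["PATH"] = os.pathsep.join([venv_bin, sys_bin])
assert shutil.which("python3") == os.path.join(venv_bin, "python3")

# With the system directory first, the system interpreter wins:
os.environ["PATH"] = os.pathsep.join([sys_bin, venv_bin])
assert shutil.which("python3") == os.path.join(sys_bin, "python3")
```

The shell's `which python3` follows the same first-match-in-PATH rule, which is why putting /usr/bin first is enough.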
5. Configuration file
Every opensfm run that generates a point cloud needs not only the raw images but also a configuration file config.yaml; the data folder is laid out as follows:
lab
├── config.yaml
└── images
├── DJI_1_0239.JPG
├── DJI_1_0240.JPG
├── DJI_1_0242.JPG
└── DJI_1_0268.JPG
1 directory, 5 files
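Note that config.yaml only needs to list the options you want to change; OpenSfM merges it over the defaults listed below. A minimal example overriding two of those options:

```yaml
# config.yaml -- only overrides are needed; everything else keeps its default
feature_type: SIFT    # use SIFT instead of the default HAHOG
processes: 8          # use more worker processes
```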
The default options of the configuration file are listed below; see opensfm.org for the reference:
# Metadata
use_exif_size: yes
default_focal_prior: 0.85
# Params for features
feature_type: HAHOG # Feature type (AKAZE, SURF, SIFT, HAHOG, ORB)
feature_root: 1 # If 1, apply square root mapping to features
feature_min_frames: 4000 # If fewer frames are detected, sift_peak_threshold/surf_hessian_threshold is reduced.
feature_process_size: 2048 # Resize the image if its size is larger than specified. Set to -1 for original size
feature_use_adaptive_suppression: no
# Params for SIFT
sift_peak_threshold: 0.1 # Smaller value -> more features
sift_edge_threshold: 10 # See OpenCV doc
# Params for SURF
surf_hessian_threshold: 3000 # Smaller value -> more features
surf_n_octaves: 4 # See OpenCV doc
surf_n_octavelayers: 2 # See OpenCV doc
surf_upright: 0 # See OpenCV doc
# Params for AKAZE (See details in lib/src/third_party/akaze/AKAZEConfig.h)
akaze_omax: 4 # Maximum octave evolution of the image 2^sigma (coarsest scale sigma units)
akaze_dthreshold: 0.001 # Detector response threshold to accept point
akaze_descriptor: MSURF # Feature type
akaze_descriptor_size: 0 # Size of the descriptor in bits. 0->Full size
akaze_descriptor_channels: 3 # Number of feature channels (1,2,3)
akaze_kcontrast_percentile: 0.7
akaze_use_isotropic_diffusion: no
# Params for HAHOG
hahog_peak_threshold: 0.00001
hahog_edge_threshold: 10
hahog_normalize_to_uchar: yes
# Params for general matching
lowes_ratio: 0.8 # Ratio test for matches
matcher_type: FLANN # FLANN, BRUTEFORCE, or WORDS
symmetric_matching: yes # Match symmetrically or one-way
# Params for FLANN matching
flann_branching: 8 # See OpenCV doc
flann_iterations: 10 # See OpenCV doc
flann_checks: 20 # Smaller -> Faster (but might lose good matches)
# Params for BoW matching
bow_file: bow_hahog_root_uchar_10000.npz
bow_words_to_match: 50 # Number of words to explore per feature.
bow_num_checks: 20 # Number of matching features to check.
bow_matcher_type: FLANN # Matcher type to assign words to features
# Params for VLAD matching
vlad_file: bow_hahog_root_uchar_64.npz
# Params for matching
matching_gps_distance: 150 # Maximum gps distance between two images for matching
matching_gps_neighbors: 0 # Number of images to match selected by GPS distance. Set to 0 to use no limit (or disable if matching_gps_distance is also 0)
matching_time_neighbors: 0 # Number of images to match selected by time taken. Set to 0 to disable
matching_order_neighbors: 0 # Number of images to match selected by image name. Set to 0 to disable
matching_bow_neighbors: 0 # Number of images to match selected by BoW distance. Set to 0 to disable
matching_bow_gps_distance: 0 # Maximum GPS distance for preempting images before using selection by BoW distance. Set to 0 to disable
matching_bow_gps_neighbors: 0 # Number of images (selected by GPS distance) to preempt before using selection by BoW distance. Set to 0 to use no limit (or disable if matching_bow_gps_distance is also 0)
matching_bow_other_cameras: False # If True, BoW image selection will use N neighbors from the same camera + N neighbors from any different camera.
matching_vlad_neighbors: 0 # Number of images to match selected by VLAD distance. Set to 0 to disable
matching_vlad_gps_distance: 0 # Maximum GPS distance for preempting images before using selection by VLAD distance. Set to 0 to disable
matching_vlad_gps_neighbors: 0 # Number of images (selected by GPS distance) to preempt before using selection by VLAD distance. Set to 0 to use no limit (or disable if matching_vlad_gps_distance is also 0)
matching_vlad_other_cameras: False # If True, VLAD image selection will use N neighbors from the same camera + N neighbors from any different camera.
matching_use_filters: False # If True, removes static matches using ad-hoc heuristics
# Params for geometric estimation
robust_matching_threshold: 0.004 # Outlier threshold for fundamental matrix estimation as portion of image width
robust_matching_calib_threshold: 0.004 # Outlier threshold for essential matrix estimation during matching in radians
robust_matching_min_match: 20 # Minimum number of matches to accept matches between two images
five_point_algo_threshold: 0.004 # Outlier threshold for essential matrix estimation during incremental reconstruction in radians
five_point_algo_min_inliers: 20 # Minimum number of inliers for considering a two view reconstruction valid
five_point_refine_match_iterations: 10 # Number of LM iterations to run when refining relative pose during matching
five_point_refine_rec_iterations: 1000 # Number of LM iterations to run when refining relative pose during reconstruction
triangulation_threshold: 0.006 # Outlier threshold for accepting a triangulated point in radians
triangulation_min_ray_angle: 1.0 # Minimum angle between views to accept a triangulated point
triangulation_type: FULL # Triangulation type: either considering all rays (FULL), or using a RANSAC variant (ROBUST)
resection_threshold: 0.004 # Outlier threshold for resection in radians
resection_min_inliers: 10 # Minimum number of resection inliers to accept it
# Params for track creation
min_track_length: 2 # Minimum number of features/images per track
# Params for bundle adjustment
loss_function: SoftLOneLoss # Loss function for the ceres problem (see: http://ceres-solver.org/modeling.html#lossfunction)
loss_function_threshold: 1 # Threshold on the squared residuals. Usually cost is quadratic for smaller residuals and sub-quadratic above.
reprojection_error_sd: 0.004 # The standard deviation of the reprojection error
exif_focal_sd: 0.01 # The standard deviation of the exif focal length in log-scale
principal_point_sd: 0.01 # The standard deviation of the principal point coordinates
radial_distorsion_k1_sd: 0.01 # The standard deviation of the first radial distortion parameter
radial_distorsion_k2_sd: 0.01 # The standard deviation of the second radial distortion parameter
radial_distorsion_k3_sd: 0.01 # The standard deviation of the third radial distortion parameter
radial_distorsion_p1_sd: 0.01 # The standard deviation of the first tangential distortion parameter
radial_distorsion_p2_sd: 0.01 # The standard deviation of the second tangential distortion parameter
bundle_outlier_filtering_type: FIXED # Type of threshold for filtering outliers: either a fixed value (FIXED) or based on the actual distribution (AUTO)
bundle_outlier_auto_ratio: 3.0 # For AUTO filtering type, projections with larger reprojection than ratio-times-mean, are removed
bundle_outlier_fixed_threshold: 0.006 # For FIXED filtering type, projections with larger reprojection error after bundle adjustment are removed
optimize_camera_parameters: yes # Optimize internal camera parameters during bundle
bundle_max_iterations: 100 # Maximum optimizer iterations.
retriangulation: yes # Retriangulate all points from time to time
retriangulation_ratio: 1.2 # Retriangulate when the number of points grows by this ratio
bundle_interval: 999999 # Bundle after adding 'bundle_interval' cameras
bundle_new_points_ratio: 1.2 # Bundle when the number of points grows by this ratio
local_bundle_radius: 3 # Max image graph distance for images to be included in local bundle adjustment
local_bundle_min_common_points: 20 # Minimum number of common points between images to be considered neighbors
local_bundle_max_shots: 30 # Max number of shots to optimize during local bundle adjustment
save_partial_reconstructions: no # Save reconstructions at every iteration
# Params for GPS alignment
use_altitude_tag: no # Use or ignore EXIF altitude tag
align_method: orientation_prior # orientation_prior or naive
align_orientation_prior: horizontal # horizontal, vertical or no_roll
bundle_use_gps: yes # Enforce GPS position in bundle adjustment
bundle_use_gcp: no # Enforce Ground Control Point position in bundle adjustment
# Params for navigation graph
nav_min_distance: 0.01 # Minimum distance for a possible edge between two nodes
nav_step_pref_distance: 6 # Preferred distance between camera centers
nav_step_max_distance: 20 # Maximum distance for a possible step edge between two nodes
nav_turn_max_distance: 15 # Maximum distance for a possible turn edge between two nodes
nav_step_forward_view_threshold: 15 # Maximum difference of angles in degrees between viewing directions for forward steps
nav_step_view_threshold: 30 # Maximum difference of angles in degrees between viewing directions for other steps
nav_step_drift_threshold: 36 # Maximum motion drift with respect to step directions for steps in degrees
nav_turn_view_threshold: 40 # Maximum difference of angles in degrees with respect to turn directions
nav_vertical_threshold: 20 # Maximum vertical angle difference in motion and viewing direction in degrees
nav_rotation_threshold: 30 # Maximum general rotation in degrees between cameras for steps
# Params for image undistortion
undistorted_image_format: jpg # Format in which to save the undistorted images
undistorted_image_max_size: 100000 # Max width and height of the undistorted image
# Params for depth estimation
depthmap_method: PATCH_MATCH_SAMPLE # Raw depthmap computation algorithm (PATCH_MATCH, BRUTE_FORCE, PATCH_MATCH_SAMPLE)
depthmap_resolution: 640 # Resolution of the depth maps
depthmap_num_neighbors: 10 # Number of neighboring views
depthmap_num_matching_views: 6 # Number of neighboring views used for each depthmap
depthmap_min_depth: 0 # Minimum depth in meters. Set to 0 to auto-infer from the reconstruction.
depthmap_max_depth: 0 # Maximum depth in meters. Set to 0 to auto-infer from the reconstruction.
depthmap_patchmatch_iterations: 3 # Number of PatchMatch iterations to run
depthmap_patch_size: 7 # Size of the correlation patch
depthmap_min_patch_sd: 1.0 # Patches with lower standard deviation are ignored
depthmap_min_correlation_score: 0.1 # Minimum correlation score to accept a depth value
depthmap_same_depth_threshold: 0.01 # Threshold to measure depth closeness
depthmap_min_consistent_views: 3 # Min number of views that should reconstruct a point for it to be valid
depthmap_save_debug_files: no # Save debug files with partial reconstruction results
# Other params
processes: 1 # Number of threads to use
# Params for submodel split and merge
submodel_size: 80 # Average number of images per submodel
submodel_overlap: 30.0 # Radius of the overlapping region between submodels
submodels_relpath: "submodels" # Relative path to the submodels directory
submodel_relpath_template: "submodels/submodel_%04d" # Template to generate the relative path to a submodel directory
submodel_images_relpath_template: "submodels/submodel_%04d/images" # Template to generate the relative path to a submodel images directory
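The `lowes_ratio` parameter above controls Lowe's ratio test: a putative match is kept only when its best descriptor distance is clearly smaller than the distance to the second-best candidate. A minimal sketch of that test (illustrative only; the function name is made up, not OpenSfM API):

```python
# Lowe's ratio test: accept a match only if the best candidate is
# clearly better than the runner-up. A lower ratio is stricter and
# keeps fewer, more reliable matches.
def passes_ratio_test(dist_best, dist_second, ratio=0.8):
    return dist_best < ratio * dist_second

print(passes_ratio_test(0.3, 0.9))  # unambiguous match -> True
print(passes_ratio_test(0.7, 0.8))  # ambiguous match   -> False
```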