【API】A Tutorial on Using the Open3D Library


Official documentation

1. PointCloud and IO Operations

homepage_api: PointCloud

Common methods (a short usage sketch follows the list):

  • print(pcd) -> prints the number of points/faces in the model

  • Attributes:

    • normals
    • colors
    • points
  • Methods:

    • select_by_index
    • get_center
    • get_max_bound
    • get_min_bound
    • get_oriented_bounding_box
    • crop
    • dimension
    • normalize_normals
    • paint_uniform_color
    • estimate_normals # vertex normal estimation
    • compute_mean_and_covariance
    • compute_nearest_neighbor_distance
    • compute_point_cloud_distance
    • remove_non_finite_points
    • remove_radius_outlier
    • remove_statistical_outlier
    • uniform_down_sample # downsamples the point cloud by keeping every n-th point
    • voxel_down_sample # bins the points into a 3D voxel grid and averages the points in each voxel
    • voxel_down_sample_and_trace
    • rotate
    • scale
    • transform
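
A minimal usage sketch of a few of these methods (the file path is illustrative):

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("bunny_simp.pcd")
print(pcd)                                       # e.g. "PointCloud with N points."
print(pcd.get_center())                          # centroid of the points
print(pcd.get_min_bound(), pcd.get_max_bound())  # axis-aligned extents
pcd.paint_uniform_color([0.5, 0.5, 0.5])         # paint all points gray
down = pcd.voxel_down_sample(voxel_size=0.05)    # spatial downsampling
sub = pcd.select_by_index(list(range(0, len(pcd.points), 10)))  # keep every 10th point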

1.1. Reading and Saving Models

homepage

pcd = o3d.io.read_point_cloud(uu.rpath("open3d_/bunny_simp.pcd"))
# ply = o3d.io.read_point_cloud(uu.rpath("open3d_/bunny_simp.ply"))
print(pcd)

# %% Save a copy of the model
res = o3d.io.write_point_cloud(uu.rpath("open3d_/bunny_simp.ply"), pcd)
# res == True/False

Of course, models can also be read and saved as triangle meshes:

mesh = o3d.io.read_triangle_mesh("../../TestData/knot.ply")
print(mesh)
o3d.io.write_triangle_mesh("copy_of_knot.ply", mesh)

There are also APIs for reading and writing 2D images (jpg/png):

img = o3d.io.read_image("../../test_data/lena_color.jpg")
o3d.io.write_image("copy_of_lena_color.jpg", img)

1.2. Point Cloud Visualization

homepage

o3d.visualization.draw_geometries(geometry_list: List[open3d.cpu.pybind.geometry.Geometry], window_name: str = 'Open3D', width: int = 1920, height: int = 1080, left: int = 50, top: int = 50, point_show_normal: bool = False, mesh_show_wireframe: bool = False, mesh_show_back_face: bool = False, lookat: numpy.ndarray[float64[3, 1]], up: numpy.ndarray[float64[3, 1]], front: numpy.ndarray[float64[3, 1]], zoom: float) -> None

# Example
o3d.visualization.draw_geometries([pcd],
                                  zoom=0.3412,
                                  front=[0.4257, -0.2125, -0.8795],
                                  lookat=[2.6172, 2.0475, 1.532],
                                  up=[-0.0694, -0.9768, 0.2024])

Use Visualizer to control the details of the display window (it supports incremental updates):

vis = o3d.visualization.Visualizer()
vis.create_window()
# add the two point clouds to the visualizer
vis.add_geometry(source)
# vis.add_geometry(target)
vis.get_render_option().point_size = 2  # point size
vis.get_render_option().background_color = np.asarray([0, 0, 0])  # background color

vis.update_geometry()  # for Open3D versions after 0.8.0, pass the geometry: vis.update_geometry(source)
vis.poll_events()
vis.update_renderer()
vis.run()

Use a JSON configuration file to define the rendering details:

vis.get_render_option().load_from_json("../../test_data/renderoption.json")

Change field of view:

def custom_draw_geometry_with_custom_fov(pcd, fov_step):
    vis = o3d.visualization.Visualizer()
    vis.create_window()
    vis.add_geometry(pcd)
    ctr = vis.get_view_control()
    print("Field of view (before changing) %.2f" % ctr.get_field_of_view())
    ctr.change_field_of_view(step=fov_step)
    print("Field of view (after changing) %.2f" % ctr.get_field_of_view())
    vis.run()
    vis.destroy_window()

Animating the view (callback functions):

def custom_draw_geometry_with_rotation(pcd):

    def rotate_view(vis):
        ctr = vis.get_view_control()
        ctr.rotate(10.0, 0.0)
        return False

    o3d.visualization.draw_geometries_with_animation_callback([pcd], rotate_view)

Keyboard shortcuts are supported:

def custom_draw_geometry_with_key_callback(pcd):

    def change_background_to_black(vis):
        opt = vis.get_render_option()
        opt.background_color = np.asarray([0, 0, 0])
        return False

    def load_render_option(vis):
        vis.get_render_option().load_from_json(
            "../../test_data/renderoption.json")
        return False

    def capture_depth(vis):
        depth = vis.capture_depth_float_buffer()
        plt.imshow(np.asarray(depth))
        plt.show()
        return False

    def capture_image(vis):
        image = vis.capture_screen_float_buffer()
        plt.imshow(np.asarray(image))
        plt.show()
        return False

    key_to_callback = {}
    key_to_callback[ord("K")] = change_background_to_black
    key_to_callback[ord("R")] = load_render_option
    key_to_callback[ord(",")] = capture_depth
    key_to_callback[ord(".")] = capture_image
    o3d.visualization.draw_geometries_with_key_callbacks([pcd], key_to_callback)

1.3. Interactive Operations in Open3D

homepage_api: crop-geometry

pcd = o3d.io.read_point_cloud(uu.rpath("open3d_/bunny.ply"))
o3d.visualization.draw_geometries_with_editing([pcd])

This function reads a point cloud and calls draw_geometries_with_editing(), which provides vertex selection and cropping.

  1. Press 'Y' twice to align geometry with negative direction of y-axis
  2. Press 'K' to lock screen and to switch to selection mode
  3. Drag for rectangle selection, or use ctrl + left click for polygon selection
  4. Press 'C' to get a selected geometry and to save it
  5. Press 'F' to switch to freeview mode

1.4. Other Rendering Tips

1.5. Data Format Conversion (numpy)

Converting a numpy array to Open3D format:

target_points = np.array(target.points)
target_origin = o3d.geometry.PointCloud()
target_origin.points = o3d.utility.Vector3dVector(target_points)

# or
nd_arr = np.ones((100, 3), dtype=np.float32)
point_cloud = o3d.geometry.PointCloud()
point_cloud.points = o3d.utility.Vector3dVector(nd_arr[:, 0:3].reshape(-1, 3))
o3d.visualization.draw_geometries([point_cloud])

Converting Open3D format to a numpy array:

# Convert Open3D.o3d.geometry.PointCloud to numpy array
xyz_load = np.asarray(pcd_load.points)

1.6. Coloring a Point Cloud

source = o3d.io.read_point_cloud("plys/test.ply")

source.paint_uniform_color([1, 0.706, 0])      # yellow
source.paint_uniform_color([0, 0.651, 0.929])  # blue

1.7. Voxel Downsampling

downpcd = pcd.voxel_down_sample(voxel_size=0.05)

Downsamples the point cloud with a voxel size of 0.05; the larger the voxel_size, the fewer points are kept.
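
A quick check of the effect, assuming pcd and downpcd from the snippet above:

print(len(pcd.points))      # number of points before downsampling
print(len(downpcd.points))  # fewer points remain as voxel_size grows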

1.8. Vertex normal estimation

downpcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

The two key arguments:

  • radius = 0.1
  • max_nn = 30

specify the search radius and the maximum number of nearest neighbors.
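
Once estimated, the normals can be inspected as a numpy array (a minimal sketch):

import numpy as np
print(np.asarray(downpcd.normals)[:5])  # first five normal vectors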

1.9. Minimal Bounding Boxes

aabb = chair.get_axis_aligned_bounding_box()
aabb.color = (1, 0, 0)
obb = chair.get_oriented_bounding_box()
obb.color = (0, 1, 0)
o3d.visualization.draw_geometries([chair, aabb, obb],
                                  zoom=0.7,
                                  front=[0.5439, -0.2333, -0.8060],
                                  lookat=[2.4615, 2.1331, 1.338],
                                  up=[-0.1781, -0.9708, 0.1608])

1.10. Convex Hull

pcl = o3dtut.get_bunny_mesh().sample_points_poisson_disk(number_of_points=2000)
hull, _ = pcl.compute_convex_hull()
hull_ls = o3d.geometry.LineSet.create_from_triangle_mesh(hull)
hull_ls.paint_uniform_color((1, 0, 0))
o3d.visualization.draw_geometries([pcl, hull_ls])

1.11. Transformation

homepage

Translate:

mesh = o3d.geometry.TriangleMesh.create_coordinate_frame()
mesh_tx = copy.deepcopy(mesh).translate((1.3, 0, 0))
mesh_ty = copy.deepcopy(mesh).translate((0, 1.3, 0))
print(f'Center of mesh: {mesh.get_center()}')
print(f'Center of mesh tx: {mesh_tx.get_center()}')
print(f'Center of mesh ty: {mesh_ty.get_center()}')

Rotation:

mesh = o3d.geometry.TriangleMesh.create_coordinate_frame()
mesh_r = copy.deepcopy(mesh)
R = mesh.get_rotation_matrix_from_xyz((np.pi / 2, 0, np.pi / 4))
mesh_r.rotate(R, center=(0, 0, 0))
o3d.visualization.draw_geometries([mesh, mesh_r])

Scale:

mesh = o3d.geometry.TriangleMesh.create_coordinate_frame()
mesh_s = copy.deepcopy(mesh).translate((2, 0, 0))
mesh_s.scale(0.5, center=mesh_s.get_center())
o3d.visualization.draw_geometries([mesh, mesh_s])
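
The generic transform method (listed in section 1) applies an arbitrary 4x4 homogeneous matrix. A minimal sketch, reusing the mesh from above:

import copy
import numpy as np

T = np.eye(4)
T[:3, 3] = [1.3, 0, 0]  # a pure translation written as a homogeneous matrix
mesh_t = copy.deepcopy(mesh).transform(T)
print(f'Center of transformed mesh: {mesh_t.get_center()}')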

2. Filters: Filtering Algorithms

2.1. Downsample

homepage_api: Point cloud outlier removal

  • pcd.voxel_down_sample(voxel_size=0.02)

    Downsample the point cloud with a voxel size of 0.02.

  • pcd.uniform_down_sample(every_k_points=5)

    Every 5th point is selected.

    The biggest difference from voxel_down_sample() is that the points are kept by index rather than by spatial binning, so the resulting model is an unordered point cloud.

  • Select down sample

    The following helper function uses select_by_index, which takes a binary mask to output only the selected points. The selected points and the non-selected points are then visualized.

    def display_inlier_outlier(cloud, ind):
        inlier_cloud = cloud.select_by_index(ind)
        outlier_cloud = cloud.select_by_index(ind, invert=True)
    
        print("Showing outliers (red) and inliers (gray): ")
        outlier_cloud.paint_uniform_color([1, 0, 0])
        inlier_cloud.paint_uniform_color([0.8, 0.8, 0.8])
        o3d.visualization.draw_geometries([inlier_cloud, outlier_cloud],
                                          zoom=0.3412,
                                          front=[0.4257, -0.2125, -0.8795],
                                          lookat=[2.6172, 2.0475, 1.532],
                                          up=[-0.0694, -0.9768, 0.2024])
    

2.2. Point cloud outlier removal

homepage_api: Point cloud outlier removal

  • Statistical outlier removal

    cl, ind = voxel_down_pcd.remove_statistical_outlier(nb_neighbors=20,
                                                        std_ratio=2.0)
    display_inlier_outlier(voxel_down_pcd, ind)
    

    Removes points that are farther from their neighbors than the average for the point cloud.

    • nb_neighbors: specifies how many neighbors are taken into account in order to calculate the average distance for a given point
    • std_ratio: sets the threshold level based on the standard deviation of the average distances across the point cloud. The lower this number, the more aggressive the filter.
  • Radius outlier removal

    cl, ind = voxel_down_pcd.remove_radius_outlier(nb_points=16, radius=0.05)
    display_inlier_outlier(voxel_down_pcd, ind)
    

    Removes points that have few neighbors within a given sphere.

    • nb_points: the minimum number of points the sphere should contain
    • radius: the radius of the sphere used to count the neighbors

2.3. Plane Segmentation: Segmenting a Supporting Plane from a Point Cloud with RANSAC

To find the plane with the largest support in the point cloud, we can use segment_plane.

  • distance_threshold: the maximum distance a point may have to the estimated plane to be counted as an inlier;
  • ransac_n: the number of points that are randomly sampled to estimate a plane;
  • num_iterations: how often a random plane is sampled and verified.

The function returns the plane as (a, b, c, d), such that every point (x, y, z) on the plane satisfies ax + by + cz + d = 0. It also returns the list of indices of the inlier points.

plane_model, inliers = downpcd_inlier_cloud.segment_plane(distance_threshold=0.01, ransac_n=5, num_iterations=10000)
[a, b, c, d] = plane_model
print(f"Plane equation: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")

3. Manual Registration

homepage_api: manual-registration

import numpy as np
import copy
import open3d as o3d

def draw_registration_result(source, target, transformation):
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp])

def pick_points(pcd):
    vis = o3d.visualization.VisualizerWithEditing()
    vis.create_window()
    vis.add_geometry(pcd)
    vis.run()  # user picks points
    vis.destroy_window()
    return vis.get_picked_points()

print("Demo for manual ICP")
source = o3d.io.read_point_cloud("/d/Home/workspace/MCAD/vision3D//ICP/cloud_bin_0.pcd")
target = o3d.io.read_point_cloud("/d/Home/workspace/MCAD/vision3D/ICP/cloud_bin_2.pcd")
print("Visualization of two point clouds before manual alignment")
draw_registration_result(source, target, np.identity(4))

# pick points from two point clouds and builds correspondences
picked_id_source = pick_points(source)
picked_id_target = pick_points(target)
assert (len(picked_id_source) >= 3 and len(picked_id_target) >= 3)
assert (len(picked_id_source) == len(picked_id_target))
corr = np.zeros((len(picked_id_source), 2))
corr[:, 0] = picked_id_source
corr[:, 1] = picked_id_target

# estimate rough transformation using correspondences
print("Compute a rough transform using the correspondences given by user")
p2p = o3d.pipelines.registration.TransformationEstimationPointToPoint()
trans_init = p2p.compute_transformation(source, target,
                                        o3d.utility.Vector2iVector(corr))

# point-to-point ICP for refinement
print("Perform point-to-point ICP refinement")
threshold = 0.03  # 3cm distance threshold
reg_p2p = o3d.pipelines.registration.registration_icp(
    source, target, threshold, trans_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
draw_registration_result(source, target, reg_p2p.transformation)

  1. Pick at least three correspondences using [shift + left click]; press [shift + right click] to undo a pick.
  2. After picking points, press 'Q' to close the window.

4. Stitching Multiple Models in a Project

wechat: Open3D中文教程-多方向點雲位置配准

Read several point cloud models from files; the point clouds are downsampled and visualized together.

def load_point_clouds(voxel_size=0.0):
    pcds = []
    for i in range(3):
        pcd = o3d.io.read_point_cloud("../../test_data/ICP/cloud_bin_%d.pcd" %
                                      i)
        # downsample
        pcd_down = pcd.voxel_down_sample(voxel_size=voxel_size)
        pcds.append(pcd_down)
    return pcds

voxel_size = 0.02
pcds_down = load_point_clouds(voxel_size)
o3d.visualization.draw_geometries(pcds_down,
                                  zoom=0.3412,
                                  front=[0.4257, -0.2125, -0.8795],
                                  lookat=[2.6172, 2.0475, 1.532],
                                  up=[-0.0694, -0.9768, 0.2024])

4.1. ICP Registration (PoseGraph-Based Alignment)

wechat: Open3D中文教程-多方向點雲位置配准

Tutorial Doc

ICP (iterative closest point) is currently the most widely used method for point cloud registration. The principle: one point cloud is repeatedly transformed, and the distance from each of its points to the other point cloud is computed and converted into a fitness score; the transformation is then refined iteratively until this total distance is minimized. ICP is generally applied after global registration: the two point clouds are first roughly aligned, and ICP then fine-tunes the alignment.
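
To make the fitness score concrete, a minimal sketch (assuming two roughly aligned clouds source and target and an initial transform trans_init, as in section 3; the threshold value is illustrative):

threshold = 0.02  # hypothetical max correspondence distance
evaluation = o3d.pipelines.registration.evaluate_registration(
    source, target, threshold, trans_init)
print(evaluation)  # a RegistrationResult with fitness and inlier_rmse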


A pose graph has two key elements: nodes (geometry fragments) and edges.

  • A node is a piece of geometry Pi associated with a pose matrix Ti that transforms Pi into the global space. The set of transformation matrices {Ti} are the unknown variables to be optimized. Adjacent nodes usually have large overlap and can be registered with point-to-plane ICP.

  • Each edge contains a transformation matrix Tij that aligns the source geometry Pi to the target geometry Pj.

This tutorial uses point-to-plane ICP to estimate the transformations. In more complex cases, this pairwise registration problem should be solved with global registration.

Pairwise registration is error-prone: the misaligned pairs can outnumber the correctly aligned ones. The edges of the pose graph are therefore divided into two classes. Odometry edges connect temporally close, adjacent nodes; a local registration algorithm such as ICP can align them reliably. Loop-closure edges connect any non-adjacent nodes; their alignment is found by global registration and is less reliable. In Open3D, these two classes of edges are distinguished by the uncertain parameter in the PoseGraphEdge initializer.

The script below creates a pose graph with three nodes and three edges. Two of the edges are odometry edges (uncertain = False) and one is a loop-closure edge (uncertain = True).

def pairwise_registration(source, target):
    print("Apply point-to-plane ICP")
    icp_coarse = o3d.pipelines.registration.registration_icp(
        source, target, max_correspondence_distance_coarse, np.identity(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    icp_fine = o3d.pipelines.registration.registration_icp(
        source, target, max_correspondence_distance_fine,
        icp_coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    transformation_icp = icp_fine.transformation
    information_icp = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
        source, target, max_correspondence_distance_fine,
        icp_fine.transformation)
    return transformation_icp, information_icp

def full_registration(pcds, max_correspondence_distance_coarse,
                      max_correspondence_distance_fine):
    pose_graph = o3d.pipelines.registration.PoseGraph()
    odometry = np.identity(4)
    pose_graph.nodes.append(o3d.pipelines.registration.PoseGraphNode(odometry))
    n_pcds = len(pcds)
    for source_id in range(n_pcds):
        for target_id in range(source_id + 1, n_pcds):
            transformation_icp, information_icp = pairwise_registration(
                pcds[source_id], pcds[target_id])
            print("Build o3d.pipelines.registration.PoseGraph")
            if target_id == source_id + 1:  # odometry case
                odometry = np.dot(transformation_icp, odometry)
                pose_graph.nodes.append(
                    o3d.pipelines.registration.PoseGraphNode(
                        np.linalg.inv(odometry)))
                pose_graph.edges.append(
                    o3d.pipelines.registration.PoseGraphEdge(source_id,
                                                             target_id,
                                                             transformation_icp,
                                                             information_icp,
                                                             uncertain=False))
            else:  # loop closure case
                pose_graph.edges.append(
                    o3d.pipelines.registration.PoseGraphEdge(source_id,
                                                             target_id,
                                                             transformation_icp,
                                                             information_icp,
                                                             uncertain=True))
    return pose_graph

print("Full registration ...")
max_correspondence_distance_coarse = voxel_size * 15
max_correspondence_distance_fine = voxel_size * 1.5
with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug) as cm:
    pose_graph = full_registration(pcds_down,
                                   max_correspondence_distance_coarse,
                                   max_correspondence_distance_fine)

Open3D performs pose graph optimization with the global_optimization function. Two optimization methods are available: GlobalOptimizationGaussNewton and GlobalOptimizationLevenbergMarquardt. The latter is recommended because it converges better. The GlobalOptimizationConvergenceCriteria class can be used to set the maximum number of iterations and various other optimization parameters.

GlobalOptimizationOption defines several options:

  • max_correspondence_distance defines the correspondence threshold
  • edge_prune_threshold is the threshold for pruning outlier edges
  • reference_node is the node id that is considered to be the global space

print("Optimizing PoseGraph ...")
option = o3d.pipelines.registration.GlobalOptimizationOption(
    max_correspondence_distance=max_correspondence_distance_fine,
    edge_prune_threshold=0.25,
    reference_node=0)
with o3d.utility.VerbosityContextManager(
        o3d.utility.VerbosityLevel.Debug) as cm:
    o3d.pipelines.registration.global_optimization(
        pose_graph,
        o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt(),
        o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria(),
        option)

The transformed point clouds are listed and visualized with draw_geometries.

print("Transform points and display")
for point_id in range(len(pcds_down)):
    print(pose_graph.nodes[point_id].pose)
    pcds_down[point_id].transform(pose_graph.nodes[point_id].pose)
o3d.visualization.draw_geometries(pcds_down,
                                  zoom=0.3412,
                                  front=[0.4257, -0.2125, -0.8795],
                                  lookat=[2.6172, 2.0475, 1.532],
                                  up=[-0.0694, -0.9768, 0.2024])

Merging multiple point cloud models:

pcds = load_point_clouds(voxel_size)
pcd_combined = o3d.geometry.PointCloud()
for point_id in range(len(pcds)):
    pcds[point_id].transform(pose_graph.nodes[point_id].pose)
    pcd_combined += pcds[point_id]
pcd_combined_down = pcd_combined.voxel_down_sample(voxel_size=voxel_size)
o3d.io.write_point_cloud("multiway_registration.pcd", pcd_combined_down)
o3d.visualization.draw_geometries([pcd_combined_down],
                                  zoom=0.3412,
                                  front=[0.4257, -0.2125, -0.8795],
                                  lookat=[2.6172, 2.0475, 1.532],
                                  up=[-0.0694, -0.9768, 0.2024])

5. Voxelization

Voxelization converts the geometric representation of an object into the closest voxel representation, producing a volumetric dataset that contains not only the model's surface information but can also describe its interior properties. Voxels are to 3D space roughly what pixels are to 2D images, extended from 2D points to 3D cubic cells, and voxel-based 3D models have many applications.

5.1. Voxelizing the Model Surface

This step is fairly straightforward. First compute the model's AABB (axis-aligned bounding box), then partition it according to the spatial resolution into a list of voxels, each of size (X/N) * (Y/N) * (Z/N). Then iterate over the polygons or triangles that make up the 3D model, compute the bounding box of each primitive, and use AABB intersection tests to find the voxels each primitive can affect; these voxels become the candidates for further testing. For a precise decision, a triangle-AABB intersection test determines which voxels the primitives actually touch, and those voxels are marked as non-empty. This completes the voxelization of the 3D model's surface.

5.2. Voxelizing the Model Interior

Once the surface has been voxelized, we have a voxel "shell" of the model; the next operation is to voxelize its interior. A simple method is used here. First build an octree over the 3D model, used mainly for intersection tests against the primitive faces. Then, for every empty voxel inside the model's AABB, cast two rays from its center in opposite, axis-aligned directions. Use the octree to find where each ray intersects the 3D model, together with the normal at the intersection point and the distance to it; the relationship between the two normals then determines whether the current voxel lies inside or outside the 3D model. Applying this operation to every empty voxel completes the voxelization of the model's interior.

Applying this test to every empty voxel is slow, however, so a scanning approach can be used to accelerate it. Once a voxel has been determined to be inside the model, the ray directions and the distances to the intersection points can be used to scan the voxels adjacent to the current one, marking their state without further tests; this considerably speeds up the voxelization of the model's interior.
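
For reference, newer Open3D releases (assumption: >= 0.13, via the tensor API) ship a ray-casting based occupancy test that realizes the same inside/outside idea. A hedged sketch, assuming mesh is a legacy TriangleMesh and the query points are illustrative:

import numpy as np
import open3d as o3d

scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh))
queries = o3d.core.Tensor(np.zeros((1, 3), dtype=np.float32))  # (n, 3) query points
occupancy = scene.compute_occupancy(queries)  # 1.0 = inside, 0.0 = outside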

5.3. API

wechat: Open3D中文教程-體素化

Open3D provides the function create_from_triangle_mesh to create a voxel grid from a triangle mesh. It returns a voxel grid in which every voxel that intersects a triangle is set to 1 and all other voxels are set to 0. The parameter voxel_size defines the resolution of the voxel grid.

mesh = o3dtut.get_bunny_mesh()
# fit to unit cube
mesh.scale(1 / np.max(mesh.get_max_bound() - mesh.get_min_bound()),
          center=mesh.get_center())
o3d.visualization.draw_geometries([mesh])

print('voxelization')
voxel_grid = o3d.geometry.VoxelGrid.create_from_triangle_mesh(mesh,
                                                             voxel_size=0.05)
o3d.visualization.draw_geometries([voxel_grid])

A voxel grid can also be created from a point cloud with the create_from_point_cloud method. A voxel is occupied if at least one point of the point cloud lies within it. The color of a voxel is the average of all the points inside it. The parameter voxel_size defines the resolution of the voxel grid.

N = 2000
pcd = o3dtut.get_armadillo_mesh().sample_points_poisson_disk(N)
# fit to unit cube
pcd.scale(1 / np.max(pcd.get_max_bound() - pcd.get_min_bound()),
         center=pcd.get_center())
pcd.colors = o3d.utility.Vector3dVector(np.random.uniform(0, 1, size=(N, 3)))
o3d.visualization.draw_geometries([pcd])

print('voxelization')
voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd,
                                                           voxel_size=0.05)
o3d.visualization.draw_geometries([voxel_grid])

5.4. Checking Whether Points Are Inside Voxels

A voxel grid can also be used to test whether points fall within an occupied voxel. The function check_if_included takes an (n, 3) array as input and outputs an array of bool.

queries = np.asarray(pcd.points)
output = voxel_grid.check_if_included(o3d.utility.Vector3dVector(queries))
print(output[:10])
[True, True, True, True, True, True, True, True, True, True]

5.5. Voxel Carving

The functions create_from_point_cloud and create_from_triangle_mesh create occupied voxels only on the surface of the geometry. It is, however, possible to carve a voxel grid from a number of depth maps or silhouettes. Open3D provides the functions carve_depth_map and carve_silhouette for voxel carving.

The code below shows how this is used: first, depth maps are rendered from the geometry; then a dense voxel grid is carved with those depth maps. The result is a voxel grid filled according to the given shape.

def xyz_spherical(xyz):
    x = xyz[0]
    y = xyz[1]
    z = xyz[2]
    r = np.sqrt(x * x + y * y + z * z)
    r_x = np.arccos(y / r)
    r_y = np.arctan2(z, x)
    return [r, r_x, r_y]

def get_rotation_matrix(r_x, r_y):
    rot_x = np.asarray([[1, 0, 0], [0, np.cos(r_x), -np.sin(r_x)],
                        [0, np.sin(r_x), np.cos(r_x)]])
    rot_y = np.asarray([[np.cos(r_y), 0, np.sin(r_y)], [0, 1, 0],
                        [-np.sin(r_y), 0, np.cos(r_y)]])
    return rot_y.dot(rot_x)

def get_extrinsic(xyz):
    rvec = xyz_spherical(xyz)
    r = get_rotation_matrix(rvec[1], rvec[2])
    t = np.asarray([0, 0, 2]).transpose()
    trans = np.eye(4)
    trans[:3, :3] = r
    trans[:3, 3] = t
    return trans

def preprocess(model):
    min_bound = model.get_min_bound()
    max_bound = model.get_max_bound()
    center = min_bound + (max_bound - min_bound) / 2.0
    scale = np.linalg.norm(max_bound - min_bound) / 2.0
    vertices = np.asarray(model.vertices)
    vertices -= center
    model.vertices = o3d.utility.Vector3dVector(vertices / scale)
    return model

def voxel_carving(mesh,
                  output_filename,
                  camera_path,
                  cubic_size,
                  voxel_resolution,
                  w=300,
                  h=300,
                  use_depth=True,
                  surface_method='pointcloud'):
    mesh.compute_vertex_normals()
    camera_sphere = o3d.io.read_triangle_mesh(camera_path)

    # setup dense voxel grid
    voxel_carving = o3d.geometry.VoxelGrid.create_dense(
        width=cubic_size,
        height=cubic_size,
        depth=cubic_size,
        voxel_size=cubic_size / voxel_resolution,
        origin=[-cubic_size / 2.0, -cubic_size / 2.0, -cubic_size / 2.0],
        color=[1.0, 0.7, 0.0])

    # rescale geometry
    camera_sphere = preprocess(camera_sphere)
    mesh = preprocess(mesh)

    # setup visualizer to render depthmaps
    vis = o3d.visualization.Visualizer()
    vis.create_window(width=w, height=h, visible=False)
    vis.add_geometry(mesh)
    vis.get_render_option().mesh_show_back_face = True
    ctr = vis.get_view_control()
    param = ctr.convert_to_pinhole_camera_parameters()

    # carve voxel grid
    pcd_agg = o3d.geometry.PointCloud()
    centers_pts = np.zeros((len(camera_sphere.vertices), 3))
    for cid, xyz in enumerate(camera_sphere.vertices):
        # get new camera pose
        trans = get_extrinsic(xyz)
        param.extrinsic = trans
        c = np.linalg.inv(trans).dot(np.asarray([0, 0, 0, 1]).transpose())
        centers_pts[cid, :] = c[:3]
        ctr.convert_from_pinhole_camera_parameters(param)

        # capture depth image and make a point cloud
        vis.poll_events()
        vis.update_renderer()
        depth = vis.capture_depth_float_buffer(False)
        pcd_agg += o3d.geometry.PointCloud.create_from_depth_image(
            o3d.geometry.Image(depth),
            param.intrinsic,
            param.extrinsic,
            depth_scale=1)

        # depth map carving method
        if use_depth:
            voxel_carving.carve_depth_map(o3d.geometry.Image(depth), param)
        else:
            voxel_carving.carve_silhouette(o3d.geometry.Image(depth), param)
        print("Carve view %03d/%03d" % (cid + 1, len(camera_sphere.vertices)))
    vis.destroy_window()

    # add voxel grid surface
    print('Surface voxel grid from %s' % surface_method)
    if surface_method == 'pointcloud':
        voxel_surface = o3d.geometry.VoxelGrid.create_from_point_cloud_within_bounds(
            pcd_agg,
            voxel_size=cubic_size / voxel_resolution,
            min_bound=(-cubic_size / 2, -cubic_size / 2, -cubic_size / 2),
            max_bound=(cubic_size / 2, cubic_size / 2, cubic_size / 2))
    elif surface_method == 'mesh':
        voxel_surface = o3d.geometry.VoxelGrid.create_from_triangle_mesh_within_bounds(
            mesh,
            voxel_size=cubic_size / voxel_resolution,
            min_bound=(-cubic_size / 2, -cubic_size / 2, -cubic_size / 2),
            max_bound=(cubic_size / 2, cubic_size / 2, cubic_size / 2))
    else:
        raise Exception('invalid surface method')
    voxel_carving_surface = voxel_surface + voxel_carving

    return voxel_carving_surface, voxel_carving, voxel_surface

mesh = o3dtut.get_armadillo_mesh()

output_filename = os.path.abspath("../../test_data/voxelized.ply")
camera_path = os.path.abspath("../../test_data/sphere.ply")
visualization = True
cubic_size = 2.0
voxel_resolution = 128.0

voxel_grid, voxel_carving, voxel_surface = voxel_carving(
    mesh, output_filename, camera_path, cubic_size, voxel_resolution)

Carving results:

print("surface voxels")
print(voxel_surface)
o3d.visualization.draw_geometries([voxel_surface])

print("carved voxels")
print(voxel_carving)
o3d.visualization.draw_geometries([voxel_carving])

print("combined voxels (carved + surface)")
print(voxel_grid)
o3d.visualization.draw_geometries([voxel_grid])

6. Data Processing

Before formal registration we still need to apply one more processing step to the point clouds: outlier removal. Most point clouds captured by depth cameras contain noise and non-existent points, largely due to manufacturing tolerances and the camera's inherent sensor noise. We do not want these erroneous points included during registration, so to improve both the efficiency and the accuracy of registration we remove them first.

o3d.geometry.radius_outlier_removal() decides whether a point is an outlier using a sphere. It takes two parameters: nb_points and radius. For every point in the cloud it draws a sphere of radius radius; if that sphere contains fewer than nb_points other points, the point is classified as an outlier and removed. (In recent Open3D versions this has become the remove_radius_outlier method shown in section 2.2.)

# color the two point clouds differently
source.paint_uniform_color([1, 0.706, 0])    # source in yellow
target.paint_uniform_color([0, 0.651, 0.929])  # target in blue

# run radius outlier removal on each point cloud
processed_source, outlier_index = o3d.geometry.radius_outlier_removal(source,
                                              nb_points=16,
                                              radius=0.5)
processed_target, outlier_index = o3d.geometry.radius_outlier_removal(target,
                                              nb_points=16,
                                              radius=0.5)
o3d.visualization.draw_geometries([processed_source,processed_target])

6.1. Downsampling

Voxel downsampling maps a point cloud input to a point cloud output; normals and colors, if present, are averaged.

voxel_down_sample(self, voxel_size)

A downsampling function like geometry.PointCloud.voxel_down_sample, but it also records the point indices prior to downsampling:

voxel_down_sample_and_trace(self,
    voxel_size,
    min_bound,  # minimum coordinate of the voxel boundary
    max_bound,  # maximum coordinate of the voxel boundary
    approximate_class=False) ->
        tuple(open3d.geometry.PointCloud, numpy.ndarray[int32[m, n]]),  # the downsampled sparse point cloud
        list(open3d.utility.IntVector)  # voxel (cube) ids

Minimum/maximum bounds of the geometry's coordinates:

get_min_bound()
get_max_bound()

Usage example:

min_bound = dense_pcd.get_min_bound() - voxel_size * 0.5  # minimum bound minus half a voxel (0.05 * 0.5)
max_bound = dense_pcd.get_max_bound() + voxel_size * 0.5
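
Putting these together, a minimal sketch of a traced downsample (hedged: the number of returned values varies across Open3D versions, so the result is unpacked by index):

result = dense_pcd.voxel_down_sample_and_trace(
    voxel_size, min_bound, max_bound, approximate_class=False)
pcd_down = result[0]  # the downsampled cloud; the remaining entries trace the original indices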

6.2. Point Normal Estimation

estimate_normals() computes a normal for every point. The function finds adjacent points and computes the principal axes of the adjacent points using covariance analysis. It takes an instance of the KDTreeSearchParamHybrid class as an argument. The two key parameters, radius = 0.1 and max_nn = 30, specify the search radius and the maximum number of nearest neighbors: it searches within a 10 cm radius and considers at most 30 neighbors, to save computation time.

print("Recompute the normal of the downsampled point cloud")
downpcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(
    radius=0.1, max_nn=30))
o3d.visualization.draw_geometries([downpcd])

Tip: the covariance analysis algorithm produces two opposite directions as normal candidates. Without knowing the global structure of the geometry, both can be correct. This is known as the normal orientation problem. Open3D tries to orient the normals so that they align with the original normals, if those exist; otherwise Open3D makes a random guess. Additional orientation functions, such as orient_normals_to_align_with_direction and orient_normals_towards_camera_location, need to be called if the orientation is a concern.
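
A hedged sketch of the orientation calls mentioned above (parameter values are illustrative):

import numpy as np
# orient all normals toward a hypothetical camera at the origin
downpcd.orient_normals_towards_camera_location(camera_location=np.array([0., 0., 0.]))
# or align them with a fixed reference direction
downpcd.orient_normals_to_align_with_direction(orientation_reference=np.array([0., 0., 1.]))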

Visualize the point cloud with draw_geometries():

  • Press n to show the point normals
  • The keys - and + can be used to control the length of the normals

7. KDTree

Open3D uses FLANN to build KDTrees for fast nearest-neighbor retrieval.

print("Testing kdtree in open3d ...")
print("Load a point cloud and paint it gray.")
pcd = o3d.io.read_point_cloud("../../TestData/Feature/cloud_bin_0.pcd")
pcd.paint_uniform_color([0.5, 0.5, 0.5])
pcd_tree = o3d.geometry.KDTreeFlann(pcd)

This script reads a point cloud and builds a KDTree. This is a preprocessing step for the nearest-neighbor queries below.

print("Paint the 1500th point red.")
pcd.colors[1500] = [1, 0, 0]

We pick the 1500th point as the anchor point and paint it red.

print("Find its 200 nearest neighbors, paint blue.")
[k, idx, _] = pcd_tree.search_knn_vector_3d(pcd.points[1500], 200)
np.asarray(pcd.colors)[idx[1:], :] = [0, 0, 1]

The function search_knn_vector_3d() returns a list of indices of the anchor point's k nearest neighbors. These neighboring points are painted blue. Note that we convert pcd.colors to a numpy array to batch-access the point colors, and broadcast blue [0, 0, 1] to all the selected points. We skip the first index because it is the anchor point itself.

print("Find its neighbors with distance less than 0.2, paint green.")
[k, idx, _] = pcd_tree.search_radius_vector_3d(pcd.points[1500], 0.2)
np.asarray(pcd.colors)[idx[1:], :] = [0, 1, 0]

Similarly, search_radius_vector_3d can be used to query all points whose distance to the anchor point is smaller than a given radius. We paint these points green.

print("Visualize the point cloud.")
o3d.visualization.draw_geometries([pcd])
print("")

8. Color Mapping

wechat: 點雲PCL

Color mapping applies colors to geometry reconstructed from a depth camera. Because the color frames and the depth frames are not necessarily perfectly aligned, texture mapping using the color images produces a blurry color map. Open3D provides a color map optimization algorithm based on [Zhou2014].

The code below reads pairs of color and depth images and creates an rgbd_image. Note that the convert_rgb_to_intensity flag is set to False in order to preserve the 8-bit color channels instead of using a single-channel float image.

import os
import re

def sorted_alphanum(file_list_ordered):
    convert = lambda text: int(text) if text.isdigit() else text
    alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
    return sorted(file_list_ordered, key=alphanum_key)


def get_file_list(path, extension=None):
    if extension is None:
        file_list = [path + f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
    else:
        file_list = [
            path + f
            for f in os.listdir(path)
            if os.path.isfile(os.path.join(path, f)) and os.path.splitext(f)[1] == extension
        ]
    file_list = sorted_alphanum(file_list)
    return file_list

#####################################################################

path = o3dtut.download_fountain_dataset()
debug_mode = False  # whether to visualize the RGBD images

rgbd_images = []
depth_image_path = get_file_list(os.path.join(path, "depth/"),
                                 extension=".png")
color_image_path = get_file_list(os.path.join(path, "image/"),
                                 extension=".jpg")
assert (len(depth_image_path) == len(color_image_path))
for i in range(len(depth_image_path)):
    depth = o3d.io.read_image(os.path.join(depth_image_path[i]))
    color = o3d.io.read_image(os.path.join(color_image_path[i]))
    rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, convert_rgb_to_intensity=False)
    if debug_mode:
        pcd = o3d.geometry.PointCloud.create_from_rgbd_image(
            rgbd_image,
            o3d.camera.PinholeCameraIntrinsic(
                o3d.camera.PinholeCameraIntrinsicParameters.
                PrimeSenseDefault))
        o3d.visualization.draw_geometries([pcd])
    rgbd_images.append(rgbd_image)

The code below reads the camera trajectory and the mesh:

camera = o3d.io.read_pinhole_camera_trajectory(
    os.path.join(path, "scene/key.log"))
mesh = o3d.io.read_triangle_mesh(
    os.path.join(path, "scene", "integrated.ply"))

To show that unoptimized camera poses are not good enough for color mapping, the code below deliberately sets the number of iterations to 0, i.e. the mapping is not optimized. color_map_optimization paints the mesh using the corresponding camera poses and RGBD images. Without optimization, the texture is visibly blurry.

# Before full optimization, let's just visualize texture map
# with given geometry, RGBD images, and camera poses.
option = o3d.color_map.ColorMapOptimizationOption()
option.maximum_iteration = 0
with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug) as cm:
    o3d.color_map.color_map_optimization(mesh, rgbd_images, camera, option)
o3d.visualization.draw_geometries([mesh])

Note: the algorithm precomputes, for every point, all neighboring points within an epsilon radius. This can require a large amount of memory if the chosen epsilon is too large.

8.1. Rigid Optimization

The next step optimizes the camera parameters to obtain a sharp color map. The code below sets the maximum number of iterations to 300.

# Optimize texture and save the mesh as texture_mapped.ply
# This is implementation of following paper
# Q.-Y. Zhou and V. Koltun,
# Color Map Optimization for 3D Reconstruction with Consumer Depth Cameras,
# SIGGRAPH 2014
option.maximum_iteration = 100 if is_ci else 300  # is_ci comes from the upstream example script; locally this is just 300
option.non_rigid_camera_coordinate = False
with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug) as cm:
    o3d.color_map.color_map_optimization(mesh, rgbd_images, camera, option)
o3d.visualization.draw_geometries([mesh])

8.2. Non-Rigid Optimization

For a better mapping quality, non-rigid optimization is needed. To enable it, simply set option.non_rigid_camera_coordinate to True before calling color_map_optimization(). In addition to the 6D camera poses, non-rigid optimization also considers local image warping represented by anchor points. This approach is more flexible and yields a higher-quality color mapping; the residual will also be smaller than in the rigid-optimization case.

option.maximum_iteration = 100 if is_ci else 300
option.non_rigid_camera_coordinate = True
with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug) as cm:
    o3d.color_map.color_map_optimization(mesh, rgbd_images, camera, option)
o3d.visualization.draw_geometries([mesh])

9. Drawing a 3D Box:

import numpy as np
import open3d as o3d

def custom_draw_geometry(pcd, linesets):
    vis = o3d.visualization.Visualizer()
    vis.create_window()
    vis.add_geometry(pcd)
    vis.add_geometry(linesets)
    render_option = vis.get_render_option()
    render_option.point_size = 4
    render_option.background_color = np.asarray([0, 0, 0])
    vis.run()
    vis.destroy_window()

pc = np.ones((100, 3), dtype=np.float32)  # point cloud
# eight corners of the box (illustrative unit-cube coordinates)
points_box = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                       [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]])
lines_box = np.array([[0, 1], [1, 2], [0, 3], [2, 3], [4, 5], [4, 7], [5, 6], [6, 7], [0, 4], [1, 5], [2, 6], [3, 7]])
colors = np.array([[0, 1, 0] for j in range(len(lines_box))])
line_set = o3d.geometry.LineSet()
line_set.points = o3d.utility.Vector3dVector(points_box)
line_set.lines = o3d.utility.Vector2iVector(lines_box)
line_set.colors = o3d.utility.Vector3dVector(colors)
point_cloud = o3d.geometry.PointCloud()
point_cloud.points = o3d.utility.Vector3dVector(pc)
# generateBox(point_cloud)
custom_draw_geometry(point_cloud, line_set)

10. Visualizing a Continuous Stream of Point Cloud Frames:

import os
import numpy as np
import open3d as o3d

files = os.listdir("0806_reconstruction/")

vis = o3d.visualization.Visualizer()
vis.create_window()
pointcloud = o3d.geometry.PointCloud()
to_reset = True
vis.add_geometry(pointcloud)
for f in files:
    pcd = o3d.io.read_point_cloud("0806_reconstruction/" + f)
    pcd = np.asarray(pcd.points).reshape((-1, 3))
    pointcloud.points = o3d.utility.Vector3dVector(pcd)
    vis.update_geometry()
    # Note: for Open3D versions after 0.8.0, this line should instead be:
    # vis.update_geometry(pointcloud)
    if to_reset:
        vis.reset_view_point(True)
        to_reset = False
    vis.poll_events()
    vis.update_renderer()

