Register fragments

Once the fragments of the scene are created, the next step is to align them in a global space.

Input arguments

This script runs with python run_system.py [config] --register. In [config], ["path_dataset"] should have a subfolder fragments, which stores the fragments in .ply files and a pose graph in a .json file.
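
For reference, the keys this script reads from the config are sketched below with illustrative values; the actual defaults are filled in by the reconstruction system's configuration handling.

# Minimal sketch of the config entries used in this step (illustrative values).
config = {
    "path_dataset": "dataset/realsense/",  # contains the fragments/ subfolder
    "voxel_size": 0.05,                    # in meters
    "global_registration": "ransac",       # or "fgr"
    "python_multi_threading": True,
    "debug_mode": False,
}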

The main function runs make_posegraph_for_scene and optimize_posegraph_for_scene. The first function performs pairwise registration. The second function performs multiway registration.

Preprocess point cloud

# examples/python/reconstruction_system/register_fragments.py
def preprocess_point_cloud(pcd, config):
    voxel_size = config["voxel_size"]
    pcd_down = pcd.voxel_down_sample(voxel_size)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2.0,
                                             max_nn=30))
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5.0,
                                             max_nn=100))
    return (pcd_down, pcd_fpfh)

This function downsamples a point cloud to make it sparser and more regularly distributed. Normals and FPFH features are precomputed. See /tutorial/geometry/pointcloud.ipynb#voxel-downsampling, /tutorial/geometry/pointcloud.ipynb#vertex-normal-estimation, and /tutorial/pipelines/global_registration.ipynb#extract-geometric-feature for more details.
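
As a hypothetical standalone use (the fragment file name is illustrative), preprocessing a single fragment looks like this:

import open3d as o3d

# Load one fragment and preprocess it (hypothetical file name).
pcd = o3d.io.read_point_cloud("dataset/realsense/fragments/fragment_000.ply")
pcd_down, pcd_fpfh = preprocess_point_cloud(pcd, config)
print(pcd_down)              # downsampled point cloud with estimated normals
print(pcd_fpfh.data.shape)   # (33, N): one 33-dimensional FPFH feature per point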

Compute initial registration

# examples/python/reconstruction_system/register_fragments.py
def compute_initial_registration(s, t, source_down, target_down, source_fpfh,
                                 target_fpfh, path_dataset, config):

    if t == s + 1:  # odometry case
        print("Using RGBD odometry")
        pose_graph_frag = o3d.io.read_pose_graph(
            join(path_dataset,
                 config["template_fragment_posegraph_optimized"] % s))
        n_nodes = len(pose_graph_frag.nodes)
        transformation_init = np.linalg.inv(pose_graph_frag.nodes[n_nodes -
                                                                  1].pose)
        (transformation, information) = \
                multiscale_icp(source_down, target_down,
                [config["voxel_size"]], [50], config, transformation_init)
    else:  # loop closure case
        (success, transformation,
         information) = register_point_cloud_fpfh(source_down, target_down,
                                                  source_fpfh, target_fpfh,
                                                  config)
        if not success:
            print("No reasonable solution. Skip this pair")
            return (False, np.identity(4), np.zeros((6, 6)))
    print(transformation)

    if config["debug_mode"]:
        draw_registration_result(source_down, target_down, transformation)
    return (True, transformation, information)

This function computes a rough alignment between two fragments. If the fragments are neighbors, the rough alignment is initialized from the aggregated RGBD odometry obtained in Make fragments and refined with a coarse multiscale ICP. Otherwise, register_point_cloud_fpfh is called to perform global registration. Note that global registration is less reliable according to [Choi2015].
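
When config["debug_mode"] is enabled, the intermediate alignment is visualized with draw_registration_result. That helper is defined elsewhere in the reconstruction system; a sketch in the spirit of the ICP tutorial's helper is:

import copy
import open3d as o3d

def draw_registration_result(source, target, transformation):
    # Color the two point clouds, apply the estimated transformation to the
    # source, and show both in a visualizer window.
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    o3d.visualization.draw_geometries([source_temp, target_temp])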

Pairwise global registration

# examples/python/reconstruction_system/register_fragments.py
def register_point_cloud_fpfh(source, target, source_fpfh, target_fpfh, config):
    distance_threshold = config["voxel_size"] * 1.4
    if config["global_registration"] == "fgr":
        result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
            source, target, source_fpfh, target_fpfh,
            o3d.pipelines.registration.FastGlobalRegistrationOption(
                maximum_correspondence_distance=distance_threshold))
    if config["global_registration"] == "ransac":
        result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            source, target, source_fpfh, target_fpfh, True, distance_threshold,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(
                False), 3,
            [
                o3d.pipelines.registration.
                CorrespondenceCheckerBasedOnEdgeLength(0.9),
                o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                    distance_threshold)
            ],
            o3d.pipelines.registration.RANSACConvergenceCriteria(
                1000000, 0.999))
    if (result.transformation.trace() == 4.0):
        return (False, np.identity(4), np.zeros((6, 6)))
    information = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
        source, target, distance_threshold, result.transformation)
    if information[5, 5] / min(len(source.points), len(target.points)) < 0.3:
        return (False, np.identity(4), np.zeros((6, 6)))
    return (True, result.transformation, information)

This function uses /tutorial/pipelines/global_registration.ipynb#RANSAC or /tutorial/pipelines/global_registration.ipynb#fast-global-registration for pairwise global registration. A result is rejected if the estimated transformation is the identity (no alignment was found) or if the information matrix indicates that too small a fraction (less than 30%) of the smaller fragment's points have correspondences within the distance threshold.

Multiway registration

# examples/python/reconstruction_system/register_fragments.py
def update_posegraph_for_scene(s, t, transformation, information, odometry,
                               pose_graph):
    if t == s + 1:  # odometry case
        odometry = np.dot(transformation, odometry)
        odometry_inv = np.linalg.inv(odometry)
        pose_graph.nodes.append(
            o3d.pipelines.registration.PoseGraphNode(odometry_inv))
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=False))
    else:  # loop closure case
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=True))
    return (odometry, pose_graph)

This script uses the technique demonstrated in /tutorial/pipelines/multiway_registration.ipynb. The function update_posegraph_for_scene builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms the geometry into the global space.
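
For intuition, once the pose graph has been optimized, each fragment could be placed in the global frame by applying its node pose. This is a hypothetical snippet; the later refinement and integration steps do this internally.

# Hypothetical: place every fragment in the global frame using its node pose.
pose_graph = o3d.io.read_pose_graph(
    join(config["path_dataset"],
         config["template_global_posegraph_optimized"]))
pcds = []
for i, ply_file_name in enumerate(ply_file_names):
    pcd = o3d.io.read_point_cloud(ply_file_name)
    pcd.transform(pose_graph.nodes[i].pose)
    pcds.append(pcd)
o3d.visualization.draw_geometries(pcds)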

Once a pose graph is built, the function optimize_posegraph_for_scene is called for multiway registration.

# examples/python/reconstruction_system/optimize_posegraph.py
def optimize_posegraph_for_scene(path_dataset, config):
    pose_graph_name = join(path_dataset, config["template_global_posegraph"])
    pose_graph_optimized_name = join(
        path_dataset, config["template_global_posegraph_optimized"])
    run_posegraph_optimization(pose_graph_name, pose_graph_optimized_name,
            max_correspondence_distance = config["voxel_size"] * 1.4,
            preference_loop_closure = \
            config["preference_loop_closure_registration"])
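
run_posegraph_optimization wraps Open3D's pose graph optimization. A sketch of what it does, following the multiway registration tutorial, is shown below; the actual implementation lives in optimize_posegraph.py, and the edge_prune_threshold value here is illustrative.

def run_posegraph_optimization(pose_graph_name, pose_graph_optimized_name,
                               max_correspondence_distance,
                               preference_loop_closure):
    # Read the pose graph, optimize it with Levenberg-Marquardt, and write
    # the optimized graph back to disk.
    o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
    pose_graph = o3d.io.read_pose_graph(pose_graph_name)
    method = o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt()
    criteria = o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria()
    option = o3d.pipelines.registration.GlobalOptimizationOption(
        max_correspondence_distance=max_correspondence_distance,
        edge_prune_threshold=0.25,
        preference_loop_closure=preference_loop_closure,
        reference_node=0)
    o3d.pipelines.registration.global_optimization(pose_graph, method, criteria,
                                                   option)
    o3d.io.write_pose_graph(pose_graph_optimized_name, pose_graph)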

Main registration loop

The function make_posegraph_for_scene below calls all the functions introduced above. The main workflow is: pairwise global registration -> multiway registration.

# examples/python/reconstruction_system/register_fragments.py
def make_posegraph_for_scene(ply_file_names, config):
    pose_graph = o3d.pipelines.registration.PoseGraph()
    odometry = np.identity(4)
    pose_graph.nodes.append(o3d.pipelines.registration.PoseGraphNode(odometry))

    n_files = len(ply_file_names)
    matching_results = {}
    for s in range(n_files):
        for t in range(s + 1, n_files):
            matching_results[s * n_files + t] = matching_result(s, t)

    if config["python_multi_threading"] == True:
        from joblib import Parallel, delayed
        import multiprocessing
        import subprocess
        MAX_THREAD = min(multiprocessing.cpu_count(),
                         max(len(matching_results), 1))
        results = Parallel(n_jobs=MAX_THREAD)(delayed(
            register_point_cloud_pair)(ply_file_names, matching_results[r].s,
                                       matching_results[r].t, config)
                                              for r in matching_results)
        for i, r in enumerate(matching_results):
            matching_results[r].success = results[i][0]
            matching_results[r].transformation = results[i][1]
            matching_results[r].information = results[i][2]
    else:
        for r in matching_results:
            (matching_results[r].success, matching_results[r].transformation,
                    matching_results[r].information) = \
                    register_point_cloud_pair(ply_file_names,
                    matching_results[r].s, matching_results[r].t, config)

    for r in matching_results:
        if matching_results[r].success:
            (odometry, pose_graph) = update_posegraph_for_scene(
                matching_results[r].s, matching_results[r].t,
                matching_results[r].transformation,
                matching_results[r].information, odometry, pose_graph)
    o3d.io.write_pose_graph(
        join(config["path_dataset"], config["template_global_posegraph"]),
        pose_graph)
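
Putting it together, the run function of this script roughly does the following. This is a hypothetical sketch; the actual entry point is python run_system.py [config] --register.

import glob
from os.path import join

# Collect the fragment point clouds, build the scene pose graph, and optimize it.
ply_file_names = sorted(
    glob.glob(join(config["path_dataset"], "fragments", "*.ply")))
make_posegraph_for_scene(ply_file_names, config)
optimize_posegraph_for_scene(config["path_dataset"], config)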

Results

The pose graph optimization outputs the following messages:

[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 42 edges.
Line process weight : 55.885667
[Initial     ] residual : 7.791139e+04, lambda : 1.205976e+00
[Iteration 00] residual : 6.094275e+02, valid edges : 22, time : 0.001 sec.
[Iteration 01] residual : 4.526879e+02, valid edges : 22, time : 0.000 sec.
[Iteration 02] residual : 4.515039e+02, valid edges : 22, time : 0.000 sec.
[Iteration 03] residual : 4.514832e+02, valid edges : 22, time : 0.000 sec.
[Iteration 04] residual : 4.514825e+02, valid edges : 22, time : 0.000 sec.
Current_residual - new_residual < 1.000000e-06 * current_residual
[GlobalOptimizationLM] total time : 0.003 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 60.762800
[Initial     ] residual : 6.336097e+01, lambda : 1.324043e+00
[Iteration 00] residual : 6.334147e+01, valid edges : 22, time : 0.000 sec.
[Iteration 01] residual : 6.334138e+01, valid edges : 22, time : 0.000 sec.
Current_residual - new_residual < 1.000000e-06 * current_residual
[GlobalOptimizationLM] total time : 0.001 sec.
CompensateReferencePoseGraphNode : reference : 0

In this run there are 14 fragments and 42 edges (matching pairs) among them. The optimization detects a number of edges as false positives; after they are pruned, pose graph optimization runs again on the remaining 35 edges to achieve tight alignment.