Refine registration

Input arguments

Run this script with python run_system.py [config] --refine. In [config], ["path_dataset"] should contain a subfolder fragments, which stores the fragments as .ply files and their pose graphs as .json files.
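The expected input layout can be sketched with a short check. The file names below are hypothetical (the actual pose-graph name comes from the config's template entries); only the fragments subfolder and the .ply/.json split are taken from the text above.

```python
import os
import tempfile
from os.path import join

def find_refine_inputs(path_dataset):
    # The refinement stage expects <path_dataset>/fragments to contain the
    # fragment point clouds (.ply) and pose graphs (.json) written by the
    # earlier pipeline stages.
    fragment_dir = join(path_dataset, "fragments")
    names = sorted(os.listdir(fragment_dir))
    ply_files = [n for n in names if n.endswith(".ply")]
    json_files = [n for n in names if n.endswith(".json")]
    return ply_files, json_files

# Hypothetical dataset layout, created here only for illustration.
root = tempfile.mkdtemp()
os.makedirs(join(root, "fragments"))
for name in ("fragment_000.ply", "fragment_001.ply",
             "global_registration_optimized.json"):
    open(join(root, "fragments", name), "w").close()

plys, jsons = find_refine_inputs(root)
print(plys)
print(jsons)
```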

The main function runs local_refinement and optimize_posegraph_for_refined_scene. The first function performs pairwise registration on the pairs detected by Register fragments. The second function performs multiway registration.

Fine-grained registration

def multiscale_icp(source,
                   target,
                   voxel_size,
                   max_iter,
                   config,
                   init_transformation=np.identity(4)):
    current_transformation = init_transformation
    for i, scale in enumerate(range(len(max_iter))):  # multi-scale approach
        iter = max_iter[scale]
        distance_threshold = config["voxel_size"] * 1.4
        print("voxel_size {}".format(voxel_size[scale]))
        source_down = source.voxel_down_sample(voxel_size[scale])
        target_down = target.voxel_down_sample(voxel_size[scale])
        if config["icp_method"] == "point_to_point":
            result_icp = o3d.pipelines.registration.registration_icp(
                source_down, target_down, distance_threshold,
                current_transformation,
                o3d.pipelines.registration.TransformationEstimationPointToPoint(
                ),
                o3d.pipelines.registration.ICPConvergenceCriteria(
                    max_iteration=iter))
        else:
            source_down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size[scale] *
                                                     2.0,
                                                     max_nn=30))
            target_down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size[scale] *
                                                     2.0,
                                                     max_nn=30))
            if config["icp_method"] == "point_to_plane":
                result_icp = o3d.pipelines.registration.registration_icp(
                    source_down, target_down, distance_threshold,
                    current_transformation,
                    o3d.pipelines.registration.
                    TransformationEstimationPointToPlane(),
                    o3d.pipelines.registration.ICPConvergenceCriteria(
                        max_iteration=iter))
            if config["icp_method"] == "color":
                # Colored ICP is sensitive to threshold.
                # Fallback to preset distance threshold that works better.
                # TODO: make it adjustable in the upgraded system.
                result_icp = o3d.pipelines.registration.registration_colored_icp(
                    source_down, target_down, voxel_size[scale],
                    current_transformation,
                    o3d.pipelines.registration.
                    TransformationEstimationForColoredICP(),
                    o3d.pipelines.registration.ICPConvergenceCriteria(
                        relative_fitness=1e-6,
                        relative_rmse=1e-6,
                        max_iteration=iter))
            if config["icp_method"] == "generalized":
                result_icp = o3d.pipelines.registration.registration_generalized_icp(
                    source_down, target_down, distance_threshold,
                    current_transformation,
                    o3d.pipelines.registration.
                    TransformationEstimationForGeneralizedICP(),
                    o3d.pipelines.registration.ICPConvergenceCriteria(
                        relative_fitness=1e-6,
                        relative_rmse=1e-6,
                        max_iteration=iter))
        current_transformation = result_icp.transformation
        if i == len(max_iter) - 1:
            information_matrix = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
                source_down, target_down, voxel_size[scale] * 1.4,
                result_icp.transformation)

    if config["debug_mode"]:
        draw_registration_result_original_color(source, target,
                                                result_icp.transformation)
    return (result_icp.transformation, information_matrix)


def local_refinement(source, target, transformation_init, config):
    voxel_size = config["voxel_size"]
    (transformation, information) = \
            multiscale_icp(
            source, target,
            [voxel_size, voxel_size/2.0, voxel_size/4.0], [50, 30, 14],
            config, transformation_init)

    return (transformation, information)


def register_point_cloud_pair(ply_file_names, s, t, transformation_init,
                              config):
    print("reading %s ..." % ply_file_names[s])
    source = o3d.io.read_point_cloud(ply_file_names[s])
    print("reading %s ..." % ply_file_names[t])
    target = o3d.io.read_point_cloud(ply_file_names[t])
    (transformation, information) = \
            local_refinement(source, target, transformation_init, config)
    return (transformation, information)
Several ICP variants are offered for the fine-grained registration, selected by config["icp_method"]: point_to_point, point_to_plane, color, and generalized. The color option is recommended since it uses color information to prevent drift. See [Park2017] for details.
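Independently of Open3D, the estimation step inside each ICP iteration can be illustrated in plain NumPy. The sketch below solves the least-squares rigid motion for known correspondences (the Kabsch algorithm), which is the subproblem TransformationEstimationPointToPoint solves; real ICP additionally re-searches correspondences every iteration. All names here are illustrative.

```python
import numpy as np

def best_rigid_transform(source, target):
    # Least-squares rigid motion (Kabsch) mapping source points onto target
    # points, assuming one-to-one correspondences between the rows.
    src_mean = source.mean(axis=0)
    tgt_mean = target.mean(axis=0)
    H = (source - src_mean).T @ (target - tgt_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_mean - R @ src_mean
    T = np.eye(4)                           # pack into a 4x4 transform
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Toy example: recover a known rotation about z plus a translation.
rng = np.random.default_rng(7)
source = rng.standard_normal((200, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.3, -0.1, 0.8])
target = source @ R_true.T + t_true

T = best_rigid_transform(source, target)
print(np.allclose(T[:3, :3], R_true))
print(np.allclose(T[:3, 3], t_true))
```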

Multiway registration

def update_posegraph_for_scene(s, t, transformation, information, odometry,
                               pose_graph):
    if t == s + 1:  # odometry case
        odometry = np.dot(transformation, odometry)
        odometry_inv = np.linalg.inv(odometry)
        pose_graph.nodes.append(
            o3d.pipelines.registration.PoseGraphNode(odometry_inv))
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=False))
    else:  # loop closure case
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=True))
    return (odometry, pose_graph)

This script reuses the technique demonstrated in /tutorial/pipelines/multiway_registration.ipynb. The function update_posegraph_for_scene builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms the geometry into the global space.

Once a pose graph is built, the function optimize_posegraph_for_refined_scene is called for multiway registration.
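The bookkeeping in the odometry case of update_posegraph_for_scene can be sketched in plain NumPy. Here make_transform is a hypothetical stand-in for a pairwise registration result; the accumulation and inversion mirror the code above.

```python
import numpy as np

def make_transform(theta, tx, ty):
    # Hypothetical helper: a 4x4 rigid transform (rotation about z plus a
    # translation), standing in for a pairwise ICP result.
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[0, 0], M[0, 1] = c, -s
    M[1, 0], M[1, 1] = s, c
    M[0, 3], M[1, 3] = tx, ty
    return M

# Pairwise transforms for the odometry edges (0,1) and (1,2), i.e. t == s + 1.
pairwise = [make_transform(0.10, 1.0, 0.0), make_transform(-0.05, 0.8, 0.2)]

# Mirror the odometry branch: accumulate the chain and store each node pose
# as the inverse of the accumulated odometry, which maps that fragment into
# the global (first-fragment) frame.
odometry = np.eye(4)
node_poses = [np.linalg.inv(odometry)]
for transformation in pairwise:
    odometry = np.dot(transformation, odometry)
    node_poses.append(np.linalg.inv(odometry))

print(len(node_poses))  # one node pose per fragment
```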


Main registration loop

The function make_posegraph_for_refined_scene below calls all the functions introduced above.

    if config["python_multi_threading"] == True:
        results = Parallel(n_jobs=MAX_THREAD)(
            delayed(register_point_cloud_pair)(
                ply_file_names, matching_results[r].s, matching_results[r].t,
                matching_results[r].transformation, config)
            for r in matching_results)
        for i, r in enumerate(matching_results):
            matching_results[r].transformation = results[i][0]
            matching_results[r].information = results[i][1]
    else:
        for r in matching_results:
            (matching_results[r].transformation,
                    matching_results[r].information) = \
                    register_point_cloud_pair(ply_file_names,
                    matching_results[r].s, matching_results[r].t,
                    matching_results[r].transformation, config)

    pose_graph_new = o3d.pipelines.registration.PoseGraph()
    odometry = np.identity(4)
    pose_graph_new.nodes.append(
        o3d.pipelines.registration.PoseGraphNode(odometry))
    for r in matching_results:
        (odometry, pose_graph_new) = update_posegraph_for_scene(
            matching_results[r].s, matching_results[r].t,
            matching_results[r].transformation, matching_results[r].information,
            odometry, pose_graph_new)
    print(pose_graph_new)
    o3d.io.write_pose_graph(
        join(config["path_dataset"], config["template_refined_posegraph"]),
        pose_graph_new)

def run(config):
    print("refine rough registration of fragments.")
    o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
    ply_file_names = get_file_list(
        join(config["path_dataset"], config["folder_fragment"]), ".ply")
    make_posegraph_for_refined_scene(ply_file_names, config)
    optimize_posegraph_for_refined_scene(config["path_dataset"], config)

    path_dataset = config['path_dataset']
    n_fragments = len(ply_file_names)

    # Save to trajectory
    poses = []
    pose_graph_fragment = o3d.io.read_pose_graph(
        join(path_dataset, config["template_refined_posegraph_optimized"]))
    for fragment_id in range(len(pose_graph_fragment.nodes)):
        pose_graph_rgbd = o3d.io.read_pose_graph(
            join(path_dataset,
                 config["template_fragment_posegraph_optimized"] % fragment_id))

The main workflow is: pairwise local refinement -> multiway registration.
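Under an assumed voxel size of 0.05 (real runs read this value from the config file), the coarse-to-fine schedule that local_refinement passes to multiscale_icp can be sketched as:

```python
# Assumed voxel size; real runs take it from config["voxel_size"].
voxel_size = 0.05

# Each scale halves the voxel size and runs fewer ICP iterations,
# matching the [50, 30, 14] schedule in local_refinement above.
scales = [voxel_size, voxel_size / 2.0, voxel_size / 4.0]
max_iter = [50, 30, 14]
distance_threshold = voxel_size * 1.4  # correspondence threshold used by ICP

for v, n in zip(scales, max_iter):
    print("voxel {:.4f} -> up to {} iterations".format(v, n))
```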

Results

The pose graph optimization outputs the following messages:

[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial     ] residual : 1.208286e+04, lambda : 1.706306e+01
[Iteration 00] residual : 2.410383e+03, valid edges : 22, time : 0.000 sec.
[Iteration 01] residual : 8.127856e+01, valid edges : 22, time : 0.000 sec.
[Iteration 02] residual : 8.031355e+01, valid edges : 22, time : 0.000 sec.
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.001 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial     ] residual : 8.031355e+01, lambda : 1.716826e+01
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.000 sec.
CompensateReferencePoseGraphNode : reference : 0

In this example there are 14 fragments, giving a pose graph with 14 nodes and 35 matching edges. During optimization, edges flagged as false positives by the line process are pruned, and pose graph optimization then runs a second time on the pruned graph to achieve tight alignment.