Make fragments
The first step of the scene reconstruction system is to create fragments from short RGBD sequences.
Input arguments
The script runs with python run_system.py [config] --make. In [config], ["path_dataset"] should have subfolders image and depth, which store the color images and depth images respectively. We assume the color images and the depth images are synchronized and registered. In [config], the optional argument ["path_intrinsic"] specifies the path to a json file that stores the camera intrinsic matrix (see /tutorial/pipelines/rgbd_odometry.ipynb#read-camera-intrinsic for details). If it is not given, the PrimeSense factory setting is used instead.
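If the sequence was captured with a custom camera, such an intrinsic json can be generated with Open3D itself. The snippet below is a minimal sketch; the file name intrinsic.json and the intrinsic values are placeholders to be replaced with the actual calibration.

# sketch: writing a camera intrinsic json that ["path_intrinsic"] can point to
import open3d as o3d

intrinsic = o3d.camera.PinholeCameraIntrinsic()
# replace with the calibrated values of your sensor
intrinsic.set_intrinsics(width=640, height=480, fx=525.0, fy=525.0,
                         cx=319.5, cy=239.5)
o3d.io.write_pinhole_camera_intrinsic("intrinsic.json", intrinsic)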
Register RGBD image pairs
# examples/python/reconstruction_system/make_fragments.py
def register_one_rgbd_pair(s, t, color_files, depth_files, intrinsic,
                           with_opencv, config):
    source_rgbd_image = read_rgbd_image(color_files[s], depth_files[s], True,
                                        config)
    target_rgbd_image = read_rgbd_image(color_files[t], depth_files[t], True,
                                        config)

    option = o3d.pipelines.odometry.OdometryOption()
    option.max_depth_diff = config["max_depth_diff"]
    if abs(s - t) != 1:
        if with_opencv:
            success_5pt, odo_init = pose_estimation(source_rgbd_image,
                                                    target_rgbd_image,
                                                    intrinsic, False)
            if success_5pt:
                [success, trans, info
                ] = o3d.pipelines.odometry.compute_rgbd_odometry(
                    source_rgbd_image, target_rgbd_image, intrinsic, odo_init,
                    o3d.pipelines.odometry.RGBDOdometryJacobianFromHybridTerm(),
                    option)
                return [success, trans, info]
        return [False, np.identity(4), np.identity(6)]
    else:
        odo_init = np.identity(4)
        [success, trans, info] = o3d.pipelines.odometry.compute_rgbd_odometry(
            source_rgbd_image, target_rgbd_image, intrinsic, odo_init,
            o3d.pipelines.odometry.RGBDOdometryJacobianFromHybridTerm(), option)
        return [success, trans, info]
The function reads a pair of RGBD images and registers the source_rgbd_image to the target_rgbd_image. The Open3D function compute_rgbd_odometry is called to align the RGBD images. For adjacent RGBD images, an identity matrix is used as the initialization. For non-adjacent RGBD images, wide baseline matching is used as the initialization. In particular, the function pose_estimation computes OpenCV ORB features to match sparse features over wide baseline images, then performs 5-point RANSAC to estimate a rough alignment, which is used as the initialization of compute_rgbd_odometry.
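For reference, the adjacent-frame case boils down to a direct call of compute_rgbd_odometry with an identity initialization. The sketch below is not part of the example script; the image paths are placeholders and the RGBD conversion uses Open3D defaults rather than the script's read_rgbd_image helper.

# sketch: aligning two adjacent RGBD frames directly
import numpy as np
import open3d as o3d

intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

def load_rgbd(color_path, depth_path):
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    # intensity conversion, as used when registering frames for odometry
    return o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=3.0, convert_rgb_to_intensity=True)

source = load_rgbd("image/000000.jpg", "depth/000000.png")  # placeholder paths
target = load_rgbd("image/000001.jpg", "depth/000001.png")

option = o3d.pipelines.odometry.OdometryOption()
[success, trans, info] = o3d.pipelines.odometry.compute_rgbd_odometry(
    source, target, intrinsic, np.identity(4),
    o3d.pipelines.odometry.RGBDOdometryJacobianFromHybridTerm(), option)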
Multiway registration
# examples/python/reconstruction_system/make_fragments.py
def make_posegraph_for_fragment(path_dataset, sid, eid, color_files,
                                depth_files, fragment_id, n_fragments,
                                intrinsic, with_opencv, config):
    o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Error)
    pose_graph = o3d.pipelines.registration.PoseGraph()
    trans_odometry = np.identity(4)
    pose_graph.nodes.append(
        o3d.pipelines.registration.PoseGraphNode(trans_odometry))
    for s in range(sid, eid):
        for t in range(s + 1, eid):
            # odometry
            if t == s + 1:
                print(
                    "Fragment %03d / %03d :: RGBD matching between frame : %d and %d"
                    % (fragment_id, n_fragments - 1, s, t))
                [success, trans,
                 info] = register_one_rgbd_pair(s, t, color_files, depth_files,
                                                intrinsic, with_opencv, config)
                trans_odometry = np.dot(trans, trans_odometry)
                trans_odometry_inv = np.linalg.inv(trans_odometry)
                pose_graph.nodes.append(
                    o3d.pipelines.registration.PoseGraphNode(
                        trans_odometry_inv))
                pose_graph.edges.append(
                    o3d.pipelines.registration.PoseGraphEdge(s - sid,
                                                             t - sid,
                                                             trans,
                                                             info,
                                                             uncertain=False))

            # keyframe loop closure
            if s % config['n_keyframes_per_n_frame'] == 0 \
                    and t % config['n_keyframes_per_n_frame'] == 0:
                print(
                    "Fragment %03d / %03d :: RGBD matching between frame : %d and %d"
                    % (fragment_id, n_fragments - 1, s, t))
                [success, trans,
                 info] = register_one_rgbd_pair(s, t, color_files, depth_files,
                                                intrinsic, with_opencv, config)
                if success:
                    pose_graph.edges.append(
                        o3d.pipelines.registration.PoseGraphEdge(
                            s - sid, t - sid, trans, info, uncertain=True))
    o3d.io.write_pose_graph(
        join(path_dataset, config["template_fragment_posegraph"] % fragment_id),
        pose_graph)
This script uses the technique demonstrated in /tutorial/pipelines/multiway_registration.ipynb. The function make_posegraph_for_fragment builds a pose graph for multiway registration of all RGBD images in this sequence. Each graph node represents an RGBD image and its pose, which transforms the geometry to the global fragment space. For efficiency, only key frames are used.
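The pose graph is written to disk at the end of make_posegraph_for_fragment, so it can be inspected independently. A small sketch; the file name below follows the default template_fragment_posegraph pattern and is only an example.

# sketch: loading and inspecting a fragment pose graph
import open3d as o3d

pose_graph = o3d.io.read_pose_graph("fragments/fragment_000.json")  # example path
print(len(pose_graph.nodes), "nodes,", len(pose_graph.edges), "edges")
print(pose_graph.nodes[0].pose)  # 4x4 pose of the first frame in fragment space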
Once a pose graph is created, multiway registration is performed by calling the function optimize_posegraph_for_fragment.
# examples/python/reconstruction_system/optimize_posegraph.py
def optimize_posegraph_for_fragment(path_dataset, fragment_id, config):
    pose_graph_name = join(path_dataset,
                           config["template_fragment_posegraph"] % fragment_id)
    pose_graph_optimized_name = join(
        path_dataset,
        config["template_fragment_posegraph_optimized"] % fragment_id)
    run_posegraph_optimization(pose_graph_name, pose_graph_optimized_name,
            max_correspondence_distance = config["max_depth_diff"],
            preference_loop_closure = \
            config["preference_loop_closure_odometry"])
This function calls global_optimization to estimate poses of the RGBD images.
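run_posegraph_optimization is a small helper that wraps the Open3D pose graph optimization API roughly as sketched below; the exact defaults, such as the edge pruning threshold, may differ from the shipped script.

# rough sketch of run_posegraph_optimization
import open3d as o3d

def run_posegraph_optimization(pose_graph_name, pose_graph_optimized_name,
                               max_correspondence_distance,
                               preference_loop_closure):
    # display messages from global_optimization
    o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
    method = o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt()
    criteria = o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria()
    option = o3d.pipelines.registration.GlobalOptimizationOption(
        max_correspondence_distance=max_correspondence_distance,
        edge_prune_threshold=0.25,
        preference_loop_closure=preference_loop_closure,
        reference_node=0)
    pose_graph = o3d.io.read_pose_graph(pose_graph_name)
    o3d.pipelines.registration.global_optimization(pose_graph, method, criteria,
                                                   option)
    o3d.io.write_pose_graph(pose_graph_optimized_name, pose_graph)
    o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Error)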
Make a fragment
# examples/python/reconstruction_system/make_fragments.py
def integrate_rgb_frames_for_fragment(color_files, depth_files, fragment_id,
                                      n_fragments, pose_graph_name, intrinsic,
                                      config):
    pose_graph = o3d.io.read_pose_graph(pose_graph_name)
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=config["tsdf_cubic_size"] / 512.0,
        sdf_trunc=0.04,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
    for i in range(len(pose_graph.nodes)):
        i_abs = fragment_id * config['n_frames_per_fragment'] + i
        print(
            "Fragment %03d / %03d :: integrate rgbd frame %d (%d of %d)." %
            (fragment_id, n_fragments - 1, i_abs, i + 1, len(pose_graph.nodes)))
        rgbd = read_rgbd_image(color_files[i_abs], depth_files[i_abs], False,
                               config)
        pose = pose_graph.nodes[i].pose
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))
    mesh = volume.extract_triangle_mesh()
    mesh.compute_vertex_normals()
    return mesh
Once the poses are estimated, /tutorial/pipelines/rgbd_integration.ipynb is used to reconstruct a colored fragment from each RGBD sequence.
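In the full script, the extracted mesh is then converted to a colored point cloud and written to disk. A minimal sketch, with an illustrative output path:

# sketch: saving the integrated fragment as a point cloud
pcd = o3d.geometry.PointCloud()
pcd.points = mesh.vertices        # mesh returned by the function above
pcd.colors = mesh.vertex_colors
o3d.io.write_point_cloud("fragments/fragment_000.ply", pcd,
                         write_ascii=False, compressed=True)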
Batch processing
# examples/python/reconstruction_system/make_fragments.py
def run(config):

    print("making fragments from RGBD sequence.")
    make_clean_folder(join(config["path_dataset"], config["folder_fragment"]))

    [color_files, depth_files] = get_rgbd_file_lists(config["path_dataset"])
    n_files = len(color_files)
    n_fragments = int(
        math.ceil(float(n_files) / config['n_frames_per_fragment']))

    if config["python_multi_threading"] is True:
        from joblib import Parallel, delayed
        import multiprocessing
        import subprocess
        MAX_THREAD = min(multiprocessing.cpu_count(), n_fragments)
        Parallel(n_jobs=MAX_THREAD)(delayed(process_single_fragment)(
            fragment_id, color_files, depth_files, n_files, n_fragments, config)
                                    for fragment_id in range(n_fragments))
    else:
        for fragment_id in range(n_fragments):
            process_single_fragment(fragment_id, color_files, depth_files,
                                    n_files, n_fragments, config)
The main function calls each individual function explained above.
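The per-fragment worker process_single_fragment is not shown in the listing above. Roughly, it loads the intrinsic, computes the frame range of the fragment, and chains the steps described in this section; the sketch below is illustrative rather than the verbatim helper.

# rough sketch of process_single_fragment
def process_single_fragment(fragment_id, color_files, depth_files, n_files,
                            n_fragments, config):
    if config["path_intrinsic"]:
        intrinsic = o3d.io.read_pinhole_camera_intrinsic(
            config["path_intrinsic"])
    else:
        intrinsic = o3d.camera.PinholeCameraIntrinsic(
            o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
    sid = fragment_id * config['n_frames_per_fragment']
    eid = min(sid + config['n_frames_per_fragment'], n_files)

    # with_opencv is assumed to be a module-level flag in make_fragments.py
    make_posegraph_for_fragment(config["path_dataset"], sid, eid, color_files,
                                depth_files, fragment_id, n_fragments,
                                intrinsic, with_opencv, config)
    optimize_posegraph_for_fragment(config["path_dataset"], fragment_id, config)
    # builds and saves the fragment geometry, using
    # integrate_rgb_frames_for_fragment internally
    make_pointcloud_for_fragment(config["path_dataset"], color_files,
                                 depth_files, fragment_id, n_fragments,
                                 intrinsic, config)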
Results
Fragment 000 / 013 :: RGBD matching between frame : 0 and 1
Fragment 000 / 013 :: RGBD matching between frame : 0 and 5
Fragment 000 / 013 :: RGBD matching between frame : 0 and 10
Fragment 000 / 013 :: RGBD matching between frame : 0 and 15
Fragment 000 / 013 :: RGBD matching between frame : 0 and 20
:
Fragment 000 / 013 :: RGBD matching between frame : 95 and 96
Fragment 000 / 013 :: RGBD matching between frame : 96 and 97
Fragment 000 / 013 :: RGBD matching between frame : 97 and 98
Fragment 000 / 013 :: RGBD matching between frame : 98 and 99
The following is a log from optimize_posegraph_for_fragment.
[GlobalOptimizationLM] Optimizing PoseGraph having 100 nodes and 195 edges.
Line process weight : 389.309502
[Initial ] residual : 3.223357e+05, lambda : 1.771814e+02
[Iteration 00] residual : 1.721845e+04, valid edges : 157, time : 0.022 sec.
[Iteration 01] residual : 1.350251e+04, valid edges : 168, time : 0.017 sec.
:
[Iteration 32] residual : 9.779118e+03, valid edges : 179, time : 0.013 sec.
Current_residual - new_residual < 1.000000e-06 * current_residual
[GlobalOptimizationLM] total time : 0.519 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 100 nodes and 179 edges.
Line process weight : 398.292104
[Initial ] residual : 5.120047e+03, lambda : 2.565362e+02
[Iteration 00] residual : 5.064539e+03, valid edges : 179, time : 0.014 sec.
[Iteration 01] residual : 5.037665e+03, valid edges : 178, time : 0.015 sec.
:
[Iteration 11] residual : 5.017307e+03, valid edges : 177, time : 0.013 sec.
Current_residual - new_residual < 1.000000e-06 * current_residual
[GlobalOptimizationLM] total time : 0.197 sec.
CompensateReferencePoseGraphNode : reference : 0
The following is a log from integrate_rgb_frames_for_fragment.
Fragment 000 / 013 :: integrate rgbd frame 0 (1 of 100).
Fragment 000 / 013 :: integrate rgbd frame 1 (2 of 100).
Fragment 000 / 013 :: integrate rgbd frame 2 (3 of 100).
:
Fragment 000 / 013 :: integrate rgbd frame 97 (98 of 100).
Fragment 000 / 013 :: integrate rgbd frame 98 (99 of 100).
Fragment 000 / 013 :: integrate rgbd frame 99 (100 of 100).
The following images show some of the fragments made by this script.