We have three methods for registering range image data. One is user-assisted, and was used for all the warped data sets described in this paper. The other two are automatic.

In the user-assisted process, the user selects points on 3 corresponding planes in each data set [10]. The data is shown in a 3D reprojection that can be translated and rotated for easy selection of the points. Once the 3 planes are selected, the rigid transformation between the data sets can be computed. Error metrics are reported, and the process can be repeated as needed.
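Assuming each selected plane is summarized by a unit normal *n* and offset *d* (so that *n* · *x* = *d*), the transformation from three plane correspondences can be sketched as below. This is an illustrative reconstruction, not the exact routine of [10]; the function name and plane representation are our own.

```python
import numpy as np

def register_from_planes(planes_a, planes_b):
    """Estimate the rigid transform (R, t) mapping frame A to frame B
    from three corresponding planes, each given as a (unit normal, offset)
    pair with n . x = d.  Illustrative sketch, not the paper's code."""
    Na = np.array([n for n, _ in planes_a])   # 3x3, rows are normals in A
    Nb = np.array([n for n, _ in planes_b])   # 3x3, rows are normals in B
    # Kabsch-style rotation aligning the normal sets: R @ Na[i] ~ Nb[i].
    H = Na.T @ Nb
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    # A transformed point x' = R x + t lies on plane B exactly when
    # Nb @ t = d_b - d_a, which gives three linear equations for t.
    da = np.array([d for _, d in planes_a])
    db = np.array([d for _, d in planes_b])
    t = np.linalg.solve(Nb, db - da)
    return R, t
```

Three non-parallel planes are required: the three normals must span 3D space so that the linear system for *t* is invertible, which is why the interface asks the user for exactly 3 planes.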

Other techniques have been suggested for plane selection using the 2D display of the range images, including ordinary box, lasso, and spray-paint selection tools. These may be quicker, since it is simple to interact with 2D data displayed on a monitor and manipulated with a mouse.

One automatic method is the Empty Space Registration Method [18], a variant of the Iterative Closest Point (ICP) algorithm [3]. ICP assumes the data being registered is ``full,'' that is, free of shadows and occlusions and sampled similarly in each data set. In data sets of real environments, occlusions are unavoidable. The empty space registration method explicitly models both the empty space between the sensor and the first visible surface and the shadow volumes, exploiting the facts that nothing can occupy the empty space and that anything may lie in the shadows. Results of the search are shown in figure 6, where two source views of a computer on a table are shown. The merged result shows their almost-correct registration, even though very little of the scene is shared between the views.
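For reference, a minimal point-to-point ICP iteration can be sketched as follows. This is a textbook version of [3], not the empty-space variant of [18]; the function name, brute-force matching, and iteration count are illustrative choices.

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP sketch with brute-force nearest
    neighbours.  It assumes the 'full', similarly sampled data that the
    empty space method relaxes.  Returns the accumulated (R, t)."""
    src = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1. Match: pair each source point with its closest destination point.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # 2. Solve: best rigid transform for the current matches (Kabsch).
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The failure mode that motivates [18] is visible in step 1: when the views overlap only partially, points visible in one view but occluded in the other are still forced to match something, biasing the solve.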

Our second automatic registration method, still under development, is
based on a 3D Hough transform of the range data.
No edge-detection step is necessary with rangefinder data, since each
collected sample already lies on the first ``edge'' in 3D, the visible
surface.
The method takes each sample and performs the standard Hough transform
operation, incrementing all possible buckets of (*r*, *θ*, *φ*)
consistent with that sample.
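Assuming the common plane parameterization *r* = *x* cos θ sin φ + *y* sin θ sin φ + *z* cos φ, the accumulation step might be sketched as below. The bucket resolutions and the *r* range are illustrative choices, not the paper's values.

```python
import numpy as np

def hough_planes(points, n_theta=90, n_phi=45, n_r=100, r_max=10.0):
    """Accumulate plane votes for r = x cos(theta) sin(phi)
    + y sin(theta) sin(phi) + z cos(phi).  Resolutions and r_max are
    illustrative, not the paper's values."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    acc = np.zeros((n_theta, n_phi, n_r), dtype=np.int32)
    # Unit normal components for every (theta, phi) bucket.
    nx = np.cos(thetas)[:, None] * np.sin(phis)[None, :]
    ny = np.sin(thetas)[:, None] * np.sin(phis)[None, :]
    nz = np.broadcast_to(np.cos(phis), (n_theta, n_phi))
    for x, y, z in points:
        # Signed distance r of a plane through this sample, for every
        # candidate normal direction, mapped to its r bucket.
        r = x * nx + y * ny + z * nz
        r_bin = ((r + r_max) * (n_r / (2.0 * r_max))).astype(int)
        ok = (r_bin >= 0) & (r_bin < n_r)
        ti, pj = np.nonzero(ok)
        acc[ti, pj, r_bin[ok]] += 1   # one vote per (theta, phi) bucket
    return acc, thetas, phis
```

Peaks in the accumulator then correspond to planes supported by many samples, such as the ceiling recovered in figure 7; note that a horizontal plane (φ = 0) votes into every θ bucket, so peak extraction must tolerate such degenerate ridges.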
An example of plane detection is shown in figure 7, where the
pixels covering the ceiling of the room have been recovered from the
Hough transform of the range data.