The goal of the alignment procedure is to find the orientation of the camera image that best correlates with the rangefinder image. The laser intensity image cannot be correlated directly with the color image (or even with its red channel): the illumination of the two images differs so much that straightforward image correlation gives poor results. The laser image is illuminated by the laser alone, with ambient light removed; it therefore has no shadows, the entire scene is equally illuminated, and specularities occur in different places than in the normally lit scene.
Instead, we perform the alignment on the edges in the images. Edge-detection algorithms work well on the data from the laser rangefinder, but tend to respond to the high-frequency noise in the color images. To solve this problem, we apply a variable conductance diffusion (VCD) operation to the color images; edge detection on the diffused image then finds only the salient edges. The edge pixels are undistorted according to the distortion parameters found during camera calibration.
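A minimal sketch of this step, assuming a Perona-Malik-style diffusion as the VCD operation and a simple gradient-magnitude edge detector (the parameter names `kappa`, `step`, and the threshold are our assumptions, not values from the text):

```python
import numpy as np

def vcd_blur(img, n_iter=20, kappa=10.0, step=0.2):
    """Variable conductance diffusion (Perona-Malik style, our assumed
    form): conductance drops where the gradient is large, so noise is
    smoothed away while salient edges survive the blur."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic boundary via roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # conductance falls off with gradient magnitude
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

def edge_pixels(img, thresh):
    """Gradient-magnitude edge map on the diffused image."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy) > thresh

# Noisy vertical step edge: diffusion suppresses the noise but keeps
# the step, so only the true edge columns survive thresholding.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 100.0
noisy = img + rng.normal(0, 5, img.shape)
edges = edge_pixels(vcd_blur(noisy), thresh=10.0)
```

On the noisy step image above, the edge map fires along the step (columns 15 and 16) while the diffusion keeps the flat noisy regions below threshold.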
Edge detection is performed on both the range and intensity images from the rangefinder. The edges in these images are then blurred by convolving them with a kernel that has wide support but whose gradient increases near its center. This aids the search by giving nearby solutions a small error value, though not as small as that of an exact solution.
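One kernel with this shape is 1 - sqrt(r/R): it has wide support out to radius R, and its slope steepens toward the center. The falloff below is our assumption; the text does not give the exact profile:

```python
import numpy as np

def edge_blur_kernel(radius):
    """Kernel with wide support whose gradient increases near the
    centre (assumed profile 1 - sqrt(r/R), clipped at zero)."""
    ax = np.arange(-radius, radius + 1)
    r = np.hypot(*np.meshgrid(ax, ax))          # radial distance
    k = np.clip(1.0 - np.sqrt(r / radius), 0.0, None)
    return k / k.max()

def blur_edges(edge_map, radius):
    """Dense 2-D convolution of a binary edge map with the kernel,
    producing a smooth 'edgeness' field for the alignment search."""
    k = edge_blur_kernel(radius)
    padded = np.pad(edge_map.astype(float), radius)
    h, w = edge_map.shape
    out = np.zeros((h, w))
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += k[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

# a single edge pixel spreads into a wide, centre-peaked basin
edges = np.zeros((21, 21))
edges[10, 10] = 1.0
field = blur_edges(edges, radius=8)
```

The resulting field is maximal at the edge pixel, falls off gently out to the kernel radius, and is zero beyond it, which is exactly the wide basin of attraction the search needs.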
Because the two images share the same center of projection, we search over only the three angles of registration between the spherical range image and the planar color image. The error value is computed from the degree of edgeness in the rangefinder image at the locations of the edges in the color image. We use a simulated annealing search strategy, which works well when given a reasonable starting point.
Having found values for the three angles, we return to the original color images, correct for distortion, and determine the proper distance information. To do so, we project the range information onto the planar grid, building a list of range values for each pixel. Resampling range data is problematic, so we cluster the range values at each pixel and set the range to the average of the largest cluster. This avoids the error of blending samples across multiple surfaces.
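The per-pixel clustering step can be sketched as a simple one-dimensional gap-based clustering (our assumed algorithm; the text does not specify which clustering is used, and the gap threshold is illustrative):

```python
def robust_range(samples, gap=0.1):
    """Cluster the range samples that project to one pixel by splitting
    the sorted list wherever consecutive values differ by more than
    `gap`, then return the mean of the largest cluster.  This avoids
    blending samples that straddle two surfaces, e.g. a foreground
    edge in front of a distant wall."""
    vals = sorted(samples)
    clusters, cur = [], [vals[0]]
    for v in vals[1:]:
        if v - cur[-1] <= gap:
            cur.append(v)
        else:
            clusters.append(cur)
            cur = [v]
    clusters.append(cur)
    biggest = max(clusters, key=len)
    return sum(biggest) / len(biggest)

# three samples on a near surface, two on a far one: a plain mean
# would place the pixel in empty space between the two surfaces
r = robust_range([1.00, 1.02, 1.01, 4.50, 4.52])
```

Here the three near samples form the largest cluster, so the pixel is assigned a depth of about 1.01 rather than the blended mean of roughly 2.6.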