The data from the rangefinder does not lie on a regular grid, because the scanning motor and the sampling hardware are not synchronized. We project all of the range samples onto a spherical grid, apply error-removal and hole-filling heuristics, and then produce a spherical image of the range and intensity values.
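The projection step can be sketched as follows: each Cartesian sample is converted to spherical coordinates and binned into a regular angular grid. This is a minimal illustration; the grid resolution, axis conventions, and the rule for resolving multiple samples per cell (here, keep the nearest) are assumptions, not values from the text.

```python
import numpy as np

def project_to_spherical_grid(points, n_theta=512, n_phi=1024):
    """Bin irregular 3-D range samples onto a regular spherical grid.

    points: (N, 3) array of Cartesian samples from the rangefinder.
    Returns an (n_theta, n_phi) range image; empty cells hold NaN.
    Resolution and conventions here are illustrative only.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    # Polar angle in [0, pi], azimuth in (-pi, pi].
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.arctan2(y, x)

    # Map the angles to integer grid indices.
    ti = np.clip((theta / np.pi * n_theta).astype(int), 0, n_theta - 1)
    pj = np.clip(((phi + np.pi) / (2 * np.pi) * n_phi).astype(int), 0, n_phi - 1)

    grid = np.full((n_theta, n_phi), np.nan)
    # Keep the nearest sample when several fall into the same cell.
    for t, p, rv in zip(ti, pj, r):
        if np.isnan(grid[t, p]) or rv < grid[t, p]:
            grid[t, p] = rv
    return grid
```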
If the laser beam spans two disparate surfaces during a sampling period, the resulting range usually lies between the two surfaces (though not always). We use a voting scheme on the projection grid that examines the ranges of the 8 nearest neighbors: if at least 4 of them lie within some tolerance of the sample's range, the value is deemed valid; otherwise it is removed. This has the effect of removing the floating samples.
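A minimal sketch of this voting scheme, operating on the gridded range image (NaN marks empty cells). The tolerance value and the handling of grid borders are assumptions for illustration.

```python
import numpy as np

def remove_floating_samples(grid, tol=0.05):
    """Drop range values not supported by their 8-neighborhood.

    A cell survives only if at least 4 of its 8 neighbors lie within
    `tol` of its own range; otherwise it is set to NaN.  `tol` is an
    illustrative threshold; border cells simply have fewer neighbors.
    """
    h, w = grid.shape
    out = grid.copy()
    for i in range(h):
        for j in range(w):
            v = grid[i, j]
            if np.isnan(v):
                continue
            votes = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        nv = grid[ni, nj]
                        if not np.isnan(nv) and abs(nv - v) <= tol:
                            votes += 1
            if votes < 4:
                # Isolated "floating" sample, e.g. one straddling a
                # depth discontinuity between two surfaces.
                out[i, j] = np.nan
    return out
```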
Since the rangefinder's ability to determine distance depends on the amount of light reflected, we cannot acquire range information for very dark or specular objects. Glossy (or even semi-glossy) furniture, dark metal, rubber or plastic surfaces (wall trim, electronic equipment, plastic trim on furniture), and metallic frames and light fixtures all cause problems.
We use a variation of the Splat-Pull-Push algorithm to place the range data on a regular grid and fill in the holes. The algorithm was designed to perform well on sparse data, but it also works very well on dense data such as that from the laser rangefinder. The splat portion of the algorithm performs most of the work, since the samples are about as dense as the image pixels. The pull and push phases interpolate the samples to fill in places that were not scanned well by the laser. We output two images from this process: a range image and an infrared laser intensity image. These images are used to align the color camera images with the range data.
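The pull and push phases can be sketched as below, taking as input an already-splatted grid of values and accumulation weights (weight 0 marks a hole). This is a simplified sketch assuming power-of-two grids, box filtering, and nearest-neighbor upsampling; the actual splat kernels and filters in the algorithm may differ.

```python
import numpy as np

def pull_push_fill(values, weights):
    """Fill holes in a gridded image via pull-push interpolation.

    values:  2-D array of splatted range (or intensity) values.
    weights: same-shape array; 0 marks holes, >0 marks measured cells.
    Assumes power-of-two dimensions; kernels are illustrative.
    """
    # Pull: build coarser levels by 2x2 weighted box averaging.
    levels = [(values * weights, weights)]
    v, w = levels[0]
    while min(v.shape) > 1:
        v = v[0::2, 0::2] + v[1::2, 0::2] + v[0::2, 1::2] + v[1::2, 1::2]
        w = w[0::2, 0::2] + w[1::2, 0::2] + w[0::2, 1::2] + w[1::2, 1::2]
        levels.append((v, w))

    # Push: fill empty cells from the next-coarser level.
    cv, cw = levels[-1]
    coarse = np.where(cw > 0, cv / np.maximum(cw, 1e-12), 0.0)
    for v, w in reversed(levels[:-1]):
        # Nearest-neighbor upsample of the coarser level.
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        # Keep measured values; take the upsampled estimate in holes.
        coarse = np.where(w > 0, v / np.maximum(w, 1e-12), up)
    return coarse
```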