Acquiring Immersive Virtual Environments with an Uncalibrated Camera

Leonard McMillan

UNC-Chapel Hill Computer Science Technical Report #95-006, University of North Carolina, 1995.

Abstract

Immersion is arguably the most significant feature distinguishing virtual environments from traditional three-dimensional computer graphics. With standard geometric scene representations, achieving an immersive environment requires only object descriptions that are sufficiently rich and appropriately scaled. The situation is more complicated for image-based rendering systems.

This report describes a method for generating a 360 degree field-of-view image from a sequence of planar projections. The method determines a camera model by analyzing an image sequence produced by panning a camera at a constant rate about its nodal point. Once this camera model is determined, the images may be reprojected onto an arbitrary surface, such as a cylinder. I also describe an important geometric constraint that relates the geometry between any pair of cylindrical projections. This constraint defines a curve in space that completely characterizes the apparent trajectory of points resulting from camera motion along the direction connecting the two projections. It plays the same role for cylindrical projections that the epipolar constraint plays for planar projections.
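To make the reprojection step concrete, the following is a minimal sketch (not the report's actual code) of warping a single planar image onto a cylinder. It assumes a simple pinhole model with the optical axis through the image center and a known focal length f in pixels; the function name and sampling strategy are illustrative choices.

```python
import numpy as np

def planar_to_cylindrical(image, f):
    """Reproject a planar (pinhole) image onto a cylinder of radius f.

    Hypothetical sketch: assumes the optical axis pierces the image
    center and f is the focal length in pixels. Each cylindrical pixel
    column corresponds to a pan angle theta; it is inverse-mapped to the
    image plane via
        x = f * tan(theta),   y = v / cos(theta)
    and filled by nearest-neighbor sampling.
    """
    h, w = image.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    out = np.zeros_like(image)
    # Half the horizontal angular extent covered by the planar image.
    theta_max = np.arctan(cx / f)
    for j in range(w):
        theta = (j - cx) / cx * theta_max   # cylinder column -> pan angle
        x = f * np.tan(theta) + cx          # planar x for this column
        for i in range(h):
            v = i - cy
            y = v / np.cos(theta) + cy      # planar y, stretched off-axis
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < w and 0 <= yi < h:
                out[i, j] = image[yi, xi]
    return out
```

In practice a full panorama is built by compositing many such warped frames, with theta offset by each frame's estimated pan angle; a bilinear sampler would replace the nearest-neighbor lookup for production quality.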