Adrian Ilie
University of North Carolina at Chapel Hill
Computer Science Department
 

Camera Photometric and Geometric Calibration

 
This web page describes a program for photometric and geometric calibration developed in the 3D Telepresence for Medical Consultation: Extending Medical Expertise Throughout, Between, and Beyond Hospitals project. This material is also based upon work supported by the National Science Foundation under Grant No. 0121657.
 
The photometric calibration process is described in this tech report, and was published in the proceedings of the ICCV 2005 conference. The approach consists of two phases: an iterative closed-loop calibration phase that searches for the per-camera hardware register settings that best balance linearity and dynamic range, followed by a refinement phase that computes the per-camera parametric values for an additional software-based color mapping.
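For illustration only, the sketch below shows one possible form of such a software color mapping: a 3x4 affine transform, fitted by least squares, that maps the measured ColorChecker patch colors to their reference values. It uses OpenCV's C++ interface; the function name and the choice of an affine model are assumptions made for this example and are not taken from the program or the paper.

#include <opencv2/core.hpp>
#include <vector>

// Fit a 3x4 affine color mapping M (least squares) such that
//   corrected = M * [R G B 1]^T
// approximately maps each measured patch color to the chart's reference color.
cv::Mat fitColorMapping(const std::vector<cv::Vec3d>& measured,
                        const std::vector<cv::Vec3d>& reference)
{
    CV_Assert(measured.size() == reference.size());
    const int n = static_cast<int>(measured.size());    // 24 patches for a ColorChecker
    cv::Mat A(n, 4, CV_64F), B(n, 3, CV_64F);
    for (int i = 0; i < n; ++i) {
        A.at<double>(i, 0) = measured[i][0];             // R
        A.at<double>(i, 1) = measured[i][1];             // G
        A.at<double>(i, 2) = measured[i][2];             // B
        A.at<double>(i, 3) = 1.0;                        // affine offset term
        for (int c = 0; c < 3; ++c)
            B.at<double>(i, c) = reference[i][c];
    }
    cv::Mat X;
    cv::solve(A, B, X, cv::DECOMP_SVD);                  // least-squares solution, 4x3
    return cv::Mat(X.t());                               // 3x4 mapping matrix
}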
 
The images below show the impact of photometric calibration on a 3D reconstruction:
 
[Figure: Before / After] Comparison between reconstruction results before and after photometric calibration.
 
Left: images from the two cameras are more similar after calibration. Middle: the reconstructed depth map is smoother and has fewer artifacts after calibration. Right: the final reconstruction result has fewer artifacts after calibration.
 
The graphical user interface of the photometric calibration part of the program is shown below:
 
[Figure] The GUI for photometric calibration.
 
The main window shows a camera image of a GretagMacbeth ColorChecker™ color target, mounted on a panel with a printed checkerboard pattern to allow automatic detection.
The Graph window on the top left shows a 3D plot of the RGB color space. Each colored sphere represents the position of a camera sample in the RGB color space, and each connected cluster of spheres corresponds to one of the 24 samples in the chart. The size of each sphere is proportional to the intra-sample variance, and the small white sphere at the origin of each cluster marks the position in the RGB color space of the corresponding color sample in the chart.
The Feedback window on the bottom left shows the evolution of the hardware calibration phase in real time.
The Tune camera window on the top right allows selecting which camera settings are used in the optimization.
The Camera parameters window allows setting the parameters for calibration.
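As a rough illustration of the per-sample statistics plotted in the Graph window, the sketch below computes the mean color and intra-sample variance of the pixels inside one chart patch. It is a minimal example using OpenCV, not code from the program, and the type and function names are illustrative.

#include <opencv2/core.hpp>

struct PatchStats {
    cv::Vec3d mean;      // average RGB of the patch
    double variance;     // total variance over the three channels
};

// Compute the mean color and intra-sample variance of the pixels inside
// one chart patch (patchRoi is the rectangle covering the patch in the image).
PatchStats computePatchStats(const cv::Mat& image, const cv::Rect& patchRoi)
{
    cv::Scalar mean, stddev;
    cv::meanStdDev(image(patchRoi), mean, stddev);       // per-channel mean and std. dev.
    double variance = stddev[0] * stddev[0]
                    + stddev[1] * stddev[1]
                    + stddev[2] * stddev[2];
    return { cv::Vec3d(mean[0], mean[1], mean[2]), variance };
}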
 
The program incorporates geometric calibration from the Open Source Computer Vision (OpenCV) library. The geometric calibration process consists of capturing a series of images of a checkerboard pattern, finding the corner locations, and passing them to the calibration routine, which outputs the camera's intrinsic and extrinsic parameters; a sketch of these steps appears after the first figure below. Multiple cameras can be calibrated simultaneously, either by synchronizing them through hardware while casually moving the pattern around in the cameras' fields of view, or by taking multiple snapshots of the checkerboard pattern in various positions. The images below show the interface for geometric calibration:
 
[Figure] Detecting corners during geometric calibration.
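A minimal sketch of the corner detection and calibration steps, using OpenCV's C++ interface. The program links against the library's calibration routines, but the exact calls, board dimensions, and function names below are illustrative only.

#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Calibrate one camera from several views of the checkerboard pattern.
// boardSize is the number of inner corners (e.g. 7x5); squareSize is the
// physical size of one checker square.
void calibrateFromViews(const std::vector<cv::Mat>& views,
                        cv::Size boardSize, float squareSize,
                        cv::Mat& cameraMatrix, cv::Mat& distCoeffs,
                        std::vector<cv::Mat>& rvecs, std::vector<cv::Mat>& tvecs)
{
    // 3D corner coordinates in the pattern's own coordinate frame (Z = 0 plane).
    std::vector<cv::Point3f> boardPoints;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            boardPoints.push_back(cv::Point3f(x * squareSize, y * squareSize, 0.f));

    std::vector<std::vector<cv::Point3f> > objectPoints;
    std::vector<std::vector<cv::Point2f> > imagePoints;

    for (size_t i = 0; i < views.size(); ++i) {
        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(views[i], boardSize, corners))
            continue;                                    // pattern not found in this view
        cv::Mat gray;
        cv::cvtColor(views[i], gray, cv::COLOR_BGR2GRAY);
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        imagePoints.push_back(corners);
        objectPoints.push_back(boardPoints);
    }

    // Outputs the intrinsic parameters (cameraMatrix, distCoeffs) and the
    // extrinsic parameters (rvecs, tvecs) of each accepted view.
    cv::calibrateCamera(objectPoints, imagePoints, views[0].size(),
                        cameraMatrix, distCoeffs, rvecs, tvecs);
}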
 
[Figure] Visualizing the results of the geometric calibration.
 
[Figure] Visualizing the reprojection error in the geometric calibration.
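The reprojection error can be computed by projecting the board corners back into each image with the recovered parameters and measuring their distance to the detected corners. The sketch below shows one such formulation (an RMS error in pixels), reusing the variable names from the calibration sketch above; it is an assumed formulation, not the program's own code.

#include <opencv2/calib3d.hpp>
#include <cmath>
#include <vector>

// RMS reprojection error over all views: reproject the board corners with the
// recovered parameters and compare against the detected corner locations.
double rmsReprojectionError(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                            const std::vector<std::vector<cv::Point2f> >& imagePoints,
                            const std::vector<cv::Mat>& rvecs,
                            const std::vector<cv::Mat>& tvecs,
                            const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    double totalSquaredError = 0.0;
    size_t totalPoints = 0;
    for (size_t i = 0; i < objectPoints.size(); ++i) {
        std::vector<cv::Point2f> projected;
        cv::projectPoints(objectPoints[i], rvecs[i], tvecs[i],
                          cameraMatrix, distCoeffs, projected);
        double err = cv::norm(imagePoints[i], projected, cv::NORM_L2);
        totalSquaredError += err * err;
        totalPoints += objectPoints[i].size();
    }
    return std::sqrt(totalSquaredError / totalPoints);   // in pixels
}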
 
The program is written in Borland C++ Builder. It currently supports PointGrey DragonFly cameras and other FireWire cameras using the CMU 1394 Digital Camera Driver. Support for other types of cameras can be added by linking in driver libraries and adding appropriate initialization files.
 
The software itself and the source code are available upon request.
 
Last modified Monday, June 23, 2008 by Adrian Ilie