UNC Hybrid Tracking Research

Augmented Reality Technology

Overview

The goal of this project is to develop and operate an accurate and robust tracking system for use in an augmented reality application. Augmented reality combines computer graphics and virtual-reality displays with images of the real world. The hybrid tracking project combines a commercially available magnetic tracker (Flock of Birds from Ascension Technology) with a vision-based tracking algorithm for superior registration.

Magnetic tracking systems have traditionally been prone to large amounts of error and jitter caused by interference from metal in the environment; they are popular nonetheless because of their robustness and lack of constraints on user motion. (Calibrating the magnetic tracker is difficult and impractical, and even calibration may not overcome these problems. Check out our magnetic tracker calibration page for more details.) Vision-based trackers, on the other hand, are very accurate, but have stability problems that arise from their assumptions about the working environment and the user's movements. When a vision tracker's assumptions fail, the results can be catastrophic.

We have developed a hybrid tracking scheme which has the (static) registration accuracy of vision-based tracking systems and the robustness of magnetic tracking systems. This system works well for static scenes and for scenes in which only the camera moves. We still need other techniques for latency management to achieve dynamic registration.
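
As a rough illustration of the prediction-correction idea, here is a minimal C++ sketch of one tracking frame. It is not our actual code: the function names, the position-only pose, the landmark-count threshold, and the numeric readings are all invented for the example.

    #include <cstdio>

    struct Pose { double x, y, z; };   // position only, for brevity

    // Stand-in for a Flock of Birds reading: robust, always available,
    // but jittery and biased by metal in the environment.
    Pose readMagneticTracker() { return Pose{0.010, 0.020, 1.500}; }

    // Stand-in for the vision correction step: succeeds only when enough
    // landmarks are in view to constrain the pose.
    bool refinePoseFromLandmarks(const Pose& guess, int landmarksSeen, Pose* out)
    {
        if (landmarksSeen < 3) return false;            // under-constrained
        *out = Pose{guess.x - 0.008, guess.y - 0.015, guess.z + 0.004};
        return true;
    }

    // One frame of hybrid tracking: the magnetic pose is the prediction,
    // and vision corrects it whenever it can.
    Pose trackFrame(int landmarksSeen)
    {
        Pose guess = readMagneticTracker();
        Pose corrected;
        if (refinePoseFromLandmarks(guess, landmarksSeen, &corrected))
            return corrected;          // accurate, vision-refined pose
        return guess;                  // graceful fallback
    }

    int main()
    {
        Pose a = trackFrame(5), b = trackFrame(0);
        std::printf("vision-corrected: %.3f %.3f %.3f\n", a.x, a.y, a.z);
        std::printf("magnetic only:    %.3f %.3f %.3f\n", b.x, b.y, b.z);
    }

The key design point is the fall-through: when too few landmarks are visible, the system degrades gracefully to the magnetic pose rather than failing outright.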

The image at the top of this page demonstrates the accuracy of our tracker registration. The virtual teapot uses a reflection map from the real environment to simulate a chrome surface. Notice the reflection of Gentaro's hand and the flashlight beam in the computer-generated object. In the real environment, a reflective sphere sits at the location of the teapot. In real time we grab the image of the sphere in the AR head-mounted display (HMD). Only precise registration allows us to know where the sphere is located in the HMD image.
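
To see why precise registration is essential here, consider a minimal pinhole-projection sketch in C++. The sphere position, camera pose (simplified to a pure translation), and intrinsics are all made-up values; the point is that the sphere's pixels can be grabbed each frame only if its known world position projects to the correct spot in the HMD image.

    #include <cstdio>

    int main()
    {
        // Known world position of the reflective sphere (meters, assumed).
        double X = 0.10, Y = 0.05, Z = 1.20;

        // Tracked camera position, simplified to a pure translation.
        double cx = 0.0, cy = 0.0, cz = 0.0;

        // Camera intrinsics: focal length in pixels and principal point.
        double f = 800.0, u0 = 320.0, v0 = 240.0;

        // Pinhole projection of the sphere center into the HMD image.
        double xc = X - cx, yc = Y - cy, zc = Z - cz;  // camera-space point
        double u = u0 + f * xc / zc;
        double v = v0 + f * yc / zc;

        // The pixels around (u, v) are grabbed each frame to build the
        // reflection map; a pose error shifts (u, v) off the sphere.
        std::printf("sphere center projects to pixel (%.1f, %.1f)\n", u, v);
    }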

QuickTime Movies

Here are some QuickTime movies that we have made demonstrating our system.

This movie shows the camera panning across our scene. Landmarks come into view and then disappear. Notice that the virtual and real cuboids remain registered. This demonstrates the robustness of our system.


This video clip shows how our system reacts to landmark obscuration. Here a hand is waved in front of the camera, hiding landmarks from the tracking system. As the landmarks reappear in the camera's view, the system quickly re-acquires them.
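
The re-acquisition behavior can be sketched as follows (all names and numbers are hypothetical): each landmark is searched for only inside a window predicted from the current pose estimate, so an occluded landmark simply fails its search for a few frames and is picked up again as soon as it reappears inside the predicted window.

    #include <cstdio>

    struct Window { int u, v, halfSize; };   // predicted search area

    // Stand-in detector: pretend the landmark is found unless occluded.
    bool searchWindow(const Window& w, bool occluded, int* u, int* v)
    {
        if (occluded) return false;          // hand in front of the camera
        *u = w.u; *v = w.v;                  // found at the predicted spot
        return true;
    }

    int main()
    {
        Window predicted = {310, 205, 24};   // from pose prediction (made up)
        for (int frame = 0; frame < 4; ++frame) {
            bool occluded = (frame == 1 || frame == 2);  // hand waves through
            int u, v;
            if (searchWindow(predicted, occluded, &u, &v))
                std::printf("frame %d: landmark at (%d, %d)\n", frame, u, v);
            else
                std::printf("frame %d: landmark lost, magnetic pose only\n",
                            frame);
        }
    }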


This movie demonstrates our landmark detection system at work. The rectangles represent landmark search areas. Because our landmarks are two-colored concentric disks, our system does not detect false landmarks in the multi-colored book cover.

This clip also shows the abilities of our landmark detector. We are still able to find all the landmarks even with the abrupt shaking of the camera.
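
The rejection test behind this behavior can be illustrated with a short C++ sketch; the colors, tolerances, and four-point sampling pattern are invented for the example. A candidate pixel is accepted only if it matches the inner-disk color and the pixels one ring-radius away all match the outer-ring color, a concentric two-color arrangement that random colorful texture, like the book cover, almost never produces.

    #include <cstdio>
    #include <cstdlib>

    struct RGB { int r, g, b; };

    bool matches(RGB a, RGB b, int tol)
    {
        return std::abs(a.r - b.r) <= tol &&
               std::abs(a.g - b.g) <= tol &&
               std::abs(a.b - b.b) <= tol;
    }

    // center: color sampled at the candidate pixel
    // ring:   colors sampled at four points one ring-radius away
    bool isLandmark(RGB center, const RGB ring[4], RGB inner, RGB outer)
    {
        if (!matches(center, inner, 40)) return false;       // inner-disk test
        for (int i = 0; i < 4; ++i)
            if (!matches(ring[i], outer, 40)) return false;  // outer-ring test
        return true;
    }

    int main()
    {
        RGB inner = {220, 40, 40}, outer = {40, 220, 40};    // landmark colors
        RGB ring[4] = {{38,218,42},{41,222,39},{40,219,41},{42,221,40}};
        std::printf("%s\n", isLandmark({219,42,41}, ring, inner, outer)
                            ? "landmark" : "not a landmark");
    }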


Here is an application of our system. In real time we have acquired the shape of a card prism and, at the same time, the texture maps of its faces. Our user is then free to manipulate the virtual prism independently of the real one.
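
Here is a simplified sketch of how such a texture grab can work, with invented corner positions and a stand-in for the video frame. A real grab would use a perspective-correct warp; for brevity this version maps each texel into the image with a bilinear blend of the face's four projected corners, which is adequate for small faces.

    #include <cstdio>

    const int W = 64;                      // texture resolution (arbitrary)
    unsigned char frame(int u, int v)      // stand-in video frame lookup
    { return (unsigned char)((u + v) & 0xFF); }

    int main()
    {
        // Projected image positions of the face's four corners, obtained
        // from the tracked pose (values invented): TL, TR, BR, BL.
        double cu[4] = {100, 180, 175, 105};
        double cv[4] = { 80,  85, 160, 155};

        unsigned char tex[W][W];
        for (int t = 0; t < W; ++t)
            for (int s = 0; s < W; ++s) {
                double fs = s / (W - 1.0), ft = t / (W - 1.0);
                // Bilinear blend of the corner positions.
                double u = (1-fs)*(1-ft)*cu[0] + fs*(1-ft)*cu[1]
                         + fs*ft*cu[2] + (1-fs)*ft*cu[3];
                double v = (1-fs)*(1-ft)*cv[0] + fs*(1-ft)*cv[1]
                         + fs*ft*cv[2] + (1-fs)*ft*cv[3];
                tex[t][s] = frame((int)u, (int)v);   // copy the pixel
            }
        std::printf("grabbed %dx%d texture; texel(0,0)=%d\n", W, W, tex[0][0]);
    }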


Here we show the virtual prism being rotated by a user (Gentaro). Notice that it correctly interpenetrates the real cuboids.
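
The interpenetration works because the acquired shape of the real cuboids participates in the depth test. The per-pixel idea, reduced to a single software scanline with invented depth values (the real system would resolve this in the graphics hardware's z-buffer), looks like this:

    #include <cstdio>

    int main()
    {
        const int N = 8;                   // one scanline, for illustration
        double realZ[N]    = {1.0,1.0,0.8,0.8,0.8,0.8,1.0,1.0}; // cuboid depths
        double virtualZ[N] = {0.9,0.9,0.9,0.9,0.9,0.9,0.9,0.9}; // prism depths

        for (int i = 0; i < N; ++i) {
            // The virtual pixel survives only where it is nearer than the
            // real surface; elsewhere the live video of the cuboid shows.
            bool showVirtual = virtualZ[i] < realZ[i];
            std::printf("%c", showVirtual ? 'V' : 'R');
        }
        std::printf("  (V = virtual prism, R = real cuboid in front)\n");
    }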


This movie shows the difference between the real card prism and the virtual card prism. As the focus of the camera is changed, the real card prism becomes blurry, as does the whole real scene. The virtual prism, on the other hand, does not.


This clip shows a virtual object casting a shadow on the real scene. Notice that the geometry of the shadow changes as the virtual knot rotates and that it falls properly on the real sculpture, whose shape was acquired beforehand.
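
One classic way to compute such a shadow, shown here simplified to a ground plane rather than the sculpture's acquired shape, is to project each vertex of the virtual knot along the ray from a point light onto the receiving surface; the light position and vertex below are made up.

    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Intersect the ray from the light through p with the plane y = 0.
    Vec3 projectToGround(Vec3 light, Vec3 p)
    {
        double t = light.y / (light.y - p.y);   // ray parameter at y = 0
        return { light.x + t * (p.x - light.x),
                 0.0,
                 light.z + t * (p.z - light.z) };
    }

    int main()
    {
        Vec3 light      = {0.0, 2.0, 0.0};   // point light (invented position)
        Vec3 knotVertex = {0.3, 0.5, 0.2};   // one vertex of the virtual knot
        Vec3 s = projectToGround(light, knotVertex);
        // The flattened shadow polygon is then drawn darkened over the video.
        std::printf("shadow vertex at (%.2f, %.2f, %.2f)\n", s.x, s.y, s.z);
    }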


Here is the Mac-Daddy of them all. This video clip (7.54 Meg) shows a virtual sphere morphing into a teapot with reflection mapping.

For more details about this research, we have various versions of our paper from SIGGRAPH 96 in New Orleans:

HTML version and HTML Table of Contents
binhexed Word 5.1 version
compressed PostScript version

Note that the pictures in the Word and PostScript versions are 8-bit GIFs because of the limitations of Word, while the HTML version has nicer 24-bit JPEGs.

Publications

Project Members and Collaborators

Research Sponsors


Created: 08.02.96 by Dave Chen
Last Modified: 29 Jul 97 by Mark A. Livingston

Mail Andrei State for more info