Much less attention has been paid to Augmented Reality than to Virtual Environments, although its potential is at least as great. In Augmented Reality, the user can see the real world around him, with computer graphics superimposed upon or composited with the real world. Instead of replacing the real world, we supplement it. Ideally, it would seem to the user that the real and virtual objects coexist.
One way to implement Augmented Reality is with an optical see-through Head-Mounted Display. This device places optical combiners in front of the user's eyes. The combiners let light in from the real world, and they also reflect light from monitors displaying graphic images. The result is a combination of the real world and a virtual world drawn by the monitors.
What is Augmented Reality good for? Basically, applications of this technology use the virtual objects to aid the user's understanding of his environment. For example, a group at UNC scanned a fetus inside a womb with an ultrasonic sensor, then overlaid a three-dimensional model of the fetus on top of the mother's womb. The goal is to give the doctor "X-ray vision," enabling him to "see inside" the womb. Instructions for building or repairing complex equipment might be easier to understand if they were available not in the form of manuals with text and 2D pictures, but as 3D drawings superimposed upon the machinery itself, telling the mechanic what to do and where to do it. Groups at Boeing and Columbia are exploring these types of applications. Fundamentally, Augmented Reality is about augmentation of human perception: supplying information not ordinarily detectable by human senses.
While promising, Augmented Reality is barely at the demonstration phase today, and its full potential will not be realized until several technical challenges are overcome. One of the most basic is the registration problem. The real and virtual objects must be properly aligned with respect to each other, or the illusion that the two coexist will be compromised. For example, in the two pictures below, the user looks at a corner of a wooden frame. The three colored virtual axes are supposed to line up with the three edges attached to that frame corner. The second picture shows correct registration as seen by the user wearing the see-through HMD.
Unfortunately, registration is a difficult problem, for a number of reasons. First, the human visual system is very good at detecting even small misregistrations, because of the resolution of the fovea and its sensitivity to differences: errors of just a few pixels are noticeable. Second, errors that can be tolerated in Virtual Environments are not acceptable in Augmented Reality. Incorrect viewing parameters, misalignments in the Head-Mounted Display, errors in the head-tracking system, and other problems common in HMD-based systems may not cause detectable problems in Virtual Environments, but they are serious problems in Augmented Reality. Finally, there is system delay: the interval between measuring the head location and superimposing the corresponding graphic images on the real world. This delay makes the virtual objects appear to "lag behind" their real counterparts as the user moves around, so in most Augmented Reality systems the virtual objects "swim around" the real objects instead of staying registered with them. Until the registration problem is solved, Augmented Reality may never be accepted in serious applications.
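A back-of-the-envelope calculation shows why system delay is so damaging. The sketch below is my own illustration, not taken from any particular system; the head rate, delay, field of view, and display resolution are assumed figures chosen only to make the arithmetic concrete. It estimates the angular misregistration that accumulates during one end-to-end delay, and converts it to screen pixels.

```python
# Illustrative sketch: registration error caused by end-to-end system delay.
# Assumes the head rotates at a constant angular rate during the delay,
# and that pixels are spread uniformly across the display's field of view.

def dynamic_error_deg(head_rate_deg_per_s, latency_s):
    """Angular misregistration accumulated during one system delay."""
    return head_rate_deg_per_s * latency_s

def error_in_pixels(error_deg, fov_deg, display_width_px):
    """Convert an angular error to screen pixels (uniform-pixel assumption)."""
    return error_deg * display_width_px / fov_deg

# A moderate head turn of 50 deg/s with 100 ms of total delay:
err_deg = dynamic_error_deg(50.0, 0.100)            # 5 degrees of lag
err_px = error_in_pixels(err_deg, 30.0, 640)        # over 100 pixels on an
                                                    # assumed 30-degree,
                                                    # 640-pixel-wide display
```

Given that errors of just a few pixels are noticeable, even modest delays during ordinary head motion produce misregistration orders of magnitude beyond what the eye will forgive.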
My work is about improving registration. I developed calibration techniques, used inertial sensors to predict head motion, and built a real system that implements these techniques and demonstrates the improvements. From most viewpoints, static errors stay within +/- 5 mm, and errors due to head motion are reduced by a factor of 5-10. While my system does not achieve perfect registration, it is significantly better than any previous optical see-through demonstration, and I believe this work puts us within striking distance of truly accurate and robust registration.
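The idea behind inertial prediction can be sketched in a few lines. This is only a simplified illustration of the general principle, not my actual predictor: rather than rendering with the stale pose measured at the start of a frame, extrapolate the latest tracker reading forward by the expected system delay, using rate and acceleration measurements from the inertial sensors. The one-axis constant-acceleration model and the numbers below are assumptions for the example.

```python
# Simplified sketch of inertial head-motion prediction (one axis).
# Extrapolates the current pose forward by the expected system delay
# using measured velocity and acceleration (constant-acceleration model).

def predict_position(p, v, a, dt):
    """Extrapolate position p forward by dt seconds,
    given velocity v and acceleration a."""
    return p + v * dt + 0.5 * a * dt * dt

# Render with the pose predicted for when the frame will actually appear:
latency = 0.060                        # assumed 60 ms end-to-end delay
p_now, v_now, a_now = 0.10, 0.50, 0.0  # position (m), velocity (m/s),
                                       # acceleration (m/s^2) from sensors
p_render = predict_position(p_now, v_now, a_now, latency)
```

If the prediction interval matches the true system delay and the motion model holds over that interval, the virtual objects land where the real ones are when the frame reaches the eye, which is how prediction reduces the "swimming" described above.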
For more recent work involving video see-through, check out the web page for UNC's ultrasound visualization group.
Azuma, Ronald T. A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments 6, 4 (August 1997), 355-385. An earlier version appeared in Course Notes #9: Developing Advanced Virtual Reality Applications, ACM SIGGRAPH (Los Angeles, CA, 6-11 August 1995), 20-1 to 20-38.
Azuma, Ronald and Gary Bishop. Improving Static and Dynamic Registration in an Optical See-Through HMD. Proceedings of SIGGRAPH '94 (Orlando, FL, 24-29 July 1994), Computer Graphics, Annual Conference Series, 1994, 197-204 + CD-ROM appendix.
Azuma, Ronald and Gary Bishop. A Frequency-Domain Analysis of Head-Motion Prediction. Proceedings of SIGGRAPH '95 (Los Angeles, CA, 6-11 August 1995), Computer Graphics, Annual Conference Series, 1995, 401-408.
Azuma, Ronald. Tracking Requirements for Augmented Reality. Communications of the ACM, 36, 7 (July 1993), 50-51.