The Walkthrough Project

Overview

The overall goal of the Walkthrough Project is to create interactive computer-graphics systems that enable a viewer to experience an architectural model by simulating a walk through it. Over the years the Walkthrough Project has developed many different systems; with each it has been our goal to: a) drive existing dynamic graphics engines to the utmost, b) push forward the development of methods for tracking position and orientation, c) have users evaluate our systems frequently so that we can identify the aspects of a system that most impair the illusion of real presence, and d) learn more about the behavior of people in simulations. Our long-term goal is to develop a personal, portable visualization system that will allow users to walk through and interact with models of meaningful complexity while receiving realistic visual, proprioceptive, and auditory feedback at interactive rates (>25 updates per second).

System Hardware and Software

The Walkthrough Project supports a wide variety of devices for rendering and viewing models, tracking a user's position and orientation, and allowing users to interact with a model. We currently use several parallel Silicon Graphics Onyx and Power Onyx machines, including the Reality Monster, a 32-processor SGI Onyx2(TM) with 8 InfiniteReality2(TM) graphics pipes, and UNC's PixelFlow graphics engine. For some modeling tasks, users visualize models on high-resolution 20" monitors (1280 x 1024 pixels). For most actual walkthroughs users view our models in stereo, either on a projection-screen television with LCD stereo glasses or through head-mounted displays (HMDs). We support a number of different HMDs built on various technologies, including a commercial low-resolution (200 x 120 elements) LCD display, a medium-resolution (640 x 512 pixels) color-shutter CRT display that was designed in-house, and a commercial tiled, wide field-of-view LCD display. To track the position and orientation of the user's head and hand we employ short- and long-range commercial magnetic tracking systems and a wide-area optical ceiling tracker that was designed and built in this department. Finally, a variety of joyballs, joysticks, wands, and other devices allow users to specify actions while walking through a model.

While the Walkthrough Project relies on the tracking and graphics hardware groups for advances in those areas, most of our research involves software efforts to make advances on several fronts: faster display, interaction, prettier models, real application and model building, and handier interfaces. Current efforts in these areas include:

Faster Display. Hierarchical view-frustum culling processes only those portions of a model that lie within the user's field of view. Fidelity-based level of detail creates lower-complexity representations of objects and renders these simpler versions when the user cannot discern the difference; our most recent work in this area uses Hierarchical Levels of Detail (HLODs). Distant geometry is replaced by images to guarantee frame rates. Potentially visible sets break a model into rooms connected by openings in walls, called "portals," making it possible to render only the current room and those rooms visible from it through a string of one or more portals. Other occlusion-culling techniques have been developed for more general models for which portals are not appropriate. Texture-mapped radiosity represents the high geometric complexity of a radiosity-meshed primitive as a photo-texture on a single polygon.
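The culling idea above can be sketched in a few lines. The following is a minimal illustration, not the Walkthrough code: it assumes a scene organized as a tree of nodes, each with a bounding sphere, and a frustum given as inward-facing planes; a subtree whose sphere lies entirely outside any plane is skipped wholesale. The names (Plane, Node, cull) are hypothetical.

```python
# Minimal sketch of hierarchical view-frustum culling with bounding spheres.
# Illustrative only; real systems use tighter volumes and full 6-plane frusta.
from dataclasses import dataclass, field

@dataclass
class Plane:
    # Inward-facing plane: points p with dot(n, p) + d >= 0 are inside.
    nx: float
    ny: float
    nz: float
    d: float

    def signed_distance(self, x, y, z):
        return self.nx * x + self.ny * y + self.nz * z + self.d

@dataclass
class Node:
    center: tuple                          # bounding-sphere center (x, y, z)
    radius: float                          # bounding-sphere radius
    children: list = field(default_factory=list)

def cull(node, frustum, visible):
    """Append to `visible` every node whose bounding sphere may intersect the frustum."""
    for plane in frustum:
        if plane.signed_distance(*node.center) < -node.radius:
            return                         # entirely outside one plane: skip whole subtree
    visible.append(node)                   # at least partially inside: process this node
    for child in node.children:
        cull(child, frustum, visible)

# Example: a frustum reduced to one plane (x >= 0) and a two-node scene.
frustum = [Plane(1.0, 0.0, 0.0, 0.0)]
root = Node((1.0, 0.0, 0.0), 2.0, [Node((-5.0, 0.0, 0.0), 1.0)])
visible = []
cull(root, frustum, visible)
# The root's sphere intersects the half-space; the child lies entirely outside and is culled.
```

Because the test is conservative (a sphere straddling a plane is kept), no visible geometry is ever dropped; the payoff is that one sphere test can reject thousands of primitives at once.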

Interaction. Collision detection and proximity queries on large models are used to evaluate maintenance and operation requirements. We have developed techniques for performing these operations that are both fast enough for interactive use and small enough in memory footprint to handle very large models (millions of primitives).
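To make the two query types concrete, here is a minimal sketch using axis-aligned bounding boxes (AABBs): an overlap test (collision) and a conservative distance estimate (proximity). This is an illustration of the kind of primitive involved, not the project's actual algorithms, which use hierarchies of tighter-fitting volumes.

```python
# Minimal AABB collision and proximity queries; boxes are (min_corner, max_corner)
# pairs of 3-tuples. Illustrative only.
import math

def aabb_overlap(a_min, a_max, b_min, b_max):
    """True if two AABBs intersect (separating-axis test on each coordinate)."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def aabb_distance(a_min, a_max, b_min, b_max):
    """Euclidean distance between two AABBs (0.0 if they overlap)."""
    gap_sq = 0.0
    for i in range(3):
        gap = max(b_min[i] - a_max[i], a_min[i] - b_max[i], 0.0)
        gap_sq += gap * gap
    return math.sqrt(gap_sq)

# A unit box at the origin and a unit box 3 units away along x.
a = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
b = ((4.0, 0.0, 0.0), (5.0, 1.0, 1.0))
print(aabb_overlap(*a, *b))    # False: the boxes are disjoint
print(aabb_distance(*a, *b))   # 3.0: the gap along the x axis
```

In a full system, such tests prune a bounding-volume hierarchy so that exact primitive-level checks run only on the handful of object pairs whose boxes actually come close.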

Prettier Models. Radiosity, a physically based lighting method that simulates diffuse illumination in a model, generates realistic gradients and shadows on model primitives. Photographic and procedural textures convey visually complex patterns, such as brick and wood grain. A sound server generates a 3D audio environment for the user. Mirrors and glass windows display reflections of primitives in models. Superposition rendering allows interactive rendering while accommodating sophisticated reflectance and transmission functions (BRDFs/BTDFs), such as glossy reflections.

Real Application and Model Building. A model of the Brooks House has been used in walkthroughs by the client, architect, and interior designer to evaluate remodeling options.  The model is now used for model validation and future planning.  Models of real or proposed structures worked on by the Walkthrough Project team include: Henderson County (NC) Courthouse, Orange Methodist Church (Chapel Hill, NC), Sitterson Hall (UNC Dept. of Computer Science building), Brooks House I and II, Frank Lloyd Wright's Fallingwater (model courtesy of Cornell University), the torpedo room of a notional submarine (model courtesy of the Electric Boat Division of General Dynamics), the auxiliary machine room of a notional submarine (also courtesy of the Electric Boat Division of General Dynamics), Yuan Ming Yuan garden (courtesy of Xing Xing Computer Graphics Inc), a coal-fired electric power plant, and a double-hull tanker ship. The torpedo room, auxiliary machine room, power plant, and tanker models are used to help determine the value of virtual-environment systems as tools for simulation-based design, which avoids the cost and time required to build and revise physical mock-ups.

Handier Interface. Wide-area optical ceiling tracking allows users to navigate through and interact with a model in a natural fashion. High-resolution and wide field-of-view HMDs help the user better understand and view the models during walkthroughs. Object-to-object constraints and hand-tracked interaction with objects allow the user to manipulate the model naturally. We are investigating how people navigate and perform spatial problem solving in virtual environments. We are also exploring more effective ways to create models while immersed in a virtual environment.

Lessons Learned

We have learned many lessons through our work on the Walkthrough Project. Probably the most important is that in any virtual-environment system (such as our walkthroughs) many of the technical tasks become much more difficult when one uses real models of meaningful complexity, consisting of tens of millions of primitive elements.  Processing a model of this size is an effort comparable to developing and maintaining a large program, and the software engineering techniques necessary for large programs are indeed appropriate, and necessary, for model engineering. The second lesson is that most of our efforts to increase the photorealism and size of the models we can display at interactive rates are approximations to the exact solution of a single problem: to process and render only those primitives that the user can see from the current viewpoint, and to render them only as faithfully as the viewer can detect.



Page maintained by walkweb@cs.unc.edu

Last updated: 2-17-01