Virtual Space Teleconferencing Using a Sea of Cameras

Henry Fuchs, Gary Bishop, Kevin Arthur, Leonard McMillan, Ruzena Bajcsy, Sang Lee, Hany Farid, and Takeo Kanade.

In Proceedings of the First International Symposium on Medical Robotics and Computer Assisted Surgery (Pittsburgh, PA), Sept. 22-24, 1994.

Abstract

A new approach to telepresence is presented in which a multitude of stationary cameras is used to acquire both photometric and depth information. A virtual environment is constructed by displaying the acquired data from the remote site in accordance with the head position and orientation of a local participant. Preliminary results are shown for a depth image of a human subject computed from 11 closely spaced video camera positions. A user wearing a head-mounted display walks around this 3D data, which has been inserted into a 3D model of a simple room. Future systems based on this approach may exhibit more natural and intuitive interaction among participants than current 2D teleconferencing systems.
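The core rendering step the abstract describes is the back-projection of an acquired depth image into a cloud of 3D points that can then be viewed from the local participant's tracked head pose. A minimal sketch of that idea, assuming a standard pinhole camera model with hypothetical intrinsic parameters `fx, fy, cx, cy` (none of these names come from the paper):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into camera-space 3D points.

    Assumes a pinhole camera: pixel (u, v) with depth z maps to
    x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def view_from_head_pose(points, rotation, position):
    """Express world-space points in the viewer's head frame.

    `rotation` is a 3x3 matrix whose columns are the head frame's
    axes in world coordinates; `position` is the tracked head location.
    """
    return (points - position) @ rotation
```

Once the remote scene is a point set (or a mesh derived from it), re-rendering it for each new head position and orientation is ordinary 3D viewing, which is what lets the local participant walk around the reconstructed subject.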