Facetop - Transparent Video Interface

We have been thrilled by the response we've received regarding our Facetop project. The media links below point to several articles that have appeared - some from interviews we have given, and some that are second- or third-hand. Along the way, several misconceptions have arisen that we'd like to take a moment to correct.

•Facetop is not 'just' a transparent video conferencing tool. It is, first and foremost, a user interface driver. While nearly everyone has picked up on the quote "The video shows a ghostly mirror image of the user so that when he points, his video reflection appears to touch objects on the screen," they've missed the second half: "The system tracks fingertip position in the video to allow the user to control the mouse pointer." It doesn't just look like you can manipulate objects on the screen; you actually can. Think of this as a replacement for the mouse. (A simplified sketch of this kind of fingertip-to-cursor tracking appears just after this list.)


•The pictures available online with a single user visible are not the video conferencing mode. They show the single-user mode: the user sitting at that computer, seeing his or her own image. Why? Because the video mirrors the user's own movements, the user almost immediately figures out how far and in what direction to move a hand in order to reach a particular destination on the screen. Previous approaches to hand-gesture and position-based UI control have relied on determining the absolute position of the hand or finger in relation to the screen, and in some cases even the user's eyes and field of vision. Those systems are, in general, complex and expensive, requiring specific spatial calibration constraints. We bypass all of that by using the most advanced spatial registration system we all have at our disposal: our own hand-eye coordination. From a practical standpoint, this means the only requirements on the spatial arrangement of camera, user, and monitor are that the camera see the user and the user see the displayed image. That's all. There is no setup calibration. In the image above, you can see the faint green rectangle around the user's fingertip, indicating the tracking algorithm at work; it is shown for debugging only and is not visible in normal operation. And no, that's not the user's reflection in the screen, as some have suggested - that's the live video overlay.


•The pictures showing two people on the screen at one time are the video conferencing examples. The two people are not sitting side by side in the photo to the right; what you see is two video streams composited together - one from the local user, the other from the remote user. The two users are not interacting as if they were on opposite sides of a pane of glass (as in ClearBoard), but rather as if they were sitting side by side. In this way we reinforce the 'working together' aspect of collaboration, and we also leverage the improved communication that comes from face-to-face interaction. In addition, either user can gesture and point at document content to pinpoint information quickly during the collaborative effort. Instead of indicating a point of interest to the other user by saying "Third paragraph, fourth line, second word," one can simply point with a finger and say "There." Both users see the same composited video and the same document. Optionally, either user can control both cursors on both ends of the conference, essentially tying the two UIs into one, with two users in different locations. (A simplified compositing sketch also appears below.)
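
To make the fingertip-to-cursor idea concrete, here is a minimal sketch in Python using OpenCV and pyautogui. It is an illustration only, not Facetop's actual algorithm or code: it assumes the fingertip is located by tracking a bright green marker in the mirrored camera image (the color range, window name, and the use of pyautogui are all assumptions for this sketch), maps the marker's centroid to screen coordinates to move the pointer, and draws the kind of faint green debug rectangle mentioned above.

import cv2                # OpenCV for capture and image processing
import numpy as np
import pyautogui          # assumed here for cursor control; not part of Facetop

screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)                       # default camera

# Hypothetical HSV range for a green fingertip marker (illustration only).
LOWER = np.array([40, 80, 80])
UPPER = np.array([80, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)                  # mirror, so motion matches the user's own
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)       # isolate the marker color
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        tip = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(tip)
        # Faint green debug rectangle around the tracked fingertip.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
        # Map the fingertip centroid from camera coordinates to screen coordinates.
        fh, fw = frame.shape[:2]
        pyautogui.moveTo((x + w / 2) / fw * screen_w, (y + h / 2) / fh * screen_h)
    cv2.imshow("tracking debug", frame)
    if cv2.waitKey(1) & 0xFF == 27:             # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()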
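
The transparent overlay itself comes down to alpha-blending video with desktop content. The following sketch is again a simplified illustration rather than Facetop's compositing pipeline (Facetop composites over the live desktop, and the remote stream arrives over the network): it mirrors the local camera frame, blends it with a stand-in remote frame, and lays the result semi-transparently over a document image. The file name and blend weights are placeholders.

import cv2

ALPHA = 0.35   # video opacity over the document; a placeholder value

document = cv2.imread("shared_document.png")    # hypothetical stand-in for the desktop
local_cap = cv2.VideoCapture(0)

while True:
    ok, local = local_cap.read()
    if not ok:
        break
    local = cv2.flip(local, 1)                  # mirror the local user
    # Stand-in for the remote stream; a real conference would decode a network feed.
    remote = local.copy()
    h, w = document.shape[:2]
    local = cv2.resize(local, (w, h))
    remote = cv2.resize(remote, (w, h))
    # Blend the two user streams; with each user framed on a different side of
    # their own camera view, the result reads as the two sitting side by side.
    people = cv2.addWeighted(local, 0.5, remote, 0.5, 0)
    # Lay the ghostly video over the shared document.
    composite = cv2.addWeighted(document, 1.0 - ALPHA, people, ALPHA, 0)
    cv2.imshow("facetop-style composite", composite)
    if cv2.waitKey(1) & 0xFF == 27:             # Esc to quit
        break

local_cap.release()
cv2.destroyAllWindows()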
 


Selected Publications

  1. Support for Distributed Pair Programming in the Transparent Video Facetop, XP/Agile Universe 2004

  2. FaceSpace: Endo- and Exo-Spatial Hypertext in the Transparent Video Facetop, ACM Hypertext 2004

  3. First Facetop demo, UIST 2003


Media Coverage

  1. Slashdot.org, July 12, 2004

  2. Wired.com, July 9, 2004

  3. Seattle P-I Buzzworthy, July 9, 2004

  4. ACM Technology Review News, July 9, 2004

  5. ACM Technology Review News, July 7, 2004

  6. UNC Endeavors, Spring 2004

For More Information

We are very pleased to answer any general questions we can regarding this technology. Those interested in commercial or private deployment should contact the Office of Technology Development here at UNC and ask to speak to Lisa Darmo regarding the Facetop technologies. These technologies and methodologies are patent pending, and as such we are unable to discuss technical details beyond those already available in our publications. Until all patent(s) have issued, a confidential disclosure agreement (available through OTD) will be required to discuss proprietary technical details not covered in the above sources.