Interests
- Structure from motion / simultaneous localization and mapping (SLAM)
  (estimation of camera pose (external parameters) and 3D scene geometry from images or video)
- Multi-sensor fusion
  (estimation of camera pose and geometric scene structure using vision and other sensors such as GPS, gyroscopes, and accelerometers)
- Multi-camera systems
  (estimation of the pose of systems of many rigidly coupled cameras using constraints derived for the total system)
Awards
My collaborators and I won the best demonstration award at Computer Vision and Pattern Recognition (CVPR) 2007 for "Real Time 3D Reconstruction of Urban Environments."
September 2008 - Present
I am working on real-time visual simultaneous localization and mapping for Honda's Asimo robot. We have created a system that can map a small, single-floor office building in real time and are working to extend the system to larger environments.
January 2007 - September 2008
I worked on the Video Analysis and Content Extraction project funded by the Disruptive Technology Office. My work on this project focused on robust techniques for structure from motion.
June 2005 - December 2006
I worked for Marc Pollefeys on the Urbanscape project. The goal of this project was to produce a computer vision pipeline that reconstructs 3D models from video at near real-time speed. I developed a Kalman-filter-based system for estimating the geo-location of 3D models by fusing vision with other sensor data, including GPS and INS.
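To illustrate the fusion idea, here is a minimal sketch of a linear Kalman filter that blends a predicted camera position with a GPS position fix. The constant-velocity state model, the noise values, and the measurement layout are assumptions chosen for the example, not the actual Urbanscape filter design.

    import numpy as np

    def kf_predict(x, P, F, Q):
        # Propagate the state estimate and covariance through the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        return x, P

    def kf_update(x, P, z, H, R):
        # Correct the prediction with a measurement z (here, a GPS position fix).
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Assumed constant-velocity model: state = [px, py, pz, vx, vy, vz].
    dt = 0.1
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)
    Q = 0.01 * np.eye(6)                          # process noise (illustrative)
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # GPS observes position only
    R = 4.0 * np.eye(3)                           # GPS noise, ~2 m std (illustrative)

    x, P = np.zeros(6), np.eye(6)
    gps_fix = np.array([10.0, 5.0, 0.5])          # hypothetical geo-referenced position
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, gps_fix, H, R)
    print(x[:3])                                  # fused position estimate

In a full vision/INS system the state would also carry orientation and sensor biases, and visual pose estimates would enter as additional measurements.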
UNC Computer Vision Group (Spring 2007)

Pictured clockwise from top: Philippos Mordohai, Paul Merrell, Seon Joo Kim, Brian Clipp, Sudipta Sinha, Xiaowei Li, Changchang Wu, Li Guan, David Gallup, Marc Pollefeys, Jean-Sebastian Franco, Jan-Michael Frahm
Research Log
- Similarity-transformation-based geo-location of 3D models (see the sketch after this list)
- Kalman-filter-based sparse scene point and camera motion estimator
- Scaled motion estimation of non-overlapping multi-camera systems
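For the first log entry, the following is a minimal sketch of geo-locating a reconstruction with a 7-DoF similarity transform (scale, rotation, translation) fit by Umeyama's closed-form least-squares alignment between model points and geo-referenced positions. The point data are synthetic and the method is a standard stand-in, not necessarily the exact approach used in that work.

    import numpy as np

    def similarity_align(src, dst):
        # Least-squares similarity transform mapping src to dst: dst ~ s * R @ src + t.
        n = src.shape[1]
        mu_s = src.mean(axis=1, keepdims=True)
        mu_d = dst.mean(axis=1, keepdims=True)
        src_c, dst_c = src - mu_s, dst - mu_d
        cov = dst_c @ src_c.T / n                 # cross-covariance
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0                        # guard against reflections
        R = U @ S @ Vt
        s = np.trace(np.diag(D) @ S) / ((src_c ** 2).sum() / n)
        t = mu_d - s * R @ mu_s
        return s, R, t

    # Synthetic example: model-frame points and their (hypothetical) geo-referenced positions.
    model_pts = np.random.rand(3, 10)
    world_pts = 2.0 * model_pts + np.array([[100.0], [200.0], [5.0]])
    s, R, t = similarity_align(model_pts, world_pts)
    geo_located = s * R @ model_pts + t           # reconstruction in geo coordinates

Applying the recovered s, R, and t to every reconstructed point and camera center brings the whole model into the geo-referenced frame.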