I am a graduate student in the Computer Science Department at the University of Central Florida, working with Dr. Hassan Foroosh. I received a Master's degree from the University of North Carolina at Chapel Hill under the supervision of Dr. Jan-Michael Frahm. My research interests lie in Computer Vision, Machine Learning, and HCI; more specifically, my work focuses on human motion analysis and action recognition.
Previously, I worked at the Digital Media Communications R&D Center, Samsung Electronics, for about five years. I received an M.E. degree in 2007 from Korea University under the supervision of Prof. HanSeok Ko, and was a student researcher at the IMRC at the Korea Institute of Science and Technology under the supervision of Dr. Yong-Moo Kwon. I received a B.E. degree in 2005 from Sogang University.
Outside research, I enjoy many extracurricular activities, especially sports: swimming, cycling, running, hiking, soccer, baseball, table tennis, tennis, and skiing. I also take on challenges such as triathlons and marathons, and I love traveling to places I have never been.
Research Assistant at UNC-CH • Aug 2013 - Jul 2014
FINDER is a query-based image retrieval system: given a query image, it returns similar images from a database of millions. The system searches for database images that share a common structure with the query, using a geometric verification step to match the query image against candidate database images. The image matching modules are implemented in C++ and CUDA, and the overall system is implemented in Python.
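The core idea of geometric verification is to accept a candidate image only when many feature matches agree on a single geometric transform. Below is a minimal sketch using OpenCV's ORB features and RANSAC homography fitting; the feature type, thresholds, and `min_inliers` value are illustrative assumptions, not the actual FINDER parameters.

```python
import cv2
import numpy as np

def geometric_verification(query_path, candidate_path, min_inliers=15):
    """Accept a candidate only if many feature matches agree on one
    geometric transform (here, a homography)."""
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return False
    # Nearest-neighbor matching on binary descriptors.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 4:  # a homography needs at least 4 correspondences
        return False
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC keeps only the matches consistent with a single transform.
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
    return H is not None and int(mask.sum()) >= min_inliers
```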
Research Assistant at UNC-CH • Aug 2012 - Jul 2013
Gyroscope sensors in mobile phones measure the orientation of the device for AR/VR applications, but these sensors are typically inaccurate. In our research, a camera is converted into a visual gyroscope by estimating orientation from a captured stream of sky images. We use clouds as the target object and estimate the camera's relative rotation with high precision by computing a homography between captured sky images. The system is implemented in C++ and Android.
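For a distant scene such as clouds, the inter-frame homography is induced almost entirely by camera rotation, so it factors as H ≈ K R K^-1, where K holds the camera intrinsics. A minimal sketch of recovering R, assuming K is known (H would typically come from feature matching and cv2.findHomography as in standard pipelines):

```python
import numpy as np

def relative_rotation(H, K):
    """Recover the relative camera rotation R from a homography H between
    two views of a distant scene, using H ~ K @ R @ inv(K)."""
    R = np.linalg.inv(K) @ H @ K
    # Project onto the nearest rotation matrix; this absorbs noise and
    # the homography's arbitrary scale.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R = -R
    return R
```

The rotation angle between frames then follows from the trace: `np.degrees(np.arccos((np.trace(R) - 1) / 2))`.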
Research Engineer • Feb 2007 - Jun 2012
Users can create their own content with easy, programmable authoring software, and can then enjoy that content on both PCs and mobile devices.
A 3D indoor environment map is generated by building a measurement-based 2D metric map, acquired with a laser range-finder, and combining it with texture acquisition, stitching, and texture mapping onto the corresponding 3D model. Obtaining an accurate 2D map is the core of the algorithm, since the 3D model is generated from it: wall planes are assumed to be orthogonal to the floor plane, and walls are built up from the generated 2D map. Once the 3D indoor map is generated, it can be used to interact with a robot in the same indoor environment. VIDEO
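The wall-building step can be pictured as extruding the 2D map's wall segments vertically. A minimal sketch under the stated orthogonality assumption; the `segments_2d` input and the wall height are hypothetical placeholders for values the real system derives from the laser-based map:

```python
import numpy as np

def extrude_walls(segments_2d, wall_height=2.5):
    """Extrude 2D wall segments from the metric map into vertical 3D quads,
    assuming walls are orthogonal to the floor plane (z = 0)."""
    quads = []
    for (x1, y1), (x2, y2) in segments_2d:
        quads.append(np.array([
            [x1, y1, 0.0],          # bottom edge on the floor
            [x2, y2, 0.0],
            [x2, y2, wall_height],  # top edge at the assumed wall height
            [x1, y1, wall_height],
        ]))
    return quads

# Example: two walls meeting at a corner.
walls = extrude_walls([((0, 0), (4, 0)), ((4, 0), (4, 3))])
```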
A gaze tracking system was developed with a camera facing the user and two IR-LED arrays. The corneal reflection produced by the IR-LEDs and the pupil center are extracted using image processing, and the gaze direction is determined from the geometric relation between the reflection and the pupil center. Each user calibrates the system once before use for higher accuracy. The system runs at 10 Hz on XGA-resolution images and achieves high accuracy (1.26 and 0.97 degrees along the x-axis and y-axis, respectively). Applications of the system include a gaze-based mouse for disabled people and a gaze analysis tool.
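A common way to implement such a per-user calibration is to fit a low-order polynomial from pupil-to-reflection vectors to screen coordinates using a small grid of calibration targets. This sketches that standard formulation, not necessarily the exact model the system used:

```python
import numpy as np

def fit_gaze_mapping(pupil_glint_vecs, screen_points):
    """Fit a second-order polynomial mapping from pupil-glint vectors
    (dx, dy) to screen coordinates, from calibration samples
    (e.g., a 9-point grid)."""
    A = np.array([[1, dx, dy, dx * dy, dx * dx, dy * dy]
                  for dx, dy in pupil_glint_vecs])
    # Least-squares fit; one coefficient column per screen axis.
    coeffs, _, _, _ = np.linalg.lstsq(A, np.asarray(screen_points), rcond=None)
    return coeffs

def estimate_gaze(coeffs, dx, dy):
    """Map a new pupil-glint vector to an on-screen gaze point."""
    return np.array([1, dx, dy, dx * dy, dx * dx, dy * dy]) @ coeffs
```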
This project was started for a special exhibition, "10 years later - Robots are coming", for which we developed a vision-based interaction program. The program detects the upper bodies of people in captured video and overlays shapes such as hearts on the detected bodies; users can then pass the rendered shapes to other people using hand gestures. The goal of the project was to let people interact with one another through gestures. We believe that interacting with other people through a robot or a computer will be a key feature in the near future, and that human-robot interaction will become commonplace; this project is one example of it.
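The detection-and-overlay loop can be sketched with OpenCV's stock upper-body Haar cascade, as below. This is an illustrative stand-in; the exhibition system used its own detector and graphics rendering:

```python
import cv2

# Minimal sketch: Haar-cascade upper-body detection on a webcam stream.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_upperbody.xml")
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bodies = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in bodies:
        # Placeholder overlay where the heart graphic would be rendered.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("overlay", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```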
A Smart Room is a small space consisting of 6 projectors, wall screens, 8 cameras, an 8.1-channel speaker system, and a Smart Floor. We named the floor the Smart Floor because people's positions can be detected by pressure sensors underneath it. Building the Smart Floor required extensive calibration of the pressure sensors so that different weights and correct positions could be detected.
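One simple way to turn a calibrated pressure-sensor grid into a position estimate is a pressure-weighted centroid over activated cells. A minimal sketch; the threshold and cell size here are illustrative assumptions, not the floor's actual calibration values:

```python
import numpy as np

def person_position(pressure, threshold=0.2, cell_size=0.3):
    """Estimate a person's floor position as the pressure-weighted centroid
    of activated cells in a calibrated sensor grid (meters per cell)."""
    p = np.where(pressure > threshold, pressure, 0.0)
    total = p.sum()
    if total == 0:
        return None  # nobody on the floor
    rows, cols = np.indices(p.shape)
    # Weighted centroid in grid cells, converted to meters.
    cy = (rows * p).sum() / total * cell_size
    cx = (cols * p).sum() / total * cell_size
    return cx, cy
```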
This system automatically calculates the height of people or objects using single-view geometry. When an image is taken, the height of one object in the scene must be known as a reference length in order to compute the heights of other objects of interest. The key technical challenge is accurately finding the vanishing points for the three axes; the end points of an object of interest in the image are selected manually.
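As a simplified illustration of the measurement step, assume negligible camera tilt and roll, so vertical scene lines stay vertical in the image and only the horizon row is needed (the full system instead estimates vanishing points for all three axes). Heights then follow from similar triangles against the reference object:

```python
def height_from_reference(y_top, y_base, y_top_ref, y_base_ref, y_horizon,
                          ref_height):
    """Estimate an object's height from image y-coordinates (pixels,
    downward positive), given a reference object of known height on the
    same ground plane and the image row of the horizon line."""
    # Each ratio is (object height) / (camera height), by similar triangles.
    r_obj = (y_base - y_top) / (y_base - y_horizon)
    r_ref = (y_base_ref - y_top_ref) / (y_base_ref - y_horizon)
    return ref_height * r_obj / r_ref

# Example: a 1.8 m reference person; the unknown object measures ~2.47 m.
print(height_from_reference(200, 600, 300, 580, 100, 1.8))
```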
This robot, McStocker, navigates autonomously without colliding with surrounding objects. It runs with 7 ultrasonic sensors (6 covering three sides and 1 for the remaining side) that measure the distance between the robot and surrounding obstacles. The sensor readings, combined with a driving mode decision, determine the direction of motion, and the system drives 2 stepping motors according to driving modes that help the robot avoid nearby obstacles. Three driving methods are supported: autonomous cruise, point-to-point navigation, and object following.
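The flavor of the sensor-driven decision can be sketched as a reactive steering rule over a few representative range readings. The sensor grouping, safety threshold, and command names below are illustrative assumptions, not McStocker's actual control logic:

```python
# Minimal sketch of a reactive steering rule from ultrasonic ranges (meters).
SAFE_DIST = 0.4

def steer(front, left, right):
    """Pick a motion command from three representative range readings."""
    if front > SAFE_DIST:
        return "forward"
    # Front blocked: turn toward the more open side.
    if left > right and left > SAFE_DIST:
        return "turn_left"
    if right > SAFE_DIST:
        return "turn_right"
    return "reverse"

print(steer(front=0.3, left=0.9, right=0.5))  # -> "turn_left"
```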
During my time at UNC-CH I studied diverse topics in Computer Science as well as Mathematics and Statistics. Below are the courses I took there.