This page contains information on some fun and interesting projects. I have made sure there's at least a project report for each; some also have a short PPT for a quick overview. Over the next few weeks, I will try my best to dig up the code and make it available.

Studying Immersion in VR

The goal of this short assignment was to demonstrate the effectiveness of an immersive experience by having it evoke a natural fear/stress response to heights.


I built a scene in Unreal where the user finds themselves sitting in a chair hovering above a lake in a mountainous region. The user controls their vertical movement with the up/down keys, and the chair accelerates at a constant rate (9.8 m/s^2) in either direction. I conducted a few experiments using the Oculus DK2, and the scene seemed to evoke a fearful response from most participants when the chair plummets from a great height, falling at the rate of gravity. Rising, however, did not evoke a significant response. In future iterations, I want to investigate whether slow ascent evokes greater fear and, consequently, a greater sense of presence. Adding a staircase or some other object that conveys the height being gained might also increase the sense of presence.
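As a rough illustration of the chair's motion model, here is a minimal Python sketch (not the actual Unreal implementation; in particular, the hold-in-place behaviour when no key is pressed is my assumption):

```python
# Sketch of the chair's vertical motion: constant acceleration of
# 9.8 m/s^2 up or down, integrated with a simple per-frame Euler step.
ACCEL = 9.8  # m/s^2, the magnitude used in the Unreal scene

def update_chair(height, velocity, direction, dt):
    """direction: +1 (up key), -1 (down key), 0 (no input).
    Returns the new (height, velocity)."""
    if direction == 0:
        return height, 0.0  # assumption: the chair simply hovers when no key is pressed
    velocity += direction * ACCEL * dt
    height += velocity * dt
    return max(height, 0.0), velocity  # clamp at the lake surface
```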

Aurally Guided Navigation in VR

In our increasingly information-intensive world, attention is a limited and precious commodity. In many cases, the introduction of spatial audio cues can improve overall response times, user experience, and efficiency in tasks where the visual field is noisy. Studies have explored the use of audio cues with real sound sources in limited environments, but the advent of virtual and augmented reality has made exploring virtual spatial audio cues in real and virtual environments both feasible and attractive.


For this project, we integrated a local collision avoidance library (RVO2), a spatial sound library (GSound), and our custom navigation planner into the Unreal Engine. To demonstrate the efficacy of the system, users were asked to follow a "leader" agent through a simulated environment. The leader was presented via visual cues, aural cues, or both, to help the subject navigate the environment. We tested the system with several friends and afterwards gave them a short questionnaire on the efficacy of the system. Users wore an Oculus Rift with a Logitech G930 headset and could use either a keyboard or an Xbox 360 controller. Our limited experiment suggests that we succeeded in producing a virtual sound system capable of replicating experimental results consistent with prior work.
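The heart of an aural cue is simply a virtual sound source attached to the leader and spatialized relative to the listener. As a purely illustrative sketch (this is not the Unreal/GSound integration itself), the direction fed to such a spatializer could be computed as:

```python
import math

def leader_azimuth(listener_pos, listener_yaw, leader_pos):
    """Angle (radians) of the leader relative to the listener's facing
    direction, wrapped to (-pi, pi]. Positions are 2D (x, y) tuples and
    listener_yaw is a heading in radians -- hypothetical conventions."""
    dx = leader_pos[0] - listener_pos[0]
    dy = leader_pos[1] - listener_pos[1]
    bearing = math.atan2(dy, dx)                      # world-frame direction to the leader
    rel = bearing - listener_yaw                      # relative to where the listener faces
    return math.atan2(math.sin(rel), math.cos(rel))   # wrap into (-pi, pi]
```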

CryptoShooter

This project serves as a proof of concept for visual cryptography in virtual applications. Essentially, each character is assigned a visual share. The superimposition of two visual shares can decode messages and enable clandestine information exchange between characters. Such an application would guarantee fool-proof security and could have a multitude of applications in the age of augmented reality.
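For a sense of the underlying idea, here is a hedged sketch of a textbook-style (2, 2) visual cryptography scheme in Python (not the game's actual implementation). A single share is indistinguishable from random noise; only stacking both shares produces the contrast that exposes the secret.

```python
import random

def make_shares(secret):
    """Naor-Shamir style (2, 2) scheme for a binary image (sketch).
    secret: 2D list of 0 (white) / 1 (black). Each pixel expands to two
    subpixels per share; physically stacking the shares reveals the secret."""
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for px in row:
            pattern = random.choice([(1, 0), (0, 1)])       # random subpixel pair
            r1.extend(pattern)
            if px == 0:
                r2.extend(pattern)                           # white: identical pair -> half black when stacked
            else:
                r2.extend((1 - pattern[0], 1 - pattern[1]))  # black: complementary pair -> fully black when stacked
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(a, b):
    """Simulate superimposition: a subpixel is black if it is black in either share."""
    return [[x | y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```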


To illustrate this application, we developed a first-person game with the Unity game engine. The objective is to 'tag' secondary agents identified as 'target' agents. Identifying 'target' characters is done using the cryptographic technique above, which requires the player to align an image token with the corresponding image token of the secondary character. All secondary characters in the game are programmed to sense and evade the main character. Points are awarded for every target agent tagged and deducted for every non-target agent tagged.
Check out the YouTube video for a quick peek!

Physically Based Modeling

I recently took a course on "Physically Based Modeling and Simulation" at UNC. Aside from studying numerical techniques, I also got the chance to explore the Unity 4 game engine. Here are some really cool physically based toys to analyze/play with:

Projectile Simulation

Here's a 3D artillery simulator built in Unity. The user can change the mass of the projectile, the amount of powder, and the azimuth and elevation of the gun barrel. The user can also select one of two numerical integration methods: Euler or 4th-order Runge-Kutta. Note: Due to an issue with Unity, the web player version doesn't accept multiple inputs, requiring the user to refresh the page for each new input. Try one of the executables instead.
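A minimal sketch of the two integrators applied to projectile motion is below (illustrative only: the quadratic drag coefficient and the mapping from azimuth/elevation to an initial velocity are my assumptions, not the simulator's actual model):

```python
import math

G = 9.81       # m/s^2
DRAG_K = 0.01  # assumed quadratic drag coefficient, purely illustrative

def deriv(state, mass):
    """state = (x, y, z, vx, vy, vz); time derivative under gravity plus quadratic drag."""
    x, y, z, vx, vy, vz = state
    speed = math.sqrt(vx * vx + vy * vy + vz * vz)
    return (vx, vy, vz,
            -DRAG_K * speed * vx / mass,
            -DRAG_K * speed * vy / mass,
            -G - DRAG_K * speed * vz / mass)

def euler_step(state, mass, dt):
    d = deriv(state, mass)
    return tuple(s + dt * ds for s, ds in zip(state, d))

def rk4_step(state, mass, dt):
    k1 = deriv(state, mass)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)), mass)
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)), mass)
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)), mass)
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def launch_state(muzzle_speed, azimuth_deg, elevation_deg):
    """Initial state from the barrel orientation; the muzzle speed would come from the powder amount."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (0.0, 0.0, 0.0,
            muzzle_speed * math.cos(el) * math.cos(az),
            muzzle_speed * math.cos(el) * math.sin(az),
            muzzle_speed * math.sin(el))
```

With the same time step, the RK4 trajectory stays much closer to the true solution than Euler, which is exactly the comparison the simulator lets you make.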

Spring Simulation

The following application simulates a vertical spring with a user-defined mass and spring constant, initially unstretched, under the effect of spring and gravitational forces. As before, the user can select one of two methods: Euler or 4th-order Runge-Kutta.
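A minimal sketch of the spring's equation of motion and an RK4 step (illustrative only; the parameter values are arbitrary):

```python
def spring_deriv(state, m, k, g=9.81):
    """state = (x, v): displacement from the unstretched length (downwards positive) and velocity."""
    x, v = state
    return (v, g - (k / m) * x)   # gravity pulls down, the spring pulls back toward x = 0

def rk4_step(state, m, k, dt):
    k1 = spring_deriv(state, m, k)
    k2 = spring_deriv((state[0] + 0.5 * dt * k1[0], state[1] + 0.5 * dt * k1[1]), m, k)
    k3 = spring_deriv((state[0] + 0.5 * dt * k2[0], state[1] + 0.5 * dt * k2[1]), m, k)
    k4 = spring_deriv((state[0] + dt * k3[0], state[1] + dt * k3[1]), m, k)
    return (state[0] + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Starting from rest at the unstretched length, the mass should oscillate about
# the equilibrium x = m*g/k with angular frequency sqrt(k/m) -- a handy sanity check.
state = (0.0, 0.0)
for _ in range(1000):
    state = rk4_step(state, m=1.0, k=20.0, dt=0.01)
```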

Bead on a Circle

The following application simulates the motion of a particle constrained to a unit circle. It accounts for gravity, damping, and an optional user-applied force. All parameters can be changed at runtime. Note: The web player doesn't visualize the circle; try one of the executables instead.
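One way to sketch the dynamics is to use the bead's angle on the circle as the generalized coordinate, so the constraint is satisfied by construction (the course implementation may instead use explicit constraint forces; the unit mass, unit radius, and semi-implicit Euler step here are assumptions for illustration):

```python
import math

def bead_deriv(theta, omega, damping, tangential_force, g=9.81):
    """Bead of unit mass on a unit circle in a vertical plane, with theta measured
    from the bottom of the circle. Only the tangential components of gravity,
    damping, and the optional user force affect the motion."""
    alpha = -g * math.sin(theta) - damping * omega + tangential_force
    return omega, alpha

def step(theta, omega, damping, force, dt):
    # semi-implicit Euler: update the angular velocity first, then the angle
    _, alpha = bead_deriv(theta, omega, damping, force)
    omega += alpha * dt
    theta += omega * dt
    return theta, omega
```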

Learning Latent Factor Models of Human Travel

A 'good' travel model could yield new scientific insights into human behavior, aside from being useful in numerous applications. For example, travel models can help in predicting the spread of disease; surveying tourism, traffic, and special-event mobility for urban planning; geolocating images with computer vision; interpreting activity from movements; and recommending travel. The objective of our project was to implement a basic latent travel model. Essentially, we model travel probabilities as functions of spatially varying latent properties of locations and of travel distance. The latent factors represent interpretable properties: travel distance cost and desirability of locations. These factors are combined in a multiplicative model, which lends itself easily to incorporating additional latent factors and sources of information.
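As a rough illustration of such a multiplicative form (the exact parameterization in the project may differ; the exponential distance-cost term and Euclidean distance are assumptions here):

```python
import math

def travel_probs(origin, locations, desirability, beta):
    """P(j | i) proportional to desirability[j] * exp(-beta * distance(i, j)).
    `desirability` and `beta` play the roles of the latent factors:
    location attractiveness and the cost of travel distance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    scores = [desirability[j] * math.exp(-beta * dist(origin, loc))
              for j, loc in enumerate(locations)]
    total = sum(scores)
    return [s / total for s in scores]
```

Because the factors multiply, incorporating another source of information is as simple as multiplying in one more non-negative score before normalizing.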

Food Detection in Images

Estimating geographic information from an image is an excellent, difficult, high-level computer vision problem with applicability in a wide array of disciplines. Intuitively, one can imagine that the socioeconomic status of an individual would play a key role in determining that individual's travel propensity. However, no such dataset is readily available, so determining an individual's socioeconomic status is a non-trivial task. Most recent works in this area have used the publicly available Flickr.com image streams of individuals to build their travel models. In other words, our task is reduced to learning the socioeconomic status of an individual from his or her uploaded photographs. One factor that may be used to determine socioeconomic status is the type of restaurants a person frequents: an individual who regularly visits expensive restaurants is likely to be in a higher socioeconomic bracket than one who eats at fast food joints.

Graph based Image Segmentation

The goal of this project was to implement and thoroughly analyze the image segmentation algorithm proposed by Felzenszwalb et al. (2004). The method defines a measure of the evidence for a boundary between two regions using a graph-based representation of the image, and builds an efficient segmentation algorithm on top of it. Although the algorithm is greedy, it respects some global properties of the image. Notably, it runs in nearly linear time and adapts its behavior to regions of high and low variability; in particular, we demonstrate that it ignores details in high-variability regions.
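A condensed, unoptimized sketch of the core merging rule is below (edge weights would come from pixel intensity or colour differences; the published algorithm also pre-smooths the image and enforces a minimum component size, both omitted here):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n   # largest edge weight merged into each component so far

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b, w):
        a, b = self.find(a), self.find(b)
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.internal[a] = max(self.internal[a], self.internal[b], w)

def segment(num_pixels, edges, k):
    """edges: list of (weight, u, v); k biases the result toward larger components."""
    ds = DisjointSet(num_pixels)
    for w, u, v in sorted(edges):
        a, b = ds.find(u), ds.find(v)
        if a == b:
            continue
        # merge only if this edge is no stronger than the internal variation of
        # either component plus its size-dependent threshold k / |C|
        if w <= min(ds.internal[a] + k / ds.size[a],
                    ds.internal[b] + k / ds.size[b]):
            ds.union(a, b, w)
    return ds
```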

Real Time Object Recognition

In 2012, I spent a wonderful summer interning with the Neural Information Processing Group at Technische Universität Berlin, under the guidance of Dr. Johannes Mohr and Prof. Klaus Obermayer. I worked on developing a recognition system that is largely scale-, illumination-, and orientation-invariant and can be used on any object regardless of its shape or size. The system can also recognize objects in a cluttered scene in near real time.

Content based Image Retrieval & Browsing

In 2011, I interned at the Defence Research & Development Organization, India, with the Defence Terrain Research Lab. During my time there I developed CTD, a unique feature descriptor for color image retrieval and browsing. A key goal of the project was to enable satellite image browsing, which can be used to detect military installations.

Global & Local Latency Analysis

Our goal was to study and analyze the patterns observed during IP packet transfers between various locations around the world. Packets take different routes to reach a given destination, and we observed the variation in their latencies and checked whether traces show different behavioral patterns on weekdays versus weekends. We also studied how latency depends on the distance between source and destination, and how it varies across countries. Additionally, we observed occasional transatlantic hops in some 'local' pings, and we reported the observation and its impact on the network.

Hand Gesture Recognition

During my final year of undergraduate studies, I worked on American Sign Language recognition using the Microsoft Kinect camera. I also designed and implemented GUI applications, such as a paint program, controlled by hand gestures.

Code

The following application can be used to generate a random roadmap for a given scene. Installation and compilation instructions for Win32, along with further details on usage, can be found in the Readme.
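Assuming "random roadmap" refers to a probabilistic-roadmap-style sampler, here is a minimal sketch (the 2D bounds, connection radius, and collision-check callbacks are placeholders; the actual application may sample and connect nodes differently):

```python
import math
import random

def build_roadmap(num_nodes, connect_radius, bounds, in_free_space, edge_is_clear):
    """Sample collision-free nodes and connect nearby pairs whose joining segment
    is clear. `in_free_space(p)` and `edge_is_clear(p, q)` are assumed to be
    provided by the scene's collision checker."""
    (xmin, xmax), (ymin, ymax) = bounds
    nodes = []
    while len(nodes) < num_nodes:
        p = (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
        if in_free_space(p):
            nodes.append(p)
    edges = []
    for i, p in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            q = nodes[j]
            if math.dist(p, q) <= connect_radius and edge_is_clear(p, q):
                edges.append((i, j))
    return nodes, edges
```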