Assistant Research Professor
University of Maryland College Park, Computer Science/UMIACS
Dr. Aniket Bera is an Assistant Research Professor in the Department of Computer Science and the University of Maryland Institute for Advanced Computer Studies (UMIACS). Prior to this, he was a Research Assistant Professor at the University of North Carolina at Chapel Hill. He received his Ph.D. in 2017 from the University of North Carolina at Chapel Hill (where he was advised by Dinesh Manocha).
His core research interests are in Computer Graphics, AI, Social Robotics, Visual Crowd Tracking, Data-Driven Crowd Simulation, and Cognitive Modeling (knowledge, reasoning, and planning for intelligent characters). He works with the Geometric Algorithms for Modeling, Motion, and Animation (GAMMA) group.
His current research focuses on the social perception of intelligent agents and robots. His research involves novel combinations of methods and collaborations across computer graphics, physically-based simulation, statistical analysis, and machine learning to develop real-time computational models that classify such social behaviors and validate their performance. Dr. Bera has previously worked in many research labs, including Disney Research, Intel, and the Centre for Development of Advanced Computing (Govt. of India).
EVA: Modeling Perceived Emotions of Virtual Agents using Expressive Features of Gait and Gaze
ACM SAP 2019 Conference Papers Best Poster Award
Tanmay Randhavane, Aniket Bera, Kyra Kapsaskis, Kurt Gray, Dinesh Manocha
DensePeds: Pedestrian Tracking in Dense Crowds Using FRVO and Sparse Features
IROS 2019 Conference Papers
Rohan Chandra, Uttaran Bhattacharya, Aniket Bera, Dinesh Manocha
FVA: Modeling Perceived Friendliness of Virtual Agents Using Movement Characteristics
TVCG / ISMAR 2019 Journal Papers / Conference Papers
Tanmay Randhavane, Aniket Bera, Kyra Kapsaskis, Kurt Gray, Dinesh Manocha
TraPHic: Trajectory Prediction in Dense and Heterogeneous Traffic Using Weighted Interactions
CVPR 2019 Conference Papers
Rohan Chandra, Uttaran Bhattacharya, Aniket Bera, Dinesh Manocha
LCrowdV: Generating labeled videos for pedestrian detector training and crowd behavior learning
Neurocomputing 2019 Journal Papers
Ernest Cheung, Tsan Kwong Wong, Aniket Bera, Dinesh Manocha
Pedestrian Dominance Modeling for Socially-Aware Robot Navigation
ICRA 2019 Conference Papers
Tanmay Randhavane, Aniket Bera, Emily Kubin, Austin Wang, Kurt Gray, Dinesh Manocha
TrackNPred: A Software Framework for End-to-End Trajectory Prediction
ACM CSCS 2019 Conference Papers
Rohan Chandra, Uttaran Bhattacharya, Aniket Bera, Dinesh Manocha
University of Maryland College Park, Computer Science/UMIACS
University of North Carolina Chapel Hill, Computer Science
University of North Carolina Chapel Hill, Computer Science
University of North Carolina Chapel Hill, Computer Science
Disney Research LA
Intel Corporation / Intel Labs
C-DAC (India)
Ph.D. in Computer Science
University of North Carolina at Chapel Hill
M.S. in Computer Science
University of North Carolina at Chapel Hill
M.B.A. in Information Systems
Jaypee Business School
B.Tech. in Computer Science and Engineering
Jaypee Institute of Information Technology
This publication list may be outdated. Please refer to Google Scholar for an updated list: https://scholar.google.com/citations?user=q3UdHk4AAAAJ&hl=en
We present RobustTP, an end-to-end algorithm for predicting future trajectories of road-agents in dense traffic with noisy sensor-input trajectories obtained from RGB cameras (either static or moving) through a tracking algorithm. In this case, we consider noise as the deviation from the ground truth trajectory. The amount of noise depends on the accuracy of the tracking algorithm. Our approach is designed for dense heterogeneous traffic, where the road-agents may correspond to a mixture of buses, cars, scooters, bicycles, or pedestrians. RobustTP first computes trajectories using a combination of a non-linear motion model and a deep learning-based instance segmentation algorithm. Next, these noisy trajectories are used to train an LSTM-CNN neural network architecture that models the interactions between road-agents in dense and heterogeneous traffic. Our trajectory prediction algorithm outperforms state-of-the-art methods for end-to-end trajectory prediction using sensor inputs. We achieve an improvement of up to 18% in average displacement error and an improvement of up to 35.5% in final displacement error at the end of the prediction window (5 seconds) over the next best method. All experiments were set up on an NVIDIA Titan Xp GPU. Additionally, we release a software framework, TrackNPred. The framework consists of implementations of state-of-the-art tracking and trajectory prediction methods and tools to benchmark and evaluate them on real-world dense traffic datasets.
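The two reported metrics, average and final displacement error, can be made concrete with a short sketch; the function below is illustrative and is not taken from the released TrackNPred code:

```python
import numpy as np

def displacement_errors(pred, gt):
    """Compute average and final displacement errors (ADE / FDE)
    between a predicted and a ground-truth trajectory.

    pred, gt: arrays of shape (T, 2) -- (x, y) positions per timestep.
    """
    dists = np.linalg.norm(pred - gt, axis=1)  # per-timestep Euclidean error
    ade = dists.mean()   # average displacement error over the window
    fde = dists[-1]      # final displacement error at the last timestep
    return ade, fde

# Toy example: straight-line ground truth vs. a prediction offset by 0.1 m in y
gt = np.stack([np.arange(5, dtype=float), np.zeros(5)], axis=1)
pred = gt + np.array([0.0, 0.1])
ade, fde = displacement_errors(pred, gt)
```

ADE rewards accuracy over the whole prediction window, while FDE isolates the error at the end of the horizon (5 seconds in the paper), which is why the two improvements (18% vs. 35.5%) are reported separately.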
We present a novel, real-time algorithm, EVA, for generating virtual agents with various emotions. Our approach is based on using non-verbal movement cues such as gaze and gait to convey emotions corresponding to happy, sad, angry, or neutral. Our studies suggest that the use of EVA and gazing behavior can considerably increase the sense of presence in scenarios with multiple virtual agents. Our results also indicate that both gait and gazing features contribute to the perceptions of emotions in virtual agents.
We present a new approach to improve the friendliness and warmth of a virtual agent in an AR environment by generating appropriate movement characteristics. Our algorithm is based on a novel data-driven friendliness model that is computed using a user study and psychological characteristics. We investigated user perception in an AR setting and observed that an FVA yields a statistically significant improvement in perceived friendliness and social presence compared to an agent without the friendliness modeling.
We present a pedestrian tracking algorithm, DensePeds, that tracks individuals in highly dense crowds (greater than 2 pedestrians per square meter). Our approach is designed for videos captured from front-facing or elevated cameras. We present a new motion model called Front-RVO (FRVO) for predicting pedestrian movements in dense situations using collision avoidance constraints and combine it with state-of-the-art Mask R-CNN to compute sparse feature vectors that reduce the loss of pedestrian tracks (false negatives). We evaluate DensePeds on the standard MOT benchmarks as well as a new dense crowd dataset. In practice, our approach is 4.5 times faster than prior tracking algorithms on the MOT benchmark and we are state-of-the-art in dense crowd videos by over 2.6% on the absolute scale on average.
We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or training models for crowded scene understanding. Our overall approach is composed of two components: a procedural simulation framework to generate crowd movements and behaviors, and a procedural rendering framework to generate different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior (personality), flow, lighting conditions, viewpoint, type of noise, etc. Furthermore, we can increase the realism by combining synthetically-generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by augmenting a real dataset with it and improving the accuracy in pedestrian detection and crowd classification. Furthermore, we evaluate the impact of removing the variety in different LCrowdV parameters to show the importance of the diversity of data generated from our framework. LCrowdV has been made available as an online resource.
We present a Pedestrian Dominance Model (PDM) to identify the dominance characteristics of pedestrians for robot navigation. Through a perception study on a simulated dataset of pedestrians, PDM models the perceived dominance levels of pedestrians with varying motion behaviors corresponding to trajectory, speed, and personal space. At runtime, we use PDM to identify the dominance levels of pedestrians to facilitate socially-aware navigation for the robots. PDM can predict dominance levels from trajectories with ~85% accuracy. Prior studies in the psychology literature indicate that, when interacting with humans, people are more comfortable around those who exhibit complementary movement behaviors. Our algorithm leverages this by enabling the robots to exhibit complementary responses to pedestrian dominance. We also present an application of PDM for generating dominance-based collision-avoidance behaviors in the navigation of autonomous vehicles among pedestrians. We demonstrate the benefits of our algorithm for robots navigating among tens of pedestrians in simulated environments.
We present a new algorithm for predicting the near-term trajectories of road-agents in dense traffic videos. Our approach is designed for heterogeneous traffic, where the road-agents may correspond to buses, cars, scooters, bicycles, or pedestrians. We model the interactions between different road-agents using a novel LSTM-CNN hybrid network for trajectory prediction. In particular, we take into account heterogeneous interactions that implicitly account for the varying shapes, dynamics, and behaviors of different road-agents. In addition, we model horizon-based interactions, which are used to implicitly model the driving behavior of each road-agent. We evaluate the performance of our prediction algorithm, TraPHic, on standard datasets and also introduce a new dense, heterogeneous traffic dataset corresponding to urban Asian videos and agent trajectories. We outperform state-of-the-art methods on dense traffic datasets by 30%.
We present a real-time, data-driven algorithm to enhance the social-invisibility of robots within crowds. Our approach is based on prior psychological research, which reveals that people notice and–importantly–react negatively to groups of social actors when they have high entitativity, moving in a tight group with similar appearances and trajectories. In order to evaluate that behavior, we performed a user study to develop navigational algorithms that minimize entitativity. This study establishes mapping between emotional reactions and multi-robot trajectories and appearances, and further generalizes the finding across various environmental conditions. We demonstrate the applicability of our entitativity modeling for trajectory computation for active surveillance and dynamic intervention in simulated robot-human interaction scenarios. Our approach empirically shows that various levels of entitative robots can be used to both avoid and influence pedestrians while not eliciting strong emotional reactions, giving multi-robot systems socially-invisibility.
We present a data-driven algorithm to model and predict the socio-emotional impact of groups on observers. Psychological research finds that highly entitative (i.e., cohesive and uniform) groups induce threat and unease in observers. Our algorithm models realistic trajectory-level behaviors to classify and map the motion-based entitativity of crowds. This mapping is based on a statistical scheme that dynamically learns pedestrian behavior and computes the resultant entitativity-induced emotion through group motion characteristics. We also present a novel interactive multi-agent simulation algorithm to model entitative groups and conduct a VR user study to validate the socio-emotional predictive power of our algorithm. We further show that model-generated high-entitativity groups do induce more negative emotions than low-entitativity groups.
We present a novel approach to automatically identify driver behaviors from vehicle trajectories and use them for safe navigation of autonomous vehicles. We propose a novel set of features that can be easily extracted from car trajectories. We derive a data-driven mapping between these features and six driver behaviors using an elaborate web-based user study. We also compute a summarized score indicating a level of awareness that is needed while driving next to other vehicles. We also incorporate our algorithm into a vehicle navigation simulation system and demonstrate its benefits in terms of safer realtime navigation, while driving next to aggressive or dangerous drivers.
We present a real-time, data-driven algorithm to enhance the social-invisibility of autonomous vehicles within crowds. Our approach is based on prior psychological research, which reveals that people notice and–importantly–react negatively to groups of social actors when they have high entitativity, moving in a tight group with similar appearances and trajectories. In order to evaluate that behavior, we performed a user study to develop navigational algorithms that minimize entitativity. This study establishes mapping between emotional reactions and multi-robot trajectories and appearances, and further generalizes the finding across various environmental conditions. We demonstrate the applicability of our entitativity modeling for trajectory computation for active surveillance and dynamic intervention in simulated robot-human interaction scenarios. Our approach empirically shows that various levels of entitative robots can be used to both avoid and influence pedestrians while not eliciting strong emotional reactions, giving multi-robot systems socially-invisibility.
This paper presents a planning system for autonomous driving among many pedestrians. A key ingredient of our approach is PORCA, a pedestrian motion prediction model that accounts for both a pedestrian’s global navigation intention and local interactions with the vehicle and other pedestrians. Unfortunately, the autonomous vehicle does not know the pedestrians’ intentions a priori and requires a planning algorithm that hedges against the uncertainty in pedestrian intentions. Our planning system combines a POMDP algorithm with the pedestrian motion model and runs in real time. Experiments show that it enables a robot scooter to drive safely, efficiently, and smoothly in a crowd with a density of nearly one person per square meter.
We present a novel interactive multi-agent simulation algorithm to model pedestrian movement dynamics. We use statistical techniques to compute the movement patterns and motion dynamics from 2D trajectories extracted from crowd videos. Our formulation extracts the dynamic behavior features of real-world agents and uses them to learn movement characteristics on the fly. The learned behaviors are used to generate plausible trajectories of virtual agents as well as for long-term pedestrian trajectory prediction. Our approach can be integrated with any trajectory extraction method, including manual tracking, sensors, and online tracking methods. We highlight the benefits of our approach on many indoor and outdoor scenarios with noisy, sparsely sampled trajectories, in terms of trajectory prediction and data-driven pedestrian simulation.
We present a new method for training pedestrian detectors on an unannotated set of images. We produce a mixed reality dataset that is composed of real-world background images and synthetically generated static human-agents. Our approach is general, robust, and makes few assumptions about the unannotated dataset. We automatically extract from the dataset: i) the vanishing point to calibrate the virtual camera, and ii) the pedestrians' scales to generate a Spawn Probability Map, which is a novel concept that guides our algorithm to place the pedestrians at appropriate locations. After putting synthetic human-agents in the unannotated images, we use these augmented images to train a Pedestrian Detector, with the annotations generated along with the synthetic agents. We conducted our experiments using Faster R-CNN by comparing the detection results on the unannotated dataset performed by the detector trained using our approach and detectors trained with other manually labeled datasets. We showed that our approach improves the average precision by 5-13% over these detectors.
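The Spawn Probability Map idea from the abstract above can be sketched as a simple 2D histogram over observed pedestrian foot positions; the names, grid resolution, and smoothing constant below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def spawn_probability_map(positions, grid_shape):
    """Toy sketch of a Spawn Probability Map: histogram observed
    pedestrian foot positions into a grid and normalize it into a
    distribution over cells where synthetic agents may be placed."""
    grid = np.zeros(grid_shape)
    for x, y in positions:
        grid[int(y), int(x)] += 1
    grid += 1e-6                      # smooth so unseen cells stay possible
    return grid / grid.sum()

def sample_spawn_cells(prob_map, n):
    """Draw n grid cells (row, col) according to the spawn probabilities."""
    flat = rng.choice(prob_map.size, size=n, p=prob_map.ravel())
    return np.stack(np.unravel_index(flat, prob_map.shape), axis=1)

# Toy example: every observed pedestrian stood in one corner of a 4x4 grid
positions = [(0, 0)] * 10
prob_map = spawn_probability_map(positions, (4, 4))
cells = sample_spawn_cells(prob_map, 5)
```

Sampling placements from where real pedestrians were actually observed (rather than uniformly) is what keeps the synthetic agents at plausible scales and locations in the augmented images.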
The problems of video-based pedestrian detection and path prediction have received considerable attention in robotics, traffic management, intelligent vehicles, video surveillance, and multimedia applications. In particular, some key challenges include the development of realtime or online methods as well as handling crowd videos with medium or high densities of pedestrians. We give an overview of realtime algorithms for extracting the trajectory of each pedestrian in a crowd video using a combination of nonlinear motion models and learning methods. These motion models are based on new collision-avoidance and local navigation algorithms that provide improved accuracy in dense settings. The resulting tracking algorithm can handle dense crowds with tens of pedestrians at realtime rates (25–30 fps). We also give an overview of techniques that combine these motion models with global movement patterns and Bayesian inference to predict the future position of each pedestrian over a long horizon. The combination of local and global features enables us to accurately predict the trajectory of each pedestrian in a dense crowd at realtime rates. We highlight the performance in real-world crowd videos with medium crowd density.
The ability to automatically recognize human motions and behaviors is a key skill for autonomous machines to exhibit to interact intelligently with a human-inhabited environment. The capabilities autonomous machines should have include computing the motion trajectory of each pedestrian in a crowd, predicting his or her position in the near future, and analyzing the personality characteristics of the pedestrian. Such techniques are frequently used for collision-free robot navigation, data-driven crowd simulation, and crowd surveillance applications. However, prior methods for these problems have been restricted to low-density or sparse crowds where the pedestrian movement is modeled using simple motion models. In this thesis, we present several interactive algorithms to extract pedestrian trajectories from videos in dense crowds. Our approach combines different pedestrian motion models with particle tracking and mixture models and can obtain an average of 20% improvement in accuracy in medium-density crowds over prior work. We compute the pedestrian dynamics from these trajectories using Bayesian learning techniques and combine them with global methods for long-term pedestrian prediction in densely crowded settings. Finally, we combine these techniques with Personality Trait Theory to automatically classify the dynamic behavior or the personality of a pedestrian based on his or her movements in a crowded scene. The resulting algorithms are robust and can handle sparse and noisy motion trajectories. We demonstrate the benefits of our long-term prediction and behavior classification methods in dense crowds and highlight the benefits over prior techniques. We highlight the performance of our novel algorithms on three different applications. The first application is interactive data-driven crowd simulation, which includes crowd replication as well as the combination of pedestrian behaviors from different videos. 
Second, we combine the prediction scheme with proxemic characteristics from psychology and use them to perform socially-aware navigation. Finally, we present novel techniques for anomaly detection in low- to medium-density crowd videos using trajectory-level behavior learning.
We present an approach for multi-agent navigation that facilitates face-to-face interaction in virtual crowds, based on a novel interaction velocity prediction (IVP) algorithm. Our user evaluation indicates that such techniques enabling face-to-face interactions can improve the sense of presence felt by the user. The virtual agents using these algorithms also appear more responsive and are able to elicit more reactions from the users.
We present a real-time algorithm, SocioSense, for socially-aware navigation of a robot amongst pedestrians. Our approach computes time-varying behaviors of each pedestrian using Bayesian learning and Personality Trait theory. These psychological characteristics are used for long-term path prediction and generating proxemic characteristics for each pedestrian. We combine these psychological constraints with social constraints to perform human-aware robot navigation in low- to medium-density crowds. The estimation of time-varying behaviors and pedestrian personalities can improve the performance of long-term path prediction by 21%, as compared to prior interactive path prediction algorithms. We also demonstrate the benefits of our socially-aware navigation in simulated environments with tens of pedestrians.
We present a real-time algorithm to automatically classify the dynamic behavior or personality of a pedestrian based on his or her movements in a crowd video. We present a statistical scheme that dynamically learns the behavior of every pedestrian in a scene and computes that pedestrian's motion model. This model is combined with global crowd characteristics to compute the movement patterns and motion dynamics, which can also be used to predict the crowd movement and behavior.
We present a novel procedural framework to generate an arbitrary number of labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to design accurate algorithms or training models for crowded scene understanding. Our overall approach is composed of two components: a procedural simulation framework for generating crowd movements and behaviors, and a procedural rendering framework to generate different videos or images. Each video or image is automatically labeled based on the environment, number of pedestrians, density, behavior (agent personality), flow, lighting conditions, viewpoint, noise, etc. Furthermore, we can increase the realism by combining synthetically generated behaviors with real-world background videos. We demonstrate the benefits of LCrowdV over prior labeled crowd datasets by augmenting a real dataset with it and improving the accuracy in pedestrian detection. LCrowdV has been made available as an online resource.
The proposed interactive crowd-behavior learning algorithms can be used to analyze crowd videos for surveillance and training applications. The authors' formulation combines online tracking algorithms from computer vision, nonlinear pedestrian motion models from computer graphics, and machine learning techniques to automatically compute trajectory-level pedestrian behaviors for each agent in the video. These learned behaviors are used to automatically detect anomalous behaviors, perform motion segmentation, and generate realistic behaviors for virtual reality training applications.
We present an online parameter learning algorithm for data-driven crowd simulation and crowd content generation. Our formulation is based on incrementally learning pedestrian motion models and behaviors from crowd videos. We combine the learned crowd-simulation model with an online tracker to compute accurate, smooth pedestrian trajectories. We refine the motion model using an optimization technique to estimate the agents' simulation parameters. We also use an adaptive-particle filtering scheme for improved computational efficiency. We highlight the benefits of our approach for improved data-driven crowd simulation, including crowd replication, augmented crowds and merging the behavior of pedestrians from multiple videos. We highlight our algorithm's performance in various test scenarios containing tens of human-like agents and evaluate it using standard metrics.
We present an algorithm for realtime anomaly detection in low to medium density crowd videos using trajectory-level behavior learning. Our formulation combines online tracking algorithms from computer vision, non-linear pedestrian motion models from crowd simulation, and Bayesian learning techniques to automatically compute the trajectory-level pedestrian behaviors for each agent in the video. These learned behaviors are used to segment the trajectories and motions of different pedestrians or agents and detect anomalies. We demonstrate the interactive performance on the PETS 2016 ARENA dataset as well as indoor and outdoor crowd video benchmarks consisting of tens of human agents.
We present a novel real-time algorithm to predict the path of pedestrians in cluttered environments. Our approach makes no assumption about pedestrian motion or crowd density, and is useful for short-term as well as long-term prediction. We interactively learn the characteristics of pedestrian motion and movement patterns from 2D trajectories using Bayesian inference. These include local movement patterns corresponding to the current and preferred velocities and global characteristics such as entry points and movement features. Our approach involves no precomputation and we demonstrate the real-time performance of our prediction algorithm on sparse and noisy trajectory data extracted from dense indoor and outdoor crowd videos. The combination of local and global movement patterns can improve the accuracy of long-term prediction by 12-18% over prior methods in high-density videos.
We present an adaptive data-driven algorithm for interactive crowd simulation. Our approach combines realistic trajectory behaviors extracted from videos with synthetic multi-agent algorithms to generate plausible simulations. We use statistical techniques to compute the movement patterns and motion dynamics from noisy 2D trajectories extracted from crowd videos. These learned pedestrian dynamic characteristics are used to generate collision-free trajectories of virtual pedestrians in slightly different environments or situations. The overall approach is robust and can generate perceptually realistic crowd movements at interactive rates in dynamic environments. We also present results from preliminary user studies that evaluate the trajectory behaviors generated by our algorithm.
We present an interactive approach for analyzing crowd videos and generating content for multimedia applications. Our formulation combines online tracking algorithms from computer vision, non-linear pedestrian motion models from computer graphics, and machine learning techniques to automatically compute the trajectory-level pedestrian behaviors for each agent in the video. These learned behaviors are used to detect anomalous behaviors, perform crowd replication, augment crowd videos with virtual agents, and segment the motion of pedestrians. We demonstrate the performance of these tasks using indoor and outdoor crowd video benchmarks consisting of tens of human agents; moreover, our algorithm takes less than a tenth of a second per frame on a multi-core PC. The overall approach can handle dense and heterogeneous crowd behaviors and is useful for realtime crowd scene analysis applications.
We present a trajectory extraction and behavior-learning algorithm for data-driven crowd simulation. Our formulation is based on incrementally learning pedestrian motion models and behaviors from crowd videos. We combine this learned crowd-simulation model with an online tracker based on particle filtering to compute accurate, smooth pedestrian trajectories. We refine this motion model using an optimization technique to estimate the agents’ simulation parameters. We highlight the benefits of our approach for improved data-driven crowd simulation, including crowd replication from videos and merging the behavior of pedestrians from multiple videos. We highlight our algorithm’s performance in various test scenarios containing tens of human-like agents.
We present a novel, real-time algorithm to extract the trajectory of each pedestrian in moderately dense crowd videos. In order to improve the tracking accuracy, we use a hybrid motion model that combines discrete and continuous flow models. The discrete model is based on microscopic agent formulation and is used for local navigation, interaction, and collision avoidance. The continuum model accounts for macroscopic behaviors, including crowd orientation and flow. We use our hybrid model with particle filters to compute the trajectories at interactive rates. We demonstrate its performance in moderately-dense crowd videos with tens of pedestrians and highlight the improved accuracy on different datasets.
We present a novel, realtime algorithm to compute the trajectory of each pedestrian in moderately dense crowd scenes. Our formulation is based on an adaptive particle filtering scheme that uses a multi-agent motion model based on velocity-obstacles, and takes into account local interactions as well as physical and personal constraints of each pedestrian. Our method dynamically changes the number of particles allocated to each pedestrian based on different confidence metrics. Additionally, we use a new high-definition crowd video dataset, which is used to evaluate the performance of different pedestrian tracking algorithms. This dataset consists of videos of indoor and outdoor scenes, recorded at different locations with 30-80 pedestrians. We highlight the performance benefits of our algorithm over prior techniques using this dataset. In practice, our algorithm can compute trajectories of tens of pedestrians on a multi-core desktop CPU at interactive rates (27-30 frames per second). To the best of our knowledge, our approach is 4-5 times faster than prior methods, which provide similar accuracy.
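The particle-filtering backbone common to the tracking papers above can be sketched as a minimal bootstrap filter for a single pedestrian; this toy version uses a simple diffusion motion model and a fixed particle count, whereas the actual algorithms use a velocity-obstacle multi-agent motion model and adapt the particle count per pedestrian:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_track(observations, n_particles=200,
                          motion_std=0.5, obs_std=1.0):
    """Bootstrap particle filter sketch: propagate particles under a noisy
    motion model, reweight by the Gaussian likelihood of each observation,
    resample, and report the weighted mean as the estimated position."""
    particles = rng.normal(observations[0], motion_std, size=(n_particles, 2))
    track = []
    for z in observations:
        # Predict: diffuse particles under the (toy) motion model
        particles += rng.normal(0.0, motion_std, size=particles.shape)
        # Update: weight particles by observation likelihood
        sq_err = np.sum((particles - z) ** 2, axis=1)
        weights = np.exp(-0.5 * sq_err / obs_std**2)
        weights /= weights.sum()
        track.append(weights @ particles)
        # Resample: multinomial resampling to avoid weight degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(track)

# Toy example: noisy-free observations of a pedestrian standing still
obs = [np.array([2.0, 3.0])] * 8
track = particle_filter_track(obs)
```

The adaptive scheme in the paper changes `n_particles` per pedestrian based on confidence metrics, spending computation only where tracking is hard, which is how interactive rates (27-30 fps) are reached for tens of pedestrians.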
We present a novel realtime algorithm to compute the trajectory of each pedestrian in a crowded scene. Our formulation is based on an adaptive scheme that uses a combination of deterministic and probabilistic trackers to achieve high accuracy and efficiency simultaneously. Furthermore, we integrate it with a multi-agent motion model and local interaction scheme to accurately compute the trajectory of each pedestrian. We highlight the performance and benefits of our algorithm on well-known datasets with tens of pedestrians.
In this paper, a line-based script identification method using a hierarchical classification scheme is proposed to identify the Indian scripts Hindi, Gurumukhi, and Bangla. We model the problem as a topological and structural classification problem and examine features inspired by human visual perception. Our algorithm uses a different feature set at each level of the classifier to optimize the tradeoff between accuracy and speed. Feature extraction is performed on subsets of the image, which in turn improves the performance of the algorithm. The proposed system attains an overall classification accuracy of 90% on a dataset of more than 2,500 text images.
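The level-wise use of different feature sets can be sketched as a classifier cascade (an illustrative skeleton only; the feature and classifier functions here are hypothetical placeholders, not the paper's actual features):

```python
def hierarchical_classify(text_line, levels):
    """Run a cascade of (feature_fn, classifier_fn) stages. Cheap features
    are tried first; each stage returns a script label, or None to defer
    the decision to the next, more expensive level."""
    for feature_fn, classifier_fn in levels:
        label = classifier_fn(feature_fn(text_line))
        if label is not None:
            return label
    return "unknown"
```

A toy two-level cascade would pair a cheap feature (e.g. line length) with a costlier fallback stage; only lines the first stage cannot decide pay the cost of the second.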
In this paper, we present a new, faster approach that differs from conventional image vectorization techniques. Using Canny edge detection, we find the sharp edges in the image and then assign a shade to each identifiable segment using random colour extraction from the original image. Finally, we map the colour blobs to the SVG schema and generate a scalable vector image. This technique is efficient for natural as well as non-natural images, and it can be used directly in security cameras for live image enhancement.
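The final step, mapping coloured segments to SVG, can be sketched as below (a toy version assuming the segmentation is already given as a 2D label grid with one sampled colour per label; a real vectorizer would trace blob contours into `<path>` elements rather than emitting per-cell rectangles):

```python
def blobs_to_svg(grid, colours, cell=10):
    """Map a 2D grid of segment labels to an SVG document, one rect per
    cell, filled with the colour sampled for that cell's segment label."""
    rows, cols = len(grid), len(grid[0])
    rects = []
    for r in range(rows):
        for c in range(cols):
            rects.append(
                f'<rect x="{c * cell}" y="{r * cell}" width="{cell}" '
                f'height="{cell}" fill="{colours[grid[r][c]]}"/>'
            )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{cols * cell}" height="{rows * cell}">'
            + "".join(rects) + "</svg>")
```

The output is a self-contained SVG string that scales without pixelation, which is the point of vectorizing the raster input.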
There are various methods to estimate scene flow, most of which use motion estimation with stereo reconstruction. This paper describes an interesting way to fuse the video from two cameras and create a 3D reconstruction. The proposed algorithm incorporates probabilistic distributions for optical flow and disparity. Multiple such re-created renderings can be put together to create re-timed movies of an event, yielding a visual experience richer than that of a regular video clip; alternatively, one can switch between images from multiple cameras, track the viewer's head and change the view angle accordingly, or view the scene on a mobile device that uses the accelerometer to tilt the camera for a 3D effect.
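The stereo reconstruction underlying such fusion rests on the standard pinhole-stereo relation, depth Z = f · B / d, shown here as a minimal sketch (the paper's probabilistic formulation over flow and disparity distributions is more involved):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Classic pinhole-stereo relation: Z = f * B / d, with focal length f
    in pixels, baseline B in metres, and disparity d in pixels.
    Larger disparity means the point is closer to the camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 10 cm baseline, a 10 px disparity corresponds to a point 7 m away, while a 20 px disparity corresponds to 3.5 m.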
Multi-agent systems are widely studied in different fields including robotics, AI, computer graphics, and autonomous driving. In this course, we will cover the fundamentals of multi-agent simulation, mostly focusing on issues related to motion, behavior, and navigation. The course will focus on three major applications: pedestrian and crowd simulation, real-world pedestrian movement and analysis, and autonomous driving simulation. The course will consist of lectures by the instructors on the fundamental concepts in these areas, student lectures on selected topics of interest, and special guest lectures on recent research or work in progress. The goal of this class is to give students an appreciation of computational methods for motion planning and multi-agent simulation. We will discuss the considerations and tradeoffs used in designing various methodologies (e.g., time, space, robustness, and generality). This will include data structures, algorithms, computational methods, their complexity, and implementation. Depending on the interests of the students, we may cover topics in related areas. The course will include coverage of some software systems that are widely used to implement different motion planning and multi-agent simulation algorithms.
I would be happy to talk to you if you need my assistance in your research or if you need business administration support for your company, though I have limited time for students.
You can find me at my office at the University of Maryland, College Park.
I am at my office every day from 7:00 until 10:00 am, but you may want to call first to fix an appointment.