Physically Based Modelling & Simulation, Fall 2014: Prof. Ming Lin

Project: Estimation of physical properties of real-world objects


This project was done in collaboration with Rohan Chabra. I would also like to thank Prof. E. Dunn for providing the Kinect.


1.1 Description

The objective of this project is to estimate real-world physical parameters, such as the coefficient of restitution and the coefficient of friction, from videos.
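As a concrete example of the first of these quantities, the coefficient of restitution e relates the separation and approach speeds along the contact normal; for a ball bouncing vertically under gravity with negligible air drag, it can equivalently be written in terms of consecutive bounce heights h_i and h_{i+1}:

    e = \left| \frac{v_{\mathrm{sep}}}{v_{\mathrm{app}}} \right| = \sqrt{\frac{h_{i+1}}{h_i}}

(the second equality follows from the free-fall relation v = \sqrt{2gh} evaluated just before and just after impact).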

1.2 Motivation

Estimating the physical properties of objects can be useful for 3D scene reconstruction: the estimated properties can be assigned to objects reconstructed with computer vision techniques, so that those objects can be made interactive in the reconstructed scene and behave correctly. Another application is in robotics, where such a system could predict the collisions that would result from a planned motion and communicate these predictions to a robot, helping it understand its environment and plan its motion accordingly. This research can also be used in the development of advanced augmented reality applications.


1.3 Data Acquisition

A Microsoft Kinect 1 is used to acquire the real-world coordinates of the bouncing ball. The OpenNI drivers and software package are used to grab RGBD data from the Kinect. The Kinect does not deliver a constant frame rate; it varies between 15 and 30 fps and also depends on the system used for recording. In our case, we could not view and capture simultaneously because of a slow computer. Recording at 30 fps introduces motion blur when the object moves at high velocity, so a simple tracking algorithm cannot be employed. Hence, MIL tracking [6] is used to track the bouncing ball in the depth data and recover the real-world coordinates of the ball. Sensors such as the Asus Xtion PRO support 60 fps, which would give better frame rates and allow the use of simpler tracking algorithms.
Finally, the data passed to the simulation and rendering engine is four-dimensional and consists of (x, y, z, Δt).
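As an illustration of this pipeline, the sketch below tracks the ball in the depth stream with OpenCV's MIL tracker (cv2.TrackerMIL, available in recent OpenCV builds) and back-projects the tracked pixel with its depth through a pinhole camera model. The nominal Kinect 1 intrinsics and the frame format are illustrative assumptions, not our exact implementation.

```python
# Sketch: track the ball in the depth stream with OpenCV's MIL tracker and
# back-project the tracked pixel (plus its metric depth) to real-world
# coordinates, producing the (x, y, z, dt) samples used by the simulation.
import cv2

FX, FY = 525.0, 525.0      # nominal Kinect 1 focal lengths (pixels)
CX, CY = 319.5, 239.5      # nominal principal point (pixels)
MAX_RANGE_M = 5.0          # depth range used to scale frames to 8 bit

def backproject(u, v, z):
    """Pinhole back-projection of pixel (u, v) at depth z (metres)."""
    return ((u - CX) * z / FX, (v - CY) * z / FY, z)

def track_ball(frames, init_bbox):
    """frames: iterable of (timestamp_s, depth_m) pairs from the RGBD grabber.
    init_bbox: (x, y, w, h) box around the ball in the first frame."""
    tracker = cv2.TrackerMIL_create()
    samples, prev_t = [], None
    for i, (t, depth) in enumerate(frames):
        # MIL tracking operates on 8-bit images, so render the depth frame.
        vis = cv2.convertScaleAbs(depth, alpha=255.0 / MAX_RANGE_M)
        vis = cv2.cvtColor(vis, cv2.COLOR_GRAY2BGR)
        if i == 0:
            tracker.init(vis, init_bbox)
            ok, bbox = True, init_bbox
        else:
            ok, bbox = tracker.update(vis)
        if not ok:
            continue                            # tracker lost the ball
        u = int(bbox[0] + bbox[2] / 2)          # box centre (pixels)
        v = int(bbox[1] + bbox[3] / 2)
        z = float(depth[v, u])                  # metric depth at the centre
        if z <= 0:
            continue                            # invalid / missing depth
        dt = 0.0 if prev_t is None else t - prev_t
        prev_t = t
        samples.append(backproject(u, v, z) + (dt,))
    return samples                              # list of (x, y, z, dt)
```

In practice one would also smooth or interpolate the samples before fitting, since both the tracker and the depth readings are noisy.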


1.4 Physics-based simulation

The Kinect data is too noisy to estimate any of the parameters accurately by direct measurement, so we use physics-based simulation to estimate them. We have used Bullet Physics for the simulation, but Bullet is designed for visually plausible motion rather than physical accuracy, and it does not reach the accuracy level we want. This led us to the decision to write our own simulation code; that code is not yet complete, so all the demos and results reported here use Bullet.
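To illustrate how forward simulation can be used to fit one such parameter, the sketch below drops a sphere in pybullet (Bullet's Python bindings) for a grid of candidate restitution values and keeps the candidate whose first rebound apex best matches the apex recovered from the Kinect track. The drop height, sphere radius, mass, and candidate grid are illustrative placeholders, not our actual experimental setup.

```python
# Sketch: fit the coefficient of restitution by forward simulation in Bullet
# (via pybullet). For each candidate value we drop a sphere from the observed
# release height, record the apex of its first rebound, and keep the candidate
# whose apex best matches the apex measured from the Kinect trajectory.
import pybullet as p
import numpy as np

def simulated_rebound_height(restitution, drop_height=1.0, radius=0.1,
                             dt=1.0 / 240.0, max_steps=2000):
    """Drop a sphere from drop_height and return the apex of its first rebound."""
    p.resetSimulation()
    p.setGravity(0, 0, -9.81)
    p.setTimeStep(dt)
    plane = p.createMultiBody(
        baseMass=0,
        baseCollisionShapeIndex=p.createCollisionShape(p.GEOM_PLANE))
    ball = p.createMultiBody(
        baseMass=0.05,
        baseCollisionShapeIndex=p.createCollisionShape(p.GEOM_SPHERE,
                                                       radius=radius),
        basePosition=[0, 0, drop_height])
    # Bullet combines the restitutions of the two bodies in contact
    # (multiplicatively by default), so the plane is set to 1.0 and the
    # candidate value is put on the ball.
    p.changeDynamics(plane, -1, restitution=1.0)
    p.changeDynamics(ball, -1, restitution=restitution)

    apex, bounced, prev_z = 0.0, False, drop_height
    for _ in range(max_steps):
        p.stepSimulation()
        z = p.getBasePositionAndOrientation(ball)[0][2]
        if not bounced and z > prev_z:          # upward motion => first bounce
            bounced = True
        if bounced:
            apex = max(apex, z)
            if z < prev_z:                      # past the rebound apex
                break
        prev_z = z
    return apex

def fit_restitution(observed_apex, drop_height):
    """Grid-search the restitution whose simulated rebound matches the data."""
    candidates = np.linspace(0.1, 0.95, 18)
    errors = [abs(simulated_rebound_height(e, drop_height) - observed_apex)
              for e in candidates]
    return candidates[int(np.argmin(errors))]

if __name__ == "__main__":
    p.connect(p.DIRECT)                         # headless Bullet server
    # Example call with illustrative numbers for the measured apex and drop.
    print(fit_restitution(observed_apex=0.54, drop_height=1.0))
    p.disconnect()
```

A finer search (for example, bisection around the best grid value, or a least-squares fit over the whole recorded trajectory) could replace the coarse grid; the same loop structure also extends to fitting a friction coefficient from the sliding portion of the motion.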


1.5 Future Work

  1. Incorporation of meshes.
  2. Stereo estimation at 60/120 fps for better accuracy.
  3. Estimation of rolling friction.
  4. Validation using actual physics experiments. For ground truth:
    a. An accelerometer and gyroscope can be used to estimate ω and v.
  5. Use of real-time 3D tracking algorithms.
  6. Experiments with different surface pairs & objects of different sizes/shapes.


Demo

1.6 Some comments

This is only a brief overview; if you are interested in the results or the actual methodologies used in this project, please contact the authors. This site will be updated with more details when I have some more time.


If you are Prof. Lin, AFS access has been granted to you. If you still cannot access it, I may have set up the permissions incorrectly; please let me know and I will correct it or share the files via Google Drive.
To understand the code, please be sure to go through the README first. Thanks!



1.7 PowerPoint presentation

Download the pptx. Videos might not work!


pptx

1.8 References

  1. Kyriazis, N., I. Oikonomidis, and A. Argyros. "Binding vision to physics based simulation: The case study of a bouncing ball." Proc. BMVC, 2011.
  2. Wilber, Steven. "Computer vision based material property extraction and data-driven deformable object modelling." Thesis, Metropolitan State College, Denver.
  3. Bhat, Kiran S., et al. "Computing the physical parameters of rigid-body motion from video." Computer Vision—ECCV 2002. Springer Berlin Heidelberg, 2002. 551-565.
  4. Duff, Damien Jade, et al. "Physical simulation for monocular 3D model based tracking." Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.
  5. Kyriazis, Nikolaos, and Antonis Argyros. "Physically plausible 3d scene tracking: the single actor hypothesis." Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on. IEEE, 2013.
  6. Babenko, Boris, Ming-Hsuan Yang, and Serge Belongie. "Visual tracking with online multiple instance learning." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
  7. Song, Shuran, and Jianxiong Xiao. "Tracking revisited using RGBD camera: Unified benchmark and baselines." Computer Vision (ICCV), 2013 IEEE International Conference on. IEEE, 2013.