COMP 239 Final Project: Realistic Avatar Movement Using Multiple Trackers
For my final project, I set out to take some steps towards more realistic,
convincing avatars in Virtual Environments.
Introduction
Most of today's Virtual Environments (VEs) try to provide their users with
an enhanced sense of presence. While some try to achieve this goal by providing
high fidelity details in the environment itself (such as lighting, textures,
models), others target social presence, by making social interactions more
believable. More realistic avatars go a long way towards making VEs suitable
for social interaction.
In this project, I concentrate on two aspects of believable avatars: their
appearance and their movement.
Avatar Appearance
A very important factor for avatar believability is the avatar's appearance.
My virtual environment consisted of an avatar and a simple mirror where
the user could see the avatar's reflection. This way, the user can see not only
his/her own body parts, but also a mirror image that provides visual feedback
on his/her own movement.
Figure 1: A screenshot of my VE.
To make the avatars' appearance as believable as possible, I used the state-of-the-art
open source character animation library, Cal3d. Cal3d is "a skeletal based
3d character animation library written in C++ in a platform-/graphic API-independent
way". As seen in the screenshot below, a model is composed of a skeleton
and a mesh:
Figure 2: A screenshot of Cal3d.
Cal3d was specifically designed to import, blend and play animations of
characters designed in 3D Studio Max and its
plugin, Character Studio.
The library automatically takes care of moving the mesh to match the movements
of the skeleton. For my project, I took advantage of the skeletal system
to make the models adopt the desired pose, and I disabled the animation system
entirely.
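To give a flavor of this, here is a minimal sketch of posing a single bone
directly through the skeleton, with the animation mixer left unused. The bone
id and rotation are placeholders, and the method names are as I recall them
from the Cal3d headers, so treat this as an illustration rather than a
drop-in snippet:

    #include "cal3d/cal3d.h"

    // Pose one bone directly, bypassing Cal3d's animation mixer.
    // 'model' is an already-loaded CalModel; 'boneId' and 'rotation' are placeholders.
    void poseBone(CalModel *model, int boneId, const CalQuaternion &rotation)
    {
        CalSkeleton *skeleton = model->getSkeleton();
        CalBone *bone = skeleton->getBone(boneId);

        bone->setRotation(rotation);   // rotation relative to the parent bone
        bone->calculateState();        // propagate the new pose down to the child bones
    }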
Avatar Movement
The other part of making believable avatars in VEs that I concentrate on
is making them move plausibly. The traditional way to implement plausible
movement is by employing expensive and cumbersome full body tracking.
For my project, I use a combined system of two HiBall optical trackers and
four Fastrack magnetic trackers to track the upper part of a user's body and
make the avatar inside a VE move accordingly. The layout is shown in Figure 3
below: one optical tracker on the user's head, the other on the user's left
hand, one magnetic tracker on the torso, two magnetic trackers on the elbows,
and the last magnetic tracker on the user's right hand.
Figure 3: Tracker layout.
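For reference, the layout boils down to a sensor-to-body-part table; the
sketch below is one hypothetical way to encode it (the sensor indices are
placeholders that depend on how the trackers are actually connected):

    // Hypothetical encoding of the tracker layout in Figure 3.
    enum class BodyPart { Head, LeftHand, Torso, LeftElbow, RightElbow, RightHand };

    struct SensorAssignment {
        const char *system;   // "HiBall" (optical) or "Fastrack" (magnetic)
        int sensor;           // sensor index within that system (placeholder values)
        BodyPart part;
    };

    const SensorAssignment kLayout[] = {
        {"HiBall",   0, BodyPart::Head},
        {"HiBall",   1, BodyPart::LeftHand},
        {"Fastrack", 0, BodyPart::Torso},
        {"Fastrack", 1, BodyPart::LeftElbow},
        {"Fastrack", 2, BodyPart::RightElbow},
        {"Fastrack", 3, BodyPart::RightHand},
    };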
To integrate the two tracking systems, I assume a static location for the
Fastrack ceiling sensor, and transform the readings of the magnetic sensors
into the HiBall's coordinate frame. Since the magnetic trackers have lower
accuracy, I provide a calibration step that effectively "moves" the location
of the Fastrack ceiling sensor with respect to which the positions of the
magnetic trackers are computed. The user brings his/her hands together and
presses a button. The locations of the optical tracker on the left hand and
the magnetic tracker on the right hand are assumed to coincide, and from that
the new position of the ceiling sensor is computed.
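As a rough sketch, the calibration amounts to the computation below, under
the simplifying assumption that the two coordinate frames differ only by a
translation (which is what "moving" the ceiling sensor corresponds to); all
names here are placeholders:

    struct Vec3 { double x, y, z; };

    Vec3 operator-(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator+(const Vec3 &a, const Vec3 &b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

    // With both hands touching, the optical reading of the left hand and the
    // magnetic reading of the right hand should describe the same point in space.
    // Solving opticalHand = ceilingSensor + magneticHand for the ceiling sensor
    // gives the corrected origin of the magnetic coordinate frame.
    Vec3 calibrateCeilingSensor(const Vec3 &opticalHand,    // HiBall, left hand
                                const Vec3 &magneticHand)   // Fastrack, right hand,
                                                            // relative to the ceiling sensor
    {
        return opticalHand - magneticHand;
    }

    // Afterwards, every magnetic reading is mapped into the HiBall frame:
    Vec3 magneticToOptical(const Vec3 &ceilingSensor, const Vec3 &magneticReading)
    {
        return ceilingSensor + magneticReading;
    }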
The lower accuracy of the magnetic trackers compared to the optical trackers
also led to changes in my initial plans for the tracker layout. I initially
wanted to place two trackers on the torso, to provide more accurate information
on the body's position and orientation. However, it was quite difficult to rely
on rotation information from the magnetic trackers, because the noise made them
jitter uncontrollably. While this may be acceptable for the hands, it is
unacceptable for the torso and the feet, which was one of the reasons I chose
the layout in Figure 3.
I also implemented a simple Inverse Kinematics (IK)
algorithm, Cyclic Coordinate
Descent (CCD). As seen in the drawing below, CCD is an iterative relaxation
scheme in which 1-DOF IK problems are solved repeatedly while going up a chain
of bones.
Figure 4: Cyclic Coordinate Descent.
When paired with constraints for each angle, CCD works quite well. Since
the Cal3d library had no infrastructure to store bone movement constraints,
I only implemented unconstrained IK. This works out well for large angles
between bones such as the spine, but is unstable for small angles between
bones such as the elbows and knees. This is the reason why I used IK only
for the avatar's spine, and direct tracker input for the avatar's limbs.
The inability of the unconstrained IK system to deal with sharp angles forced
me to track the upper limbs more closely, by placing trackers on both hands
and elbows.
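Below is a minimal sketch of one unconstrained CCD pass, operating directly
on world-space joint positions rather than on Cal3d bones; the types and
helpers are placeholders. In the constrained version, the angle computed at
each joint would additionally be clamped against that joint's limits.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 {
        double x, y, z;
        Vec3 operator-(const Vec3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
        double dot(const Vec3 &o) const { return x * o.x + y * o.y + z * o.z; }
        Vec3 cross(const Vec3 &o) const {
            return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
        }
        double length() const { return std::sqrt(dot(*this)); }
    };

    struct Joint { Vec3 pos; };   // world-space joint positions, root first

    // Rotate point p around the unit-length 'axis' through 'pivot' by 'angle'
    // radians (Rodrigues' rotation formula).
    Vec3 rotateAround(const Vec3 &p, const Vec3 &pivot, const Vec3 &axis, double angle) {
        Vec3 v = p - pivot;
        double c = std::cos(angle), s = std::sin(angle);
        Vec3 cr = axis.cross(v);
        double d = axis.dot(v) * (1.0 - c);
        return {pivot.x + v.x * c + cr.x * s + axis.x * d,
                pivot.y + v.y * c + cr.y * s + axis.y * d,
                pivot.z + v.z * c + cr.z * s + axis.z * d};
    }

    // One pass of unconstrained CCD over a chain (root first, end effector last).
    // Each joint is treated as a 1-DOF rotation that swings the end effector
    // towards the target; repeating the pass relaxes the chain onto the target.
    void ccdPass(std::vector<Joint> &chain, const Vec3 &target) {
        int end = static_cast<int>(chain.size()) - 1;
        for (int i = end - 1; i >= 0; --i) {
            Vec3 toEnd = chain[end].pos - chain[i].pos;
            Vec3 toTarget = target - chain[i].pos;
            double lenE = toEnd.length(), lenT = toTarget.length();
            if (lenE < 1e-6 || lenT < 1e-6) continue;

            double cosA = toEnd.dot(toTarget) / (lenE * lenT);
            cosA = std::max(-1.0, std::min(1.0, cosA));
            double angle = std::acos(cosA);

            Vec3 axis = toEnd.cross(toTarget);
            double axisLen = axis.length();
            if (axisLen < 1e-6 || angle < 1e-6) continue;   // already aligned
            axis = {axis.x / axisLen, axis.y / axisLen, axis.z / axisLen};

            // Rotate everything below this joint (including the end effector).
            for (int j = i + 1; j <= end; ++j)
                chain[j].pos = rotateAround(chain[j].pos, chain[i].pos, axis, angle);
        }
    }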
Implementation Details
To build the final application, I integrated the Cal3d library
with my own tracking and pose computation code.
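On the tracking side, the application talks to the trackers through the VRPN
library. A minimal sketch of how one tracker's reports can be received is
shown below; the device name and the callback body are placeholders rather
than my actual setup:

    #include <cstdio>
    #include <vrpn_Tracker.h>

    // Called by VRPN whenever a sensor reports a new pose.
    void VRPN_CALLBACK handleTracker(void *userData, const vrpn_TRACKERCB t)
    {
        // t.sensor identifies the sensor; t.pos is its position (x, y, z) and
        // t.quat its orientation as a quaternion (x, y, z, w).
        std::printf("sensor %ld at (%f, %f, %f)\n",
                    (long)t.sensor, t.pos[0], t.pos[1], t.pos[2]);
    }

    int main()
    {
        // The device name depends on the local VRPN server configuration.
        vrpn_Tracker_Remote tracker("Tracker0@localhost");
        tracker.register_change_handler(nullptr, handleTracker);

        while (true)
            tracker.mainloop();   // pump the connection; the callback fires from here
    }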
I used the head tracker's position and orientation to provide
the user with the appropriate view. I only used the position of the other
trackers to compute the appropriate pose for the avatar, but using the rotation
information is also possible. To get the avatar model to adopt the pose
computed from the tracker positions, I made the bones point towards the
trackers' positions: for example, the upper arms point towards the elbows and
the forearms point towards the hands.
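A sketch of that "point the bone at the tracker" step is below, leaving out
the conversion between world space and the bone's parent space that the actual
code has to perform; the types and names are placeholders, and the result is
the kind of quaternion that gets handed to the bone:

    #include <cmath>

    struct Vec3 { double x, y, z; };
    struct Quat { double x, y, z, w; };   // quaternion components (x, y, z, w)

    Vec3 normalize(const Vec3 &v) {
        double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return {v.x / len, v.y / len, v.z / len};
    }

    // Shortest-arc rotation taking the unit vector 'from' onto the unit vector 'to'.
    // (Degenerates when 'from' and 'to' are exactly opposite; not handled here.)
    Quat shortestArc(const Vec3 &from, const Vec3 &to) {
        double d = from.x * to.x + from.y * to.y + from.z * to.z;
        Vec3 c = {from.y * to.z - from.z * to.y,
                  from.z * to.x - from.x * to.z,
                  from.x * to.y - from.y * to.x};
        Quat q = {c.x, c.y, c.z, 1.0 + d};
        double len = std::sqrt(q.x * q.x + q.y * q.y + q.z * q.z + q.w * q.w);
        return {q.x / len, q.y / len, q.z / len, q.w / len};
    }

    // Example: aim the upper arm at the elbow tracker. 'shoulderPos' comes from
    // the skeleton, 'elbowTrackerPos' from the elbow tracker, and 'restDir' is
    // the bone's direction in the bind pose (all expressed in the same frame).
    Quat aimUpperArm(const Vec3 &shoulderPos, const Vec3 &elbowTrackerPos, const Vec3 &restDir) {
        Vec3 want = normalize({elbowTrackerPos.x - shoulderPos.x,
                               elbowTrackerPos.y - shoulderPos.y,
                               elbowTrackerPos.z - shoulderPos.z});
        return shortestArc(normalize(restDir), want);
    }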
Future Work
The next steps in this project would be to implement the features that
couldn't be implemented in the limited time available: take rotation
information into account to compute the limbs' positions more accurately,
especially the positions of the hands; and implement a constrained IK system
that would allow me to move the trackers from the elbows to other parts of
the body.
Links
Source code: src.zip.
As requested by several users on the Cal3D forum, here is the source code. It is provided AS IS, with no warranties. It's very messy, since I had no time to revise it. Feel free to email me if you have any problems, but I can't guarantee I'll be able to reply. Be sure to modify the project settings to set the correct path to the Cal3D source code. Also, make sure you have the Cal3D version from the CVS repository, not the zipped version from the download site.
The code will not compile without having the VRPN library installed, for tracker support. If you are interested only in the Inverse Kinematics part, references to trackers are safe to remove.
Conclusion
This project granted me the opportunity to explore two of the aspects
of making avatars in VEs more believable: appearance and movement. I gained
experience using the skeletal animation system of the Cal3d library, which
proved to be a useful tool worth considering when implementing VEs. I learned
the hard way how much tracker accuracy matters. I also learned about IK systems
and how they can be used for avatars. Bone constraints play an essential role
in making movements computed with IK believable.