ravishm at cs dot unc dot edu
We present a novel approach for wave-based sound propagation suitable for large, open spaces spanning hundreds of meters, with a small memory footprint. The scene is decomposed into disjoint rigid objects. The free-field acoustic behavior of each object is captured by a compact per-object transfer function relating the amplitudes of a set of incoming equivalent sources to outgoing equivalent sources. Pairwise acoustic interactions between objects are computed analytically, yielding compact inter-object transfer functions. The global sound field accounting for all orders of interaction is computed using these transfer functions. The runtime system uses fast summation over the outgoing equivalent source amplitudes of all objects to auralize the sound field at a moving listener in real time. We demonstrate realistic acoustic effects such as diffraction, low-passed sound behind obstructions, focusing, scattering, high-order reflections, and echoes on a variety of scenes.
Project | Paper | Video
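The runtime summation step can be illustrated with a minimal sketch. Assuming, for illustration only, that each object's outgoing equivalent sources are free-field monopoles, the pressure at the listener is a weighted sum of their fields; the function name and the monopole model below are ours, not the paper's actual formulation:

```python
import numpy as np

def pressure_at_listener(src_pos, amps, listener, k):
    """Sum the free-field monopole fields e^{ikr}/(4*pi*r) of all
    outgoing equivalent sources at a single listener position.

    src_pos : (n, 3) source positions
    amps    : (n,) complex source amplitudes
    listener: (3,) listener position
    k       : acoustic wavenumber (omega / c)
    """
    r = np.linalg.norm(src_pos - listener, axis=1)
    return np.sum(amps * np.exp(1j * k * r) / (4.0 * np.pi * r))
```

In practice this sum would run over all objects' outgoing sources every audio frame, which is why a fast summation is needed.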
We present an interactive virtual percussion instrument system, Tabletop Ensemble, that can be used by a group of collaborating users simultaneously to emulate playing music in the real world while providing them with the flexibility of virtual simulations. An optical multi-touch tabletop serves as the input device. A novel touch-handling algorithm for such devices is presented to translate users' interactions into percussive control signals appropriate for music playing. These signals drive the proposed sound simulation system to generate realistic, user-controlled musical sounds. A fast physically based sound synthesis technique, modal synthesis, is adopted to enable users to directly produce rich, varying musical tones, as they would with real percussion instruments. In addition, we propose a simple coupling scheme for modulating the synthesized sounds with an accurate numerical acoustic simulator to create believable acoustic effects due to cavities in musical instruments. This paradigm allows creating new virtual percussion instruments of various materials, shapes, and sizes with little overhead. We believe such an interactive, multi-modal system offers capabilities for expressive music playing, rapid prototyping of virtual instruments, and active exploration of sound effects determined by various physical parameters in a classroom, museum, or other educational setting. Virtual xylophones and drums with various physical properties are demonstrated in the presented system.
Project | Paper | Video
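Modal synthesis represents a struck object's sound as a sum of exponentially damped sinusoids, one per vibration mode. A minimal sketch, with all names and parameters illustrative rather than taken from the system's implementation:

```python
import numpy as np

def modal_synthesis(freqs, dampings, gains, strike_gain, sr=44100, dur=1.0):
    """Synthesize an impulse response as a sum of damped sinusoids.

    freqs    : modal frequencies in Hz
    dampings : per-mode decay rates in 1/s (larger decays faster)
    gains    : per-mode amplitudes (depend on where the object is struck)
    strike_gain : overall strike strength from the touch input
    """
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        out += strike_gain * g * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return out
```

Because each strike only rescales the modal gains, the synthesis cost per hit is low, which is what makes the technique suitable for interactive playing.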
An efficient algorithm for time-domain solution of the acoustic wave equation for the purpose of room acoustics is presented. It is based on adaptive rectangular decomposition of the scene and uses analytical solutions within the partitions that rely on spatially invariant speed of sound.
This technique is suitable for auralizations and sound field visualizations, even on coarse meshes approaching the Nyquist limit. It is demonstrated that by carefully mapping all components of the algorithm to match the parallel processing capabilities of graphics processors (GPUs), significant improvement in performance is gained compared to the corresponding CPU-based solver, while maintaining the numerical accuracy. Substantial performance gain over a high-order finite-difference time-domain method is observed.
Using this technique, a 1-second simulation on a scene with an air volume of 7,500 cubic meters, up to 1650 Hz, can be performed within 18 minutes, compared to around 5 hours for the corresponding CPU-based solver and up to three weeks for a high-order finite-difference time-domain solver on a desktop computer.
To the best of the authors' knowledge, this is the fastest time-domain solver for modeling the room acoustics of large, complex-shaped 3D scenes that generates accurate results for both auralization and visualization.
Project | Paper | Applied Acoustics | Bibtex
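Inside a rigid-walled rectangular partition the wave equation has an analytical solution on the box's cosine modes, which a discrete cosine transform exposes. A sketch of the interior time step for a single 2D partition, with interface handling between partitions omitted and all names our own illustration rather than the paper's code:

```python
import numpy as np
from scipy.fft import dctn, idctn

def ard_step(p_prev, p_curr, dt, dx, c=343.0):
    """Advance the pressure field in one rigid 2D rectangle by one step.

    Each DCT mode (i, j) oscillates at angular frequency
    w = c * pi * sqrt((i/Lx)^2 + (j/Ly)^2), so the exact update on the
    modal coefficients is M_next = 2*cos(w*dt)*M_curr - M_prev.
    """
    ny, nx = p_curr.shape
    lx, ly = nx * dx, ny * dx
    i, j = np.arange(nx), np.arange(ny)
    w = c * np.pi * np.sqrt((i[None, :] / lx) ** 2 + (j[:, None] / ly) ** 2)
    m_prev = dctn(p_prev, norm='ortho')
    m_curr = dctn(p_curr, norm='ortho')
    m_next = 2.0 * np.cos(w * dt) * m_curr - m_prev
    return idctn(m_next, norm='ortho')
```

Because the modal update is exact rather than a finite-difference approximation, the step is stable and accurate even on coarse grids, which is what permits meshes approaching the Nyquist limit.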
We present a method for real-time sound propagation that captures all wave effects, including diffraction and reverberation, for multiple moving sources and a moving listener in a complex, static 3D scene. It performs an offline wave-based numerical simulation over the scene and extracts the perceptually salient information. To obtain a compact representation, the scene's acoustic response is broken into two phases: early reflections (ER) and late reverberation (LR), based on a threshold on the temporal density of arriving sound peaks. The LR representation is computed and stored once per room in the scene, while the ER accounts for more detailed spatial variation by recording multiple simulations over a uniform grid of source locations. ER data is then compactly stored for each source/receiver point pair as a set of peak delays/amplitudes and a residual frequency response sampled in octave bands. We then describe an efficient, real-time technique that uses this precomputed representation to perform binaural sound rendering based on frequency-domain convolutions. We also introduce a new technique to perform artifact-free spatial interpolation of the ER data. Our system demonstrates realistic, wave-based acoustic effects in real time, including diffraction low-passing behind obstructions, hollow reverberation in empty rooms, sound diffusion in fully furnished rooms, and realistic late reverberation.
Project | Paper | Slides | Video | ACM
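The rendering stage relies on frequency-domain convolution: multiplying spectra instead of convolving signals sample by sample. A minimal linear-convolution-via-FFT sketch, zero-padded to avoid circular wrap-around; the actual renderer processes audio in blocks with binaural impulse responses, which this omits:

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution of signal x with impulse response h via the FFT.

    Both inputs are zero-padded to the next power of two at or above
    len(x) + len(h) - 1 so the circular convolution equals the linear one.
    """
    n = len(x) + len(h) - 1
    nfft = 1 << (n - 1).bit_length()
    y = np.fft.irfft(np.fft.rfft(x, nfft) * np.fft.rfft(h, nfft), nfft)
    return y[:n]
```

For long room impulse responses this costs O(n log n) versus O(n^2) for direct convolution, which is what makes real-time auralization of the precomputed responses feasible.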
We present a robust algorithm for estimating visibility from a given viewpoint for a point set containing concavities, non-uniformly spaced samples, and possibly corrupted with noise. Instead of performing an explicit surface reconstruction for the point set, visibility is computed based on a construction involving a convex hull in a dual space, an idea inspired by the work of Katz et al. We derive theoretical bounds on the behavior of the method in the presence of noise and concavities, and use the derivations to develop a robust visibility estimation algorithm. In addition, computing visibility from a set of adaptively placed viewpoints allows us to generate locally consistent partial reconstructions. Using a graph-based approximation algorithm we couple such reconstructions to extract globally consistent reconstructions. We test our method on a variety of 2D and 3D point sets of varying complexity and noise content.
Project | Paper (Low res, High res) | Computers & Graphics (Elsevier)
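The dual-space construction of Katz et al. (the hidden point removal operator) can be sketched as a spherical flip of the points about the viewpoint followed by a convex hull; points landing on the hull are declared visible. Parameter names and the radius choice below are illustrative:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hidden_point_removal(points, viewpoint, radius_factor=100.0):
    """Estimate which points are visible from a viewpoint.

    Each point p (relative to the viewpoint) is reflected outward to
    p + 2*(R - |p|) * p/|p|, placing near points far away; a point is
    visible iff its flipped image lies on the convex hull of the
    flipped set together with the viewpoint itself.
    """
    p = points - viewpoint
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = radius_factor * norms.max()
    flipped = p + 2.0 * (R - norms) * (p / norms)
    hull = ConvexHull(np.vstack([flipped, np.zeros(points.shape[1])]))
    visible = set(hull.vertices)
    visible.discard(len(points))  # drop the appended viewpoint index
    return sorted(visible)
```

The choice of R controls how aggressively concavities are classified as hidden, which is precisely the sensitivity the paper's noise and concavity analysis addresses.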
Man-made objects are ubiquitous in the real world and in virtual environments. While such objects can be very detailed, capturing every small feature, they are often identified and characterized by a small set of defining curves. Compact, abstracted shape descriptions based on such curves are often visually more appealing than the original models, which can appear to be visually cluttered. We introduce a novel algorithm for abstracting three-dimensional geometric models using characteristic curves or contours as building blocks for the abstraction. Our method robustly handles models with poor connectivity, including the extreme cases of polygon soups, common in models of man-made objects taken from online repositories. In our algorithm, we use a two-step procedure that first approximates the input model using a manifold, closed envelope surface and then extracts from it a hierarchical abstraction curve network along with suitable normal information. The constructed curve networks form a compact, yet powerful, representation for the input shapes, retaining their key shape characteristics while discarding minor details and irregularities.
Project | Paper (Low res, High res) | Slides | Video | ACM
We present an efficient GPU technique for rendering rich geometric detail (e.g., surface mesostructure) of complex surfaces. We use sphere tracing aided by directional distance maps (DDMs) to quickly find ray-mesostructure intersections. High accuracy is achieved by analytically detecting such intersections, and the space requirement is significantly reduced by distance-map compression. Our technique can handle complex scenes containing both height-field and non-height-field mesostructures in real time, with correct self-occlusion, self-shadowing, interpenetration, and silhouettes. We demonstrate our algorithm on a variety of test scenarios and compare it with previous techniques.
Project | Paper | Slides | Video
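Sphere tracing advances along a ray by the value of a distance field at the current point, which guarantees the surface is never overshot. A minimal CPU sketch using a signed distance function for a unit sphere; the paper's version runs on the GPU and samples directional distance maps, which this does not model:

```python
import numpy as np

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, t_max=100.0):
    """March a ray through a distance field; return the hit parameter t or None.

    The step size equals sdf(p), the distance to the nearest surface,
    so every step is safe; we stop when that distance falls below eps.
    """
    d = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * d
        dist = sdf(p)
        if dist < eps:
            return t  # hit: within eps of the surface
        t += dist
        if t > t_max:
            break
    return None  # miss

# Illustrative distance field: a unit sphere at the origin.
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
```

Steps shrink near the surface, so convergence is slow for grazing rays; this is one reason the paper supplements sphere tracing with analytic intersection detection.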
Thanks to Anjul Patney for the website template