IEEE VR 2018, Proceedings of IEEE TVCG (First author)
We present a novel method to generate plausible diffraction effects for interactive sound propagation in dynamic scenes. Our approach precomputes a diffraction kernel for each dynamic object in the scene and combines the kernels with interactive ray tracing algorithms at runtime. A diffraction kernel encapsulates an individual object's sound-interaction behavior in the free field, and we present a new source-placement algorithm that significantly accelerates the precomputation. Our overall propagation algorithm can handle highly tessellated or smooth objects undergoing rigid motion. We have evaluated our algorithm's performance on scenarios with multiple moving objects and demonstrate its benefits over prior interactive geometric sound propagation methods. We also performed a user study to evaluate the perceived smoothness of the diffracted field and found that auditory perception with our approach is comparable to that of a wave-based sound propagation method.
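The runtime combination of per-object kernels with traced ray paths can be sketched as follows. This is a minimal illustration, not the paper's actual data structures: the `DiffractionKernel` class, its nearest-neighbour lookup, and the four-band energy representation are all assumptions made for the example.

```python
# Hypothetical precomputed diffraction kernel: maps (incoming, outgoing)
# direction pairs around an object to per-frequency-band transfer gains.
# A real kernel would be tabulated offline from a free-field wave solver.
class DiffractionKernel:
    def __init__(self, gains):
        # gains[(in_dir, out_dir)] -> list of band gains
        self.gains = gains

    def lookup(self, in_dir, out_dir):
        # Nearest-neighbour lookup stands in for directional interpolation.
        return self.gains.get((in_dir, out_dir), [0.0] * 4)

def propagate_path(path_bands, kernels_on_path, directions):
    """Attenuate a ray path's band energies by each object's kernel."""
    bands = list(path_bands)
    for kernel, (d_in, d_out) in zip(kernels_on_path, directions):
        gains = kernel.lookup(d_in, d_out)
        bands = [b * g for b, g in zip(bands, gains)]
    return bands
```

At runtime, each traced path that grazes a dynamic object would be attenuated by that object's kernel, so rigid motion only changes the direction pair looked up, not the precomputed data.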
Journal of the Acoustical Society of America Express Letters, 2017 (First author)
Sound propagation encompasses various acoustic phenomena, including reverberation. Current virtual acoustic methods, ranging from parametric filters to physically accurate solvers, can simulate reverberation with varying degrees of fidelity. The effects of reverberant sounds generated by different propagation algorithms on acoustic distance perception are investigated. In particular, two classes of methods for real-time sound propagation in dynamic scenes, based on parametric filters and on ray tracing, are evaluated. The study shows that ray tracing yields more accurate distance perception than the approximate, filter-based method, suggesting that accurate reverberation in VR results in better reproduction of acoustic distances.
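As background on why reverberation fidelity affects distance perception, a classic auditory distance cue is the direct-to-reverberant energy ratio. The sketch below uses the textbook critical-distance approximation for an omnidirectional source in a diffuse field; the function names are illustrative and this is not the paper's model.

```python
import math

def critical_distance(volume_m3, rt60_s):
    """Distance at which direct and reverberant energy are equal
    (classic approximation d_c ~ 0.057 * sqrt(V / RT60) for an
    omnidirectional source in a diffuse field)."""
    return 0.057 * math.sqrt(volume_m3 / rt60_s)

def direct_to_reverberant_db(distance_m, volume_m3, rt60_s):
    """Direct-to-reverberant ratio in dB, a primary auditory distance
    cue: 0 dB at the critical distance, falling 6 dB per doubling."""
    dc = critical_distance(volume_m3, rt60_s)
    return 20.0 * math.log10(dc / distance_m)
```

A propagation method that reproduces this ratio accurately as the listener moves would, on this account, support better distance judgments than a fixed parametric filter.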
ACM SAP, Proceedings of ACM Transactions on Applied Perception, 2016 (First author)
As sound propagation algorithms become faster and more accurate, the question arises as to whether the additional efforts to improve fidelity actually offer perceptual benefits over existing techniques. Could environmental sound effects go the way of music, where lower-fidelity compressed versions are actually favored by listeners? Here we address this issue with two acoustic phenomena that are known to have perceptual effects on humans and that, accordingly, might be expected to heighten their experience with simulated environments. We present two studies comparing listeners' perceptual response to both accurate and approximate algorithms simulating two key acoustic effects: diffraction and reverberation. For each effect, we evaluate whether increased numerical accuracy of a propagation algorithm translates into increased perceptual differentiation in interactive virtual environments. Our results suggest that auditory perception does benefit from the increased accuracy, with subjects showing better perceptual differentiation when experiencing the more accurate rendering method: The diffraction experiment shows a more linearly decaying sound field (with respect to the diffraction angle) for the accurate diffraction method, while the reverberation experiment shows that more accurate reverberation, after modest user experience, results in near-logarithmic response to increasing room volume.
IEEE VR 2016, Proceedings of IEEE TVCG (Honorable Mention for Best Paper) (First author)
Recent research in sound simulation has focused on either sound synthesis or sound propagation, and many standalone algorithms have been developed for each domain. We present a novel technique for coupling sound synthesis with sound propagation to automatically generate realistic aural content for virtual environments. Our approach can generate sounds from rigid bodies based on the vibration modes and radiation coefficients represented by the single-point multipole expansion. We present a mode-adaptive propagation algorithm that uses a perceptual Hankel function approximation technique to achieve interactive runtime performance. The overall approach allows for high degrees of dynamism: it can support dynamic sources, dynamic listeners, and dynamic directivity simultaneously. We have integrated our system with the Unity game engine and demonstrate the effectiveness of this fully automatic technique for audio content creation in complex indoor and outdoor scenes. We conducted a preliminary online user study to evaluate whether our Hankel function approximation causes any perceptible loss of audio quality. The results indicate that subjects were unable to distinguish between audio rendered using the approximate function and audio rendered using the full Hankel function in the Cathedral, Tuscany, and Game benchmarks.
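The flavor of the approximation can be illustrated with the standard far-field asymptotic of the spherical Hankel function of the second kind, which radiation models of this kind evaluate per mode. The `pick_hankel` helper and its fixed error tolerance are hypothetical stand-ins for the paper's perceptually derived switching criterion.

```python
import cmath

def sph_hankel2_exact(n, x):
    """Spherical Hankel function of the second kind, h_n^(2)(x),
    via the closed forms for n = 0, 1 and upward recurrence."""
    h0 = 1j * cmath.exp(-1j * x) / x
    if n == 0:
        return h0
    h1 = -cmath.exp(-1j * x) / x * (1.0 - 1j / x)
    for m in range(1, n):
        h0, h1 = h1, (2 * m + 1) / x * h1 - h0
    return h1

def sph_hankel2_farfield(n, x):
    """Cheap far-field asymptotic: h_n^(2)(x) ~ i^(n+1) e^(-ix) / x."""
    return (1j ** (n + 1)) * cmath.exp(-1j * x) / x

def pick_hankel(n, x, tol=0.01):
    """Use the cheap far-field form when its relative error is below
    tol (a stand-in for a perceptual threshold on audible error)."""
    exact = sph_hankel2_exact(n, x)
    approx = sph_hankel2_farfield(n, x)
    if abs(approx - exact) / abs(exact) < tol:
        return approx
    return exact
```

For large arguments (listener far from the source relative to wavelength) the asymptotic form avoids the recurrence entirely, which is where the runtime savings would come from.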
IEEE VR 2015, Proceedings of IEEE TVCG (Co-Author)
We present an interactive wave-based sound propagation system that generates accurate, realistic sound in virtual environments for dynamic (moving) sources and listeners. We propose a novel algorithm to accurately solve the wave equation for dynamic sources and listeners using a combination of precomputation techniques and GPU-based runtime evaluation. Our system can handle large environments typically used in VR applications, compute spatial sound corresponding to the listener's motion (including head tracking), and handle both omnidirectional and directional sources, all at interactive rates. As compared to prior wave-based techniques applied to large scenes with moving sources, we observe significant improvement in runtime memory. The overall sound propagation and rendering system has been integrated with the Half-Life 2 game engine, the Oculus Rift head-mounted display, and the Xbox game controller to enable users to experience high-quality acoustic effects (e.g., amplification, diffraction low-passing, high-order scattering) and spatial audio, based on their interactions in the VR application. We provide the results of preliminary user evaluations, conducted to study the impact of wave-based acoustic effects and spatial audio on users' navigation performance in virtual environments.
IEEE Vis 2013, Proceedings of IEEE TVCG (First author)
As the visualization field matures, an increasing number of general toolkits are developed to cover a broad range of applications. However, no general tool can incorporate the latest capabilities for all possible applications, nor can the user interfaces and workflows be easily adjusted to accommodate all user communities. As a result, users often either choose substandard solutions presented in familiar, customized tools or assemble a patchwork of individual applications glued together through ad hoc scripts and extensive manual intervention. Instead, we need the ability to easily and rapidly assemble the best-in-task tools into custom interfaces and workflows that optimally serve any given application community. Unfortunately, creating such meta-applications at the API or SDK level is difficult, time-consuming, and often infeasible due to the sheer variety of data models, design philosophies, limits in functionality, and the use of closed commercial systems. In this paper, we present the ManyVis framework, which enables custom solutions to be built both rapidly and simply by allowing coordination and communication across existing, unrelated applications. ManyVis allows users to combine software tools with complementary characteristics into one virtual application driven by a single, custom-designed interface.