Abstract

Accurate sound reproduction in virtual reality (VR) is essential for creating realistic environments, enhancing presence and spatial immersion. A strong understanding of acoustic principles and computational methods is crucial for simulating sound propagation in VR worlds.

Advanced methods such as ray tracing and wave-based simulation handle complex interactions between audio sources, surfaces, and spatial features, enabling accurate analysis of reverberation and diffraction. Interactive sound design and spatial audio rendering tools enhance immersion by adjusting dynamically to user interactions. At a higher level of analysis, the need to integrate sound simulations seamlessly into web-based applications becomes apparent, where efficiency and user-friendliness are paramount. In web environments, there is a pressing need for simpler models of sound propagation that are precise and easy to implement, with a concise syntax and a hierarchical structure. Such simplified hierarchical models enhance interactivity by mapping user actions to audio parameters, enabling developers to create immersive spatial audio experiences with good performance and ease of implementation.
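As a toy illustration of the kind of mapping such models rely on, the sketch below converts one user-driven quantity (listener distance from a source) into one audio parameter (gain), using the standard inverse-distance rolloff curve found in common spatial audio APIs; the function name and default values are illustrative, not taken from the thesis.

```javascript
// Hedged sketch: map a user interaction (distance to a source) to an audio
// parameter (gain). Uses the standard "inverse" distance model:
//   gain = ref / (ref + rolloff * (max(d, ref) - ref))
// gainForDistance is a hypothetical helper, not part of the thesis framework.
function gainForDistance(distance, refDistance = 1, rolloff = 1) {
  const d = Math.max(distance, refDistance); // no boost inside the reference radius
  return refDistance / (refDistance + rolloff * (d - refDistance));
}

// At the reference distance the gain is 1; it falls off hyperbolically beyond it.
console.log(gainForDistance(1)); // 1
console.log(gainForDistance(3)); // ~0.333
```

In a real application this value would be written to a gain parameter each time the user moves, so the audio engine tracks the interaction continuously.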

This research, articulated in this thesis, aims to integrate the synthesis and processing of high-quality audio into web environments. In particular, its main purpose is to introduce mixing, processing, and filtering tasks, together with acoustic properties associated with scene geometry and three-dimensional (3D) spatial sound, within a programming standard that leverages the structure and functionality of an audio specification. As a result, developers can create interactive audio experiences directly in the browser, and the use of an audio graph model simplifies the development of these audio applications. This thesis presents a literature study of recent findings and advances in spatial sound propagation. A systematic methodology was developed to run a set of scenarios and evaluation techniques, and a new framework was designed and evaluated to overcome the limitations of dedicated systems.
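The audio graph model referred to above can be pictured as a chain of processing nodes. The sketch below wires such a chain (source → filter → 3D panner → output) using the standard Web Audio API, which is the browser audio specification this kind of work builds on; `buildSpatialChain` is a hypothetical helper name, and the thesis's own framework may structure the graph differently.

```javascript
// Hedged sketch of the audio-graph model using the standard Web Audio API.
// Each create*() call returns a node; connect() forms the edges of the graph.
function buildSpatialChain(ctx, position) {
  const source = ctx.createOscillator();   // sound source node
  const filter = ctx.createBiquadFilter(); // e.g. lowpass for absorption-like damping
  const panner = ctx.createPanner();       // 3D spatialization node
  panner.panningModel = "HRTF";            // head-related binaural rendering
  panner.positionX.value = position.x;     // place the source in 3D space
  panner.positionY.value = position.y;
  panner.positionZ.value = position.z;
  source.connect(filter);                  // source -> filter
  filter.connect(panner);                  // filter -> panner
  panner.connect(ctx.destination);         // panner -> speakers
  return { source, filter, panner };
}

// In a browser: buildSpatialChain(new AudioContext(), { x: 1, y: 0, z: -2 });
```

Because every effect is a node with the same connect/disconnect interface, mixing, filtering, and spatialization stages can be rearranged without rewriting the signal-processing code, which is the simplification the audio graph model provides.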

Significant outcomes have been achieved in enhancing high-fidelity audio integration in online settings. Applications using the proposed framework demonstrated resilience, flexibility, and practical potential through extensive testing. A demonstration application highlighted its strengths and user-focused design. Validation, including participant input, assessed key metrics like visual appeal, interaction clarity, and satisfaction, providing insights for improvement. However, the framework primarily focuses on individual user experiences, lacking support for collaborative interactions.

Future work could explore mechanisms to facilitate the simultaneous participation of multiple users in shared examples, fostering collaborative engagement. There is potential to explore more complex scenarios and to include additional 3D objects with detailed materials and textures to further enhance realism. Security in 3D web environments is also a concern, requiring protection against unauthorized access. Lastly, the framework could be extended for use across different platforms and tools, allowing for broader compatibility and integration within diverse web development environments.

Details

Business indexing term
1010268
Title
Immersive Sound Specification in Virtual Reality Environments
Number of pages
215
Publication year
2025
Degree date
2025
School code
2074
Source
DAI-A 87/1(E), Dissertation Abstracts International
ISBN
9798288898471
University/institution
University of South Wales (United Kingdom)
University location
United Kingdom
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32124178
ProQuest document ID
3235008844
Document URL
https://www.proquest.com/dissertations-theses/immersive-sound-specification-virtual-reality/docview/3235008844/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic