
Abstract

Imagine a narrative that unfolds as you explore the world around you, guiding your journey through sound. This thesis explores the procedural generation of sound walks: interactive, geolocated audio narratives that users experience by physically moving through real-world environments. Whereas traditional sound walks are confined to predefined routes, this work investigates how narrative structures can be dynamically mapped onto any walkable setting.

The central problem addressed is the lack of open, reusable systems capable of adapting interactive narratives to arbitrary geographic locations. The main objective was therefore to develop a modular, extensible framework that enables narrative-driven sound walks to be deployed in a location-aware, context-adaptive manner.

The methodology follows a Design Science Research approach, combining iterative design, development, and evaluation. An initial prototype supporting static sound walks was tested with users to gather requirements. Based on this, multiple procedural content generation (PCG) methods were designed and implemented to algorithmically map Points of Interest (POIs) from a narrative graph onto real-world pedestrian paths, using OpenStreetMap data and spatial constraints such as walking distances and audio durations. SoundSpot, a fully functional, cross-platform mobile web application, was developed to deliver and manage these experiences in real time.
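
As a concrete illustration of this kind of POI-to-path mapping, the following Python sketch assigns the POIs of a linear narrative to nodes of a small pedestrian graph, choosing each next node so that the walking distance roughly matches the duration of the preceding audio segment. It is a minimal sketch, not the thesis implementation: the toy graph, the assumed walking speed, the `place_pois` helper, and the greedy nearest-fit strategy are illustrative assumptions, and in practice the pedestrian network would be built from OpenStreetMap data.

```python
import networkx as nx

WALK_SPEED_M_PER_S = 1.3  # assumed average walking speed

# Toy pedestrian graph: nodes are intersections, edge weights are metres.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 120), ("B", "C", 90), ("C", "D", 150),
    ("B", "E", 200), ("E", "D", 110), ("A", "E", 260),
])

# Narrative graph flattened to an ordered list of POIs; each carries the
# duration (seconds) of the audio that plays while walking to the next POI.
narrative_pois = [
    {"id": "intro",    "audio_s": 90},
    {"id": "twist",    "audio_s": 75},
    {"id": "climax",   "audio_s": 120},
    {"id": "epilogue", "audio_s": 0},
]

def place_pois(graph, pois, start):
    """Greedy, constraint-aware placement: anchor each next POI to the node
    whose walking distance best matches the preceding audio duration,
    avoiding node reuse to keep the walk spatially diverse."""
    placement = {pois[0]["id"]: start}
    used = {start}
    current = start
    for poi, nxt in zip(pois, pois[1:]):
        target_m = poi["audio_s"] * WALK_SPEED_M_PER_S
        dists = nx.single_source_dijkstra_path_length(graph, current, weight="weight")
        candidates = [(abs(d - target_m), n) for n, d in dists.items() if n not in used]
        if not candidates:
            break
        _, best = min(candidates)
        placement[nxt["id"]] = best
        used.add(best)
        current = best
    return placement

print(place_pois(G, narrative_pois, start="A"))
```

A greedy nearest-fit keeps the example short; the constraint-aware methods evaluated in the thesis may use different placement strategies over the same kind of spatial constraints.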

The resulting system introduces a novel approach to PCG by integrating spatial reasoning with narrative logic. Evaluation of the implemented methods demonstrated that structured and constraint-aware strategies significantly outperform naive approaches, ensuring spatial diversity while preserving narrative coherence. The prototype was successfully used to instantiate sound walks, including a dynamic version of the Play-The-Odds project, which supports communication around hereditary cancer. This lays the groundwork for future interdisciplinary collaborations in healthcare, as well as in fields such as tourism, education, and cultural heritage.
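
The abstract does not specify how spatial diversity was measured; as a hedged illustration, the sketch below defines one plausible metric, the mean pairwise walking distance between the nodes that POIs are placed on, so that spread-out placements score higher than clustered ones. The metric, the toy graph, and the `spatial_diversity` name are assumptions for illustration only.

```python
import itertools
import networkx as nx

def spatial_diversity(graph, placement):
    """Mean pairwise shortest-path distance (in edge-weight units, e.g. metres)
    between the nodes that POIs were placed on."""
    nodes = list(placement.values())
    pairs = list(itertools.combinations(nodes, 2))
    if not pairs:
        return 0.0
    total = sum(
        nx.shortest_path_length(graph, a, b, weight="weight") for a, b in pairs
    )
    return total / len(pairs)

# Toy example: a spread-out placement scores higher than a clustered one.
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 100), ("B", "C", 100), ("C", "D", 100)])
print(spatial_diversity(G, {"intro": "A", "twist": "C", "epilogue": "D"}))  # spread out
print(spatial_diversity(G, {"intro": "A", "twist": "B", "epilogue": "A"}))  # clustered
```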

In conclusion, this work confirms the technical feasibility and experiential value of dynamically generated sound walks, establishing a foundation for scalable, context-sensitive audio storytelling in pervasive applications.

Details

Business indexing term: 1010268
Title: Procedural Generation of Interactive Audio Narratives for Pervasive Application
Number of pages: 157
Publication year: 2025
Degree date: 2025
School code: 5896
Source: MAI 87/5(E), Masters Abstracts International
ISBN: 9798265420039
University/institution: Universidade do Porto (Portugal)
University location: Portugal
Degree: M.C.E.
Source type: Dissertation or Thesis
Language: English
Document type: Dissertation/Thesis
Dissertation/thesis number: 32306350
ProQuest document ID: 3275477441
Document URL: https://www.proquest.com/dissertations-theses/procedural-generation-interactive-audio/docview/3275477441/se-2?accountid=208611
Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database: ProQuest One Academic