The global approach and organisation of this spatial performance tool were officially presented at the Journées d’Informatique Musicale (Lengelé 2017) and at the ICMC conference (Lengelé 2018), along with a review and classification of some techniques for spatial composition. Whereas my paper in the 2021 ICMC proceedings (Lengelé 2021) details the compositional structure of the tool, describing how series of values from different modules are combined to form the parameters of spatialised sound events, this article focuses on the performative aspect and explains my conception of comprovisation (the composition of a performance tool) and, more specifically, the word ‘imposition’. After summarising the objectives of this research and composition methods in the spatial audio context, the tool is explained and illustrated through three spatial concerts, with a detailed description of how the parameters (especially playback speeds) of numerous spatialised sound events are modified in real time via a combination of different controllers.
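To give a concrete sense of what mapping a physical controller to a playback-speed parameter can look like, here is a minimal, purely illustrative sketch. It is not the mapping used in the tool described here: the function name, the 7-bit MIDI controller range and the two-octave span are all assumptions chosen for the example.

```python
def cc_to_speed(cc, octaves=2.0):
    """Map a 7-bit MIDI controller value (0-127) to a playback-speed
    ratio on an exponential scale, so that equal fader movements give
    equal pitch/speed intervals: 64 -> 1.0 (original speed),
    0 -> 2**-octaves, 127 -> just under 2**octaves."""
    return 2.0 ** (octaves * (cc - 64) / 64.0)

# Centre of the fader leaves the sound at its original speed.
print(cc_to_speed(64))   # 1.0
```

An exponential (rather than linear) curve is a common choice for speed controls because perceived pitch varies with the logarithm of the playback ratio.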
As Ott, Tutescu, Wienboker, Rosenbauer and Gorne (2019) remark on the production of spatial audio content:
in conventional music, audio drama or sound design productions it is very common to think of sound production and ‘spatialization’ (typically in form of a stereo or surround mix) as independent production steps. However, from our experience this does not apply to spatial audio, as the spatialization here becomes a substantial part of the artistic process. There is a remarkable difference between a workflow where the sounds are produced in a conventional studio and then ‘mixed’ in the loudspeaker dome, and a workflow where the composition is made in and for the immersive environment. (Ott et al. 2019: 185)
Moreover, many spatial tools conceive of space only in terms of trajectories. Some extend this concept by integrating dynamic spatialisation linked to gravity, velocity or acceleration (Penha and Oliveira 2013), or a new graphical design (Dilger 2013). Only a few libraries offer new ways to play with space via decorrelation and the distribution of synthesis or effect parameters across channels (Negrao 2014; Bonardi and Guillot 2015; Sédes 2015; Nyström 2018), approaches which require a closer integration of sound and space, that is, of the sound engine and the rendering algorithm.
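Decorrelation, one of the techniques cited above, can be sketched in a few lines. The example below is a generic, hypothetical illustration (not the method of any of the cited libraries): a mono signal is spread over several loudspeaker feeds by giving each copy a distinct short random delay, which lowers the correlation between channels and so diffuses the sound rather than placing it on a single trajectory.

```python
import numpy as np

def decorrelate(mono, n_channels=8, max_delay=64, seed=0):
    """Spread a mono signal over n_channels loudspeaker feeds by
    applying a distinct short random delay (in samples) to each copy.
    Short, unequal delays reduce inter-channel correlation while
    leaving the overall timbre largely intact."""
    rng = np.random.default_rng(seed)
    # replace=False guarantees every channel gets a different delay.
    delays = rng.choice(max_delay, size=n_channels, replace=False)
    out = np.zeros((n_channels, len(mono) + max_delay))
    for ch, d in enumerate(delays):
        out[ch, d:d + len(mono)] = mono
    return out

# A white-noise burst: after decorrelation, channel pairs correlate weakly.
mono = np.random.default_rng(1).standard_normal(4096)
feeds = decorrelate(mono)
r = np.corrcoef(feeds[0], feeds[1])[0, 1]
```

With identical (zero) delays the two feeds would be perfectly correlated (r = 1); with distinct delays on a noise signal, r falls close to zero, which is what makes the result sound spatially broad over a loudspeaker array.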





