Audio analysis is a rapidly advancing field that spans various domains, including speech, music, and environmental sound. Using spectrograms with Convolutional Neural Networks (CNNs) enables the visualization and extraction of critical audio features by combining time-frequency representations with deep learning. Pooling plays a crucial role in this process, as it reduces dimensionality while retaining essential information. However, existing evaluations of pooling methods primarily emphasize downstream task performance, such as classification accuracy, and often overlook how effectively pooling preserves critical signal features. To address this gap, we use 17 distinct metrics, categorized into four domains, to comprehensively assess various pooling operations. Furthermore, we explore the underexamined relationship between specific pooling techniques and their impact on feature retention across diverse audio applications. Our analysis encompasses spectrograms from three audio domains (speech, music, and environmental sound), identifying their key characteristics and grouping them accordingly. Using this setup, we evaluate the performance of 12 pooling methods across these applications. By investigating the features critical to each task and evaluating how well different pooling techniques preserve them, we provide insights into their suitability for specific applications. This work aims to guide researchers in selecting the most appropriate pooling strategies for their applications, enabling more granular evaluations, improving explainability, and thereby advancing the precision and efficiency of audio analysis pipelines.
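To make the role of pooling concrete, the following minimal sketch (not from the paper; librosa and PyTorch are assumed purely for illustration) applies two common pooling operators to a log-mel spectrogram and shows how each halves the time-frequency resolution while preserving different aspects of the signal.

```python
# Illustrative sketch: 2-D pooling over a spectrogram (assumed tooling,
# not the authors' code). librosa computes the spectrogram; torch pools it.
import librosa
import numpy as np
import torch
import torch.nn as nn

# Load a bundled example clip and compute a log-mel spectrogram (freq x time).
y, sr = librosa.load(librosa.ex("trumpet"))
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)

# Reshape to (batch, channels, freq, time) as expected by torch pooling layers.
x = torch.from_numpy(log_mel).float()[None, None]

# Two standard pooling operators with 2x2 windows:
max_pool = nn.MaxPool2d(kernel_size=2)  # keeps local peaks (e.g., onsets, formants)
avg_pool = nn.AvgPool2d(kernel_size=2)  # smooths, retaining overall energy

# Both reduce each spatial dimension by half, but discard different detail.
print(x.shape, max_pool(x).shape, avg_pool(x).shape)
# e.g., (1, 1, 128, T) -> (1, 1, 64, T // 2) for both operators
```

Which detail survives this reduction (transient peaks under max pooling versus averaged energy under mean pooling) is exactly the kind of feature-retention behavior the 17 metrics are meant to quantify.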