Abstract
This paper presents the design and implementation of an automatic music transcription algorithm for piano audio, built on a convolutional neural network whose structure and parameters are optimized for the task. For feature extraction, we adopt cepstral coefficients derived from cochlear filters, a technique widely used in speech signal processing, and apply it to the transformed musical audio. Conventional convolutional neural networks typically process piano audio with a single, universally shared convolutional kernel, which fails to account for how information varies across frequency bands. To address this, we use a bank of 24 Mel filters with distinct center frequencies spanning 105 to 19,093 Hz, matched to the 44,100 Hz sampling rate of the converted music. This setup allows the system to capture the key characteristics of piano audio signals across a wide frequency range and provides a solid frequency-domain foundation for the subsequent transcription stages.
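The following is a minimal sketch, not the authors' implementation, of the feature-extraction step the abstract describes: a 24-band Mel filter bank spanning roughly 105 to 19,093 Hz at a 44,100 Hz sampling rate, followed by log compression and a DCT to obtain cepstral coefficients. The filter count, frequency bounds, and sampling rate come from the abstract; the FFT size, hop length, number of coefficients, and the use of a DCT-based cepstrum in place of the unspecified cochlear filter shapes are assumptions for illustration.

```python
# Sketch of 24-band Mel filter-bank cepstral features for piano audio.
# Assumed values (not stated in the abstract): N_FFT, HOP, n_ceps.
import numpy as np
import librosa
from scipy.fft import dct

SR = 44_100                    # sampling rate of the converted piano audio
N_FFT = 2048                   # assumed analysis window size
HOP = 512                      # assumed hop length
N_MELS = 24                    # number of Mel filters (from the abstract)
FMIN, FMAX = 105.0, 19_093.0   # center-frequency range (from the abstract)


def cepstral_features(wav_path: str, n_ceps: int = 13) -> np.ndarray:
    """Return an (n_frames, n_ceps) matrix of cepstral coefficients."""
    y, _ = librosa.load(wav_path, sr=SR, mono=True)

    # Power spectrogram projected onto the 24-band Mel filter bank.
    mel_energies = librosa.feature.melspectrogram(
        y=y, sr=SR, n_fft=N_FFT, hop_length=HOP,
        n_mels=N_MELS, fmin=FMIN, fmax=FMAX, power=2.0,
    )

    # Log compression, then a type-II DCT across the filter axis gives
    # cepstral coefficients that could serve as CNN input features.
    log_energies = np.log(mel_energies + 1e-10)
    ceps = dct(log_energies, type=2, axis=0, norm="ortho")[:n_ceps]
    return ceps.T
```

A transcription network would consume the resulting frame-by-coefficient matrix (optionally stacked with neighboring frames) as its input; the exact kernel configuration per frequency band is specific to the paper's model and is not reproduced here.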
Subject terms
Musical instruments;
Sound filters;
Machine learning;
Music;
Automatic classification;
Signal processing;
Wavelet transforms;
Artificial intelligence;
Adaptability;
Fourier transforms;
Artificial neural networks;
Optimization;
Neural networks;
Frequency ranges;
Frequencies;
Algorithms;
Information processing;
Audio signals;
Audio data;
Pianos;
Information retrieval;
Parameter estimation