
Abstract

Automating music genre classification offers opportunities to enhance user experiences, streamline music management, and unlock insights into the rich and diverse world of music. In this paper, an improved ViT model is proposed that extracts more comprehensive music genre features from Mel spectrograms by leveraging the strengths of both convolutional neural networks and Transformers. The model also incorporates a channel attention mechanism that amplifies differences between channels within the Mel spectrograms of individual music genres, facilitating more precise classification. Experimental results on the GTZAN dataset show that the proposed model achieves an accuracy of 86.8%, improving on earlier approaches and paving the way for more accurate and efficient music genre classification.
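
The abstract describes a hybrid design: a convolutional front end extracts local patterns from the Mel spectrogram, a channel attention mechanism re-weights the resulting feature channels, and a Transformer encoder models global context before classification over the 10 GTZAN genres. The sketch below illustrates that general idea only; it is not the authors' published architecture. The class names (ChannelAttention, ConvViTClassifier), the SE-style attention block, and all layer sizes and the token/fusion scheme are illustrative assumptions.

```python
# Hypothetical sketch: CNN stem + channel attention + Transformer encoder
# for genre classification from Mel spectrograms. Layer sizes, the SE-style
# attention block, and the tokenization scheme are illustrative assumptions,
# not the architecture reported in the paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate that re-weights feature channels."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> (B, C)
        return x * w[:, :, None, None]         # amplify informative channels


class ConvViTClassifier(nn.Module):
    """CNN stem for local features; a Transformer encoder for global context."""
    def __init__(self, n_genres: int = 10, dim: int = 128):
        super().__init__()
        self.stem = nn.Sequential(              # local feature extraction
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.BatchNorm2d(dim), nn.ReLU(),
        )
        self.attn = ChannelAttention(dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                               dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.cls = nn.Linear(dim, n_genres)

    def forward(self, mel):                     # mel: (B, 1, n_mels, time)
        f = self.attn(self.stem(mel))           # (B, dim, H', W')
        tokens = f.flatten(2).transpose(1, 2)   # (B, H'*W', dim) token sequence
        out = self.encoder(tokens).mean(dim=1)  # average-pool the tokens
        return self.cls(out)                    # logits over the 10 GTZAN genres


# Example: a batch of two 128-bin Mel spectrograms with 256 time frames each.
logits = ConvViTClassifier()(torch.randn(2, 1, 128, 256))
print(logits.shape)                             # torch.Size([2, 10])
```

The design choice mirrored here is the one the abstract emphasizes: convolutions capture local time-frequency structure cheaply, the channel gate accentuates the channels that differ most between genres, and the Transformer attends across the whole spectrogram so that long-range temporal structure informs the final prediction.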

Details

Title
An improved ViT model for music genre classification based on mel spectrogram
Author
Wu, Pingping; Gao, Weijie; Chen, Yitao; Xu, Fangfang; Ji, Yanzhe; Tu, Juan; Lin, Han
First page
e0319027
Section
Research Article
Publication year
2025
Publication date
Mar 2025
Publisher
Public Library of Science
e-ISSN
1932-6203
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3177093316
Copyright
© 2025 Wu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.