
Abstract

With conventional stethoscopes, auscultation results can vary from one physician to another because of differences in hearing acuity (which declines with age) and in professional training, and suspicious cardiopulmonary sounds cannot be recorded for later analysis. To address these issues, this paper develops an electronic stethoscope, consisting of a traditional stethoscope with a condenser microphone embedded in the head to collect cardiopulmonary sounds, and proposes an AI-based classifier for those sounds. Different placements of the microphone in the stethoscope head, together with amplification and filtering circuits, were explored and analyzed using the fast Fourier transform (FFT) to evaluate their noise-reduction performance. Testing showed that a microphone placed in the stethoscope head and surrounded by cork provides the best noise reduction. To classify normal (healthy) and abnormal (pathological) cardiopulmonary sounds, each recording is first segmented into small frames, and principal component analysis (PCA) is performed on each frame. A difference signal is obtained by subtracting the PCA reconstruction from the original signal. Mel-frequency cepstral coefficients (MFCCs) and statistical measures of the difference signal are used as features, and an ensemble learning model serves as the classifier. The final result for a recording is determined by voting over the classification results of its frames. Two distinct classifiers are proposed, one for heart sounds and one for lung sounds. The best voting threshold for heart sounds falls between 5% and 45%, and the best threshold for lung sounds falls between 5% and 65%. For heart sounds, 2 s frames with 20% overlap yield the best results: an accuracy of 86.9%, sensitivity of 81.9%, specificity of 91.8%, and F1 score of 86.1%. For lung sounds, 5 s frames with 50% overlap yield an accuracy of 73.3%, sensitivity of 66.7%, specificity of 80%, and F1 score of 71.5%.
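The abstract describes a frame-level pipeline: segmentation with overlap, a PCA-based difference signal, MFCC plus statistical features, an ensemble classifier, and voting over frame predictions. The following is a minimal sketch of that pipeline, not the authors' implementation: the sampling rate, the reshaping of each frame into sub-windows for PCA, the MFCC settings (via librosa), the random-forest ensemble, and the threshold interpretation of the voting step are all illustrative assumptions.

```python
# Minimal sketch of the frame-level pipeline described in the abstract.
# All parameter values below are assumptions, not the paper's configuration.
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

SR = 4000  # assumed sampling rate (Hz)

def segment(signal, frame_sec=2.0, overlap=0.2, sr=SR):
    """Split a recording into fixed-length frames with fractional overlap."""
    frame_len = int(frame_sec * sr)
    hop = int(frame_len * (1.0 - overlap))
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def pca_difference(frame, sub_len=200, n_components=1):
    """Difference between a frame and its PCA reconstruction.

    The frame is reshaped into short sub-windows (an assumption), PCA keeps
    the dominant component(s), and the reconstruction is subtracted.
    """
    n_sub = len(frame) // sub_len
    mat = frame[:n_sub * sub_len].reshape(n_sub, sub_len)
    pca = PCA(n_components=n_components)
    recon = pca.inverse_transform(pca.fit_transform(mat))
    return (mat - recon).ravel()

def features(diff_signal, sr=SR):
    """MFCC means plus simple statistics of the difference signal."""
    mfcc = librosa.feature.mfcc(y=diff_signal.astype(np.float32), sr=sr, n_mfcc=13)
    stats = [diff_signal.mean(), diff_signal.std(),
             diff_signal.min(), diff_signal.max()]
    return np.concatenate([mfcc.mean(axis=1), stats])

def classify_recording(signal, clf, vote_threshold=0.25):
    """Label a recording abnormal (1) if the fraction of abnormal frames
    exceeds the voting threshold (the paper tunes this per sound type)."""
    frame_preds = [clf.predict(features(pca_difference(f)).reshape(1, -1))[0]
                   for f in segment(signal)]
    return int(np.mean(frame_preds) > vote_threshold)

# Training would fit the ensemble on per-frame feature vectors, e.g.:
# clf = RandomForestClassifier(n_estimators=100).fit(X_frames, y_frames)
```

In this sketch, separate heart-sound and lung-sound classifiers would simply be two such models trained with different frame lengths, overlaps, and voting thresholds, as reported in the abstract.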

Details

Title
Development of an Electronic Stethoscope and a Classification Algorithm for Cardiopulmonary Sounds
Author
Yu-Chi Wu 1; Chin-Chuan Han 2; Chao-Shu Chang 3; Fu-Lin Chang 1; Shi-Feng Chen 1; Tsu-Yi Shieh 4; Hsian-Min Chen 5; Jin-Yuan Lin 1

1 Department of Electrical Engineering, National United University, Miaoli City 36003, Taiwan; [email protected] (F.-L.C.); [email protected] (S.-F.C.); [email protected] (J.-Y.L.)
2 Department of Computer Science and Information Engineering, National United University, Miaoli City 36003, Taiwan; [email protected]
3 Department of Information Management, National United University, Miaoli City 36003, Taiwan; [email protected]
4 Section of Clinical Training, Department of Medical Education, Taichung Veterans General Hospital, Taichung City 40705, Taiwan; [email protected]; Division of Allergy, Immunology and Rheumatology, Taichung Veterans General Hospital, Taichung City 40705, Taiwan
5 Center for Quantitative Imaging in Medicine (CQUIM), Department of Medical Research, Taichung Veterans General Hospital, Taichung City 40705, Taiwan; [email protected]
First page
4263
Publication year
2022
Publication date
2022
Publisher
MDPI AG
e-ISSN
1424-8220
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2674407087
Copyright
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.