1. Introduction
1.1. Context and Objectives
Respiratory conditions are among the most common diseases associated with substantial morbidity and mortality [1], representing a growing health burden. Rapidly and reliably diagnosing pulmonary diseases is vital for establishing appropriate medical management and preventing further respiratory decompensation. Most conventional diagnostic tools (e.g., chest radiographs) can only be performed intermittently, and the standard physical exam (e.g., visual inspection and percussion) offers limited diagnostic accuracy [2,3,4]. Pulmonary auscultation is a noninvasive, safe, inexpensive, and easy-to-perform way to rapidly evaluate patients with pulmonary symptoms, making it an essential component of the clinical examination [5]. However, auscultation is operator-dependent and subject to inherent interobserver variability [2,3].
Deep learning (DL) is a subfield of machine learning (ML) that has seen increased exploration thanks to the recent growth in computational power and the availability of large databases [6]. In lay terms, ML allows a machine to learn rules and insights from input data, thus allowing it to apply those rules to generate predictions from data in new situations [7]. DL takes advantage of its multilayered architecture by sequentially feeding representations through multiple layers, generating increasingly distinguishable representations of the data. This process allows the machine to learn highly complex functions [6].
ML and DL have shown encouraging results in healthcare for diagnosing diseases, primarily by analyzing images. For instance, radiology and pathology have benefitted from DL in disease diagnosis [8]. By utilizing large databases, classification algorithms have become increasingly accurate at detecting abnormalities in images and classifying them into multiple disease types [9], promising to reduce physician burnout and enhance test interpretation. Similarly, ML and DL can process audio signals and therefore classify sounds, such as those captured by auscultation, with the potential to aid clinicians in detecting and classifying heart [10] and lung [11] pathologies.
Respiratory sounds (RS) comprise relevant diagnostic information for pulmonary diseases [12]. These are heard over the chest wall and originate from the air movement in and out of the lungs during the respiratory cycle. RS interpretation in auscultation is often used in diagnosing lung pathologies, such as obstructive or restrictive respiratory diseases. As expected, these sounds are nonstationary and nonlinear, prone to noise contamination, making it hard for clinicians to detect abnormalities [13]. The diagnostic value of auscultation in detecting abnormal RSs could be improved if an objective and standardized interpretation approach is implemented [14,15]. This review aims to assess the diagnostic accuracy of ML and DL algorithms in abnormal lung sound detection and classification and evaluate the differences in methodology and reporting in the published literature to identify common issues that potentially slow down the progress of this promising field.
1.2. Process of Automated Abnormal Lung Sounds Classification
DL can recognize lung disorders and abnormalities based on RS analysis. These computer-assisted techniques increase the objectivity in detecting and diagnosing adventitious or pathological sounds. Figure 1 illustrates an overview of the automatic abnormal lung sounds classification process, which typically includes the following steps: audio recording, file preprocessing, feature extraction, and classification.
1.2.1. Lung Sound Recording
Lung sounds are typically recorded for training healthcare workers and for research analysis; these audio samples can be broken down to objectively describe their duration, waveform, and frequency components [16]. Recordings are obtained in one of two ways: either directly by trained personnel who perform the auscultation with a device designed or adapted (with a microphone) for sound recording, or by attaching sensors to the subject’s chest, which allows prolonged or continuous recording [17]. The most used sensors are piezoelectric microphones, contact microphones, electret microphones, and the more widely distributed electronic stethoscopes [11]. However, this step is subject to variability among study designs due to differences in auscultation points, recording devices, and environmental conditions.
1.2.2. Audio Preprocessing
Preprocessing is an essential step, as it modifies the samples to better fit the purpose of the intended analysis, reduces the storage burden, and facilitates feature extraction [18]. One component of preprocessing is denoising, which aims to eliminate signals corresponding to interference sources such as background noise, heartbeats, and movement [19] while preserving the valuable information; the resulting signal is cleaner and more suitable for further analysis. The most widespread denoising techniques are the discrete wavelet transform (DWT), singular value decomposition (SVD), and adaptive filtering, which provide robust denoising but can be computationally expensive [20]. Smoothing is another approach, in which various techniques are used to minimize fluctuations in a signal regardless of their source [21]. Other preprocessing methods include segmentation, which separates breath cycles into their corresponding phases, and amplitude normalization, which reduces amplitude variations attributable to factors like the gain of the recording device or subject demographics [22]. Adequate preprocessing of the audio files impacts the overall accuracy of the models [20].
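As a minimal illustration of the normalization and smoothing steps described above (a toy Python sketch, not code from any reviewed study), the operations might look like this:

```python
def normalize_amplitude(signal):
    """Scale a signal so its peak magnitude is 1, reducing amplitude
    differences attributable to device gain (illustrative only)."""
    peak = max(abs(s) for s in signal)
    if peak == 0:
        return list(signal)
    return [s / peak for s in signal]

def moving_average(signal, window=3):
    """Simple smoothing: replace each sample with the mean of a small
    neighborhood to damp high-frequency fluctuations."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

raw = [0.0, 2.0, -4.0, 2.0, 0.0]
print(normalize_amplitude(raw))  # peak magnitude becomes 1.0
print(moving_average(raw))
```

Real pipelines would apply DWT- or SVD-based denoising before these steps; this sketch only conveys the shape of the simpler transformations.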
1.2.3. Feature Extraction
Feature extraction is the identification of a set of unique properties of a signal that will be used for comparison in the classification stage. In this step, a large input signal with many redundant components is transformed into a smaller set of representative features that describe the original signal accurately, facilitating and expediting the classification step [23]. In general, features are extracted from one of the following domains: time, frequency, or time–frequency [11]. Some established techniques for feature extraction include autoregressive models, characterized by their short training time and low variance; mel-frequency cepstral coefficients (MFCCs), which are effective for reducing dimensionality but may not capture all the nuances of complex data; and spectral and wavelet-based features, which offer multiresolution analysis and precise feature localization [11].
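As a hedged sketch of what feature extraction can look like in practice, the following Python snippet computes two simple time-domain features (zero-crossing rate and short-time energy) on fixed-length frames. It is illustrative only and far simpler than the MFCC or wavelet pipelines used in the included studies:

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ — a crude
    proxy for dominant frequency content."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(s * s for s in frame) / len(frame)

def extract_features(signal, frame_len=4):
    """Reduce a long signal to a small list of per-frame feature pairs."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        feats.append((zero_crossing_rate(frame), short_time_energy(frame)))
    return feats

sig = [0.1, -0.2, 0.3, -0.4, 0.5, 0.5, 0.5, 0.5]
print(extract_features(sig))
```

The oscillating first frame yields a high zero-crossing rate, while the constant second frame yields zero crossings but higher energy, showing how such features can separate signals with different characters.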
1.2.4. Classification
ML and DL algorithms can classify the preprocessed signals and extracted features based on their characteristics, allowing them to differentiate between normal and abnormal sounds automatically. Two main strategies exist for feeding the data into the model: holdout validation and cross-validation. In holdout validation, the dataset is divided into fixed training, validation, and test splits. The model uses the training data to learn its parameters; the validation data then allows the algorithm to search for the optimal set of hyperparameters; finally, the test data remains hidden throughout model building and is used only to assess performance [24]. In the cross-validation approach, multiple partitions of the dataset are generated, allowing each partition to be used multiple times and for different purposes, potentially improving the statistical reliability of the classification results [25]. The goal of classification is to divide the sound signals into normal or abnormal [11], and more complex algorithms may go as far as differentiating between types of sounds or even underlying conditions. The performance metrics are derived from the results of this step, and measures such as accuracy or sensitivity can be calculated. Of note, the performance metrics depend not only on the chosen classifier but also on all the preceding steps.
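The two data-feeding strategies described above can be sketched in a few lines of Python (an illustrative toy; the 60/20/20 split ratio is an arbitrary choice, not one prescribed by the reviewed studies):

```python
import random

def holdout_split(items, train=0.6, val=0.2, seed=0):
    """Fixed train/validation/test split; the test portion stays
    hidden until final evaluation."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

def k_fold(items, k=5):
    """Yield (train, test) partitions so every item is tested exactly once."""
    for i in range(k):
        test = items[i::k]
        train = [x for j, x in enumerate(items) if j % k != i]
        yield train, test

data = list(range(10))
tr, va, te = holdout_split(data)
print(len(tr), len(va), len(te))  # 6 2 2
print(len(list(k_fold(data, k=5))))  # 5
```

In cross-validation every recording contributes to both training and testing across folds, which is why it can yield more statistically reliable estimates on small lung sound datasets.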
1.3. Public Lung Sound Databases
The increasing popularity of artificial intelligence (AI) in biosignal classification coexists with significant interest in developing public databases that provide the much-needed clinical data essential for building classification models. Previous reviews have noted that biosignal databases tend to focus on electrocardiogram (ECG) data [26]. Nonetheless, publicly available databases have been essential in developing abnormal lung sound [11] and cardiac [10] classification models. Undoubtedly, interest in automatic lung sound detection has resurfaced mainly due to the widespread growth of ML and DL techniques, as well as the emergence of the aforementioned publicly accessible databases [27], which narrow the gap between ML developers and available lung sound audio data. Despite the surge in the use of large lung sound databases for DL algorithm development, no systematic evaluation has yet examined the accuracy and reporting variations in the corresponding papers published in the last ten years.
2. Materials and Methods
2.1. Bibliographic Search
The systematic review was performed following the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [28]. A comprehensive literature search for articles published between January 1990 and December 2022 was carried out by an experienced specialist medical librarian (D.J.G.) on five databases: MEDLINE, Embase, the Cochrane Central Register of Controlled Trials, Web of Science, and Scopus. The full search strategy can be found in the Supplementary Files and was independently verified by two authors (J.G.-M. and A.L.). The final study protocol was registered on the OSF server.
2.2. Eligibility Criteria
We included studies that (a) proposed an ML classifier for the detection of adventitious and pathological lung sounds in adults; (b) used publicly available (online or on CD) lung sound databases; and (c) reported at least one performance metric for classification, such as sensitivity, specificity, or accuracy. Book chapters, review papers, abstracts of communications or meetings, letters to the editor, commentaries on articles, unpublished works, and study protocols were excluded, as were studies focused on the pediatric population or using nonpublic audio recordings. A complementary search using the references of the included papers was also conducted. Table 1 details the eligibility criteria.
2.3. Article Selection
Abstracts were screened by H.-Y.W. and J.G.-M. using the inclusion criteria. Full texts were independently reviewed in duplicate by eight reviewers organized in pairs (H.-Y.W., S.H., Y.P., A.T., J.G.-M., I.A., I.K., and A.L.). Disagreements were resolved during consensus meetings with a third reviewer (V.H.). Covidence software [29] was used for data collection. The studies’ outcomes were reported as the diagnostic accuracy for abnormal sound or pathology detection (sensitivity, specificity, and accuracy, when available). The types of performance measures reported depended on the approach of each study.
2.4. Data Extraction
The study details for the included articles were abstracted by ten independent researchers (H.-Y.W., S.H., Y.P., A.T., K.L., D.V., S.Q., J.G.-M., I.A., and I.K.) using a standardized data extraction form, and each article was assessed by two different researchers. The reviewers resolved discrepancies by consensus or in consultation with a third party, as needed. The data abstracted included the baseline details (year of publication and first author); study design (type of lung sound or pathology evaluated, DL algorithm used, feature extraction techniques, training/validation/test split, and evidence of external validation); dataset characteristics (number of recordings, auscultation points, the sensor used, and reference standard); and the performance metrics (reported as accuracy, sensitivity, and specificity).
2.5. Quality Assessment
We assessed the risk of bias (ROB) and applicability concerns for every included study using a modified QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies-2) instrument [30]. Ten researchers independently assessed the included articles, and the quality assessment for each article was performed by at least two authors. Discrepancies were resolved by consultation with a third author (A.L.), who provided the final adjudication. Given the poor standards of quality assessment (QA) reporting for AI-based diagnostic accuracy studies and the lack of validated QA tools [31], we modified the QUADAS-2 instrument to fit the purposes of this review. The four core domains for ROB evaluation were maintained, and new signaling questions tailored to this review were assessed. Given that the eligible studies used audio files from publicly available lung sound databases, such data sources were accessed when possible. This allowed for the assessment of the ROB during the creation of each database’s audio recordings. When the corresponding lung sound database was no longer accessible, the signaling question was answered as “N/A”, indicating a lack of information. The ROB for each domain was judged as low only when the answers to all signaling questions were “yes”; conversely, the ROB was deemed high in the presence of at least one signaling question answered “no”. If at least half of the signaling questions of a domain could not be assessed due to a lack of information, the ROB for the domain was deemed “unclear”. When the reference standard used to determine the ground truth sound classification was interpreted by a human expert, this was listed as a potential source of bias and the corresponding question answered “no”. Applicability concerns were evaluated in the reference standard, index test, and patient selection domains, as recommended by the original QUADAS-2 instrument [32].
Notably, a significant portion of the studies used databases known to contain pediatric patients; therefore, these studies were classified as having a “high” risk regarding applicability.
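For illustration only, the per-domain judgment rules described above can be expressed as a short function; where the rules could overlap (e.g., a “no” answer alongside many “N/A” answers), we assume here that any “no” takes precedence:

```python
def domain_rob(answers):
    """Risk-of-bias judgment for one QUADAS-2 domain, following the
    rules in the Methods. `answers` holds "yes", "no", or "n/a" for
    each signaling question. Precedence of "no" over "n/a" is an
    assumption made for this sketch."""
    answers = [a.lower() for a in answers]
    if "no" in answers:
        return "high"       # at least one failed signaling question
    if answers.count("n/a") * 2 >= len(answers):
        return "unclear"    # at least half could not be assessed
    if all(a == "yes" for a in answers):
        return "low"        # low only when every answer is "yes"
    return "unclear"

print(domain_rob(["yes", "yes", "yes"]))         # low
print(domain_rob(["yes", "no", "yes"]))          # high
print(domain_rob(["yes", "n/a", "n/a", "n/a"]))  # unclear
```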
3. Results
A standardized approach was used for this systematic review. A database search identified a total of 3143 records. The removal of 650 duplicates left 2493 articles. Of these, 2311 articles were excluded based on title and abstract screening, leaving 182 full-text articles to be assessed for eligibility. The main reasons for exclusion were not using audio recordings from publicly available databases and not proposing an ML/DL algorithm for abnormal lung sound classification. A few studies developed an algorithm but did not test it with patient data or did not report performance metrics. This selection process resulted in a total of 62 articles included in the qualitative synthesis. Figure 2 depicts this process in detail. Supplementary Table S1 presents the characteristics of each included study, namely the classifier and database used, the best obtained performance metrics, and the classification categories.
3.1. Sources of Lung Sound Recordings
As mentioned earlier, this review focuses on studies that used abnormal lung sound recordings from public databases, as opposed to studies that recorded their own audio samples. Creating such databases involves a series of decisions, including the data recording protocol, recording and storage hardware, time and place of collection, and audio file labeling. Given these many variables, biosignal repositories are prone to heterogeneity and inconsistencies in every aspect, even within the same database. For this reason, the characteristics of the databases were retrieved for quality assessment, as stated in the Methods section.
As AI applications in healthcare continue to expand, the number of available data repositories continues to grow. In this review, 17 different data sources were identified. Forty-nine articles used recordings from a single source, whereas thirteen combined audio files from multiple sources. The most frequently used online databases were the International Conference on Biomedical and Health Informatics (ICBHI) 2017 database [27] (66%) and the Respiration Acoustics Laboratory Environment (R.A.L.E.) Lung Sounds database [33] (23%), whereas other databases, such as the King Abdullah University Hospital (KAUH) database or the Stethographics Lung Sound Samples, were used much less often. Some studies used databases that are no longer available online [34,35] or that are accessible only on CD [36,37,38,39], which prevented the quality assessment of their creation process. It is worth noting that the introduction of databases like the one by Rocha et al. [27] in 2017 led to a surge in the production of articles, as observed in Figure 3, which shows the number of studies per year of publication.
3.2. Features of Lung Sounds Databases
The ICBHI 2017 database contains recordings from 126 individuals, obtained by two groups of researchers using the AKG C417L Microphone (AKGC417L), 3M Littmann Classic II SE Stethoscope (LittC2SE), 3M Littmann 3200 Electronic Stethoscope (Litt3200), and Welch Allyn Meditron Master Elite Electronic Stethoscope (Meditron) at university hospitals in Portugal and Greece [27]. Respiratory experts annotated the lung sounds as “crackles, wheezes, a combination of them, or no adventitious respiratory sounds”, and the patients had conditions such as asthma, bronchiectasis, bronchiolitis, COPD, and upper and lower respiratory tract infections. As mentioned earlier, lung sounds from this database were used by most articles, as it is an open-access, readily available database that covers a wide range of diseases and abnormal sounds. In addition, the database authors suggest calculating a series of standard performance metrics, further facilitating the comparison and validation of new classification models.
The other frequently used source was the R.A.L.E. Lung Sounds database [33]. These researchers from Canada used the 3M Littmann 3200 Electronic Stethoscope (Litt3200) and Welch Allyn Meditron Master Elite Electronic Stethoscope (Meditron) to capture over 50 recordings of lung sounds, including wheezes, rhonchi, crackles, squeaks, squawks, and pleural friction rubs, annotated by respiratory experts. This database is commercially available; a license must be acquired before access. Although this resource has been available for over 20 years, a considerably smaller share of the included studies opted to use it. The license includes access to clinical cases and quizzes related to lung sounds.
Notably, one-quarter of the reported databases are accessible only via the physical acquisition of a CD-ROM [40,41,42,43,44], which impaired the quality assessment and the description of their characteristics in this review. Finally, seven of the mentioned databases were not accessible when this review was performed, in all cases because their internet sources were outdated. Their characteristics could therefore only be derived from the descriptions in the included articles; in studies where combined databases were described as a whole, this prevented distinguishing between sources and precluded their separate assessment. Further features of all the databases are described in Table 2.
3.3. Types of Sounds Analyzed
All eligible articles in this review targeted pulmonary sounds, but their algorithms classified sounds differently. Thirty-eight studies (61%) created algorithms that classified sounds into normal or adventitious lung sounds, most commonly crackles and wheezes, although some algorithms also identified rhonchi or stridor. Twenty-one studies (34%) classified recordings into different diseases, with chronic obstructive pulmonary disease (COPD), asthma, pneumonia, and bronchiectasis being the most common. Finally, three studies (5%) created separate algorithms to distinguish adventitious lung sounds and lung pathologies.
3.4. Classification Models
Table 3 contains the most used classifiers in this review, a general description of each, and the included references corresponding to each model. As explained earlier, these techniques are the final step in the process: they classify the abnormal sounds into different categories based on the similarities and differences among their features.
Among the included manuscripts, the most used classifiers were artificial neural networks (ANN), including their subtypes, and support vector machines (SVM). These techniques are examples of supervised learning algorithms, which must be trained with labeled data before classifying unseen data points [52]. Both models can generalize appropriately to unseen data points by minimizing the risk of overfitting, which occurs when a model learns patterns specific to the training sample and generalizes poorly to unseen data [53]. Notably, many variations of ANN were tested in the included studies, ranging from the basic multilayer perceptron (MLP), composed of a series of fully connected layers [54], to the more complex recurrent neural networks (RNN) and convolutional neural networks (CNN). Ensemble methods such as Random Forests and boosting algorithms, which combine multiple learning algorithms to improve estimates and classification performance [55], were occasionally used.
Table 3. The most used machine learning classification techniques.

Name | Features | Refs.
---|---|---
ANN | Inspired by networks of neurons, ANN models contain multiple layers of computing nodes that operate as nonlinear summing devices. These nodes communicate with each other through connection lines; the weight of each line is adjusted as the model is trained [56]. Subtypes include the CNN, RNN, DNN, DBN, and MLP. | [18,35,36,38,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91]
SVM | This maximal margin classifier aims to find the hyperplane in an N-dimensional space that distinctly classifies the data points [92]. | [14,37,59,63,65,66,78,87,93,94,95,96,97,98,99]
k-NN | This classifier assigns a set of unlabeled data to the class containing the most similar labeled data points [100]. | [14,39,59,63,65,98,99]
DT | This technique classifies data by posing questions about the item’s features. Each question is represented in a node, and every node directs to a series of child nodes, one for each possible answer, forming a hierarchical tree [101]. | [59,87,98,102,103,104]
DA | This supervised learning technique transforms the features of a data point into a lower-dimensional space, maximizing the ratio of the between-class variance to the within-class variance, which results in maximized class separability [105]. | [87,106,107]
RF | Random Forest builds multiple decision trees using random samples of data points for each tree and random subsets of the predictors; the resulting forest provides fitted values more accurate than those of a single tree [108]. | [78,109]
GMM | Mixture models are derived from the idea that any distribution can be expressed as a mixture of distributions of known parameterization (such as Gaussians). An optimization technique (such as expectation maximization) can then be used to estimate the parameters of each component distribution [110]. | [34,35,111]
HMM | The hidden Markov model creates a sequence of GMM models to explain the input data. Its main difference from the GMM is that it accounts for the temporal progression of the data, whereas the GMM treats each sound as a single entity [112]. | [111,113,114,115]
GB | The main idea behind boosting techniques is to add a series of models to an ensemble sequentially. At each iteration, a new model is trained on the error of the whole ensemble [116]. | [99,117]
LR | Logistic regression describes and tests hypotheses about relationships between a categorical outcome variable and one or more categorical or continuous predictor variables [118]. | [63,119]
NB | This supervised learning algorithm is based on Bayes’ theorem; it determines the outcome from the probability distribution of the features, assuming the features are conditionally independent of one another given the class [120]. | [39]

Abbreviations: ANN: Artificial Neural Network; CNN: Convolutional Neural Network; RNN: Recurrent Neural Network; DNN: Deep Neural Network; DBN: Deep Belief Network; MLP: Multilayer Perceptron; SVM: Support Vector Machine; k-NN: k-Nearest Neighbors; DT: Decision Tree; DA: Discriminant Analysis; RF: Random Forest; GMM: Gaussian Mixture Model; HMM: Hidden Markov Model; GB: Gradient Boosting; LR: Logistic Regression; NB: Naive Bayes.
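As a concrete illustration of one of the simpler entries in Table 3, a minimal k-NN classifier can be written in a few lines of Python; the feature vectors and labels below are hypothetical, not taken from any included study:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Assign `query` the majority label among its k nearest
    (Euclidean) labeled neighbors in `train`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbors = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [label for _, label in neighbors]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical 2-D feature vectors (e.g., zero-crossing rate, energy)
train = [((0.1, 0.2), "normal"), ((0.15, 0.25), "normal"),
         ((0.8, 0.9), "wheeze"), ((0.85, 0.8), "wheeze"),
         ((0.9, 0.95), "wheeze")]
print(knn_predict(train, (0.82, 0.88)))  # wheeze
print(knn_predict(train, (0.12, 0.22)))  # normal
```

The same interface — labeled feature vectors in, a predicted class out — underlies the more elaborate SVM, ensemble, and neural network classifiers in the table.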
3.5. Performance Metrics
The evaluation of the ability of a model to adequately classify lung sounds into the appropriate category yields a series of metrics. It is of utmost importance to remember that the performance of a model depends not only on the ML/DL classifier but also on all the steps that precede it (audio recording, preprocessing, feature selection, and model training). These metrics are helpful when comparing different models that use the same data sources but, understandably, are not a reliable way to compare models across different databases. Some databases, like the ICBHI 2017 Challenge [27], suggest that researchers use specific performance metrics to evaluate their models; nonetheless, for this review, the evaluated performance metrics were accuracy and/or sensitivity and specificity. The accuracy for classification into abnormal sound categories ranged between 49.43% [102] and 100.00% [18]. Meanwhile, the sensitivity ranged between 17.80% [90] and 100.00% [18,65], and the specificity between 59.69% [113] and 100.00% [38,64]. The lowest and highest accuracies for models that classified sounds into disease classes were 69.40% [99] and 99.62% [69]. For the same studies, the sensitivity ranged between 28.00% [77] and 100.00% [63], whereas the specificity ranged between 81.00% [77] and 100.00% [88]. Remarkably, the reported metrics were highly heterogeneous between studies, limiting direct comparisons.
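For reference, the three metrics evaluated in this review derive directly from the counts of a binary confusion matrix; the following sketch (with hypothetical counts, not data from any included study) shows the calculation:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics, in percent, from the
    confusion-matrix counts of a binary normal/abnormal classifier."""
    accuracy = 100 * (tp + tn) / (tp + fp + tn + fn)
    sensitivity = 100 * tp / (tp + fn)   # recall on abnormal sounds
    specificity = 100 * tn / (tn + fp)   # recall on normal sounds
    return accuracy, sensitivity, specificity

# Hypothetical counts: 80 abnormal correctly flagged, 5 false alarms,
# 90 normal correctly cleared, 20 abnormal missed
acc, sens, spec = binary_metrics(tp=80, fp=5, tn=90, fn=20)
print(f"accuracy={acc:.1f}% sensitivity={sens:.1f}% specificity={spec:.1f}%")
```

Note that a model can post high accuracy while missing many abnormal sounds if the classes are imbalanced, which is one reason sensitivity and specificity are reported alongside accuracy.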
3.6. Quality Assessment
Given the lack of a validated tool for the quality assessment of diagnostic studies that use artificial intelligence, we adapted the QUADAS-2 tool to evaluate the risk of bias and applicability concerns. Using this tool, all the studies were classified as having an overall high ROB, with most concerns over patient selection and the reference standards. The high ROB in these domains relates directly to the use of public databases to obtain audio files. These sources often do not follow a specific sound recording protocol, use multiple devices, and rely on an individual’s interpretation to assign labels to each recording. In addition, the characteristics of each database are rarely available, further hindering the quality assessment process. None of the included studies had applicability concerns in the index test domain, while almost all the studies had serious or unclear concerns in the patient selection and reference standard domains. These concerns arose from the poor description of the patient population in the included papers and/or data sources, which creates a risk of, for example, including pediatric patients. Also, the use of expert annotation as a reference standard limits the reliability of the labels in each study, raising concerns in this domain. Tables S2 and S3 in the Supplementary Files contain the individual assessment results for the risk of bias and applicability concerns, respectively. Figure 4 summarizes the quality assessment findings.
4. Discussion
Our systematic review provides a comprehensive update on the use of contemporary ML and DL models for automatic lung sound classification. To the best of our knowledge, this work offers a much-needed update that highlights the advances in automatic lung sound classification over the last six years, focusing on the introduction of large public databases that have encouraged further research in the field. The emergence of large public data sources in recent years has allowed an increasing number of studies to work from the same lung sound audio samples, ideally facilitating comparisons between models. Nonetheless, a detailed description of the databases and studies is necessary to identify the emerging issues in the field and the progress made so far. Supplementary Table S1 highlights the models identified in our systematic review with the best accuracy, sensitivity, and specificity performance metrics.
4.1. Clinical and Scientific Relevance
Machine learning (ML) and deep learning (DL) techniques are of increasing importance and great functionality in the identification and classification of normal and abnormal lung sounds [121]. Historically, the bedside clinician has been the key decision-maker in identifying and classifying normal and abnormal lung sounds, such as vesicular lung sounds, crackles, and wheezes, and this information carries varying degrees of diagnostic certainty depending on the clinician’s experience and skill set. The inability to identify and accurately classify lung sounds can significantly delay diagnosis and downstream management [122]. Güler et al. described initial work utilizing a neural network-genetic algorithm approach to advance the field of lung sound classification [123]. They employed a multilayer perceptron neural network with a backpropagation training algorithm to predict normal or abnormal lung sounds (such as crackles or wheezes), ultimately yielding a model with promising performance, with correct classification rates of up to 93% for all lung sounds. Early studies like these served as the groundwork for later authors who sought to improve the methodology and capabilities of their models.
Traditional lung sound analysis depends heavily on the expertise of the bedside clinician, which introduces significant subjectivity. The results are prone to interobserver variability, and even the same observer may classify the same lung sounds differently on different occasions. ML and DL algorithms could minimize that variability and provide objectivity, offering several advantages. In addition, ML and DL methods can extract relevant features from lung sound recordings, capturing characteristics that are not picked up by conventional pulmonary auscultation [124,125], such as frequency content, temporal patterns, and spectral properties. These additional characteristics could further enrich a training dataset’s diversity and variability, enabling accurate classification and identification in future studies.
With the technical advances in computing, machine learning and deep learning models such as support vector machines (SVM), Random Forests, and neural networks have been utilized at an increasing pace to label and classify lung sound data [126]. The increasing fidelity and performance of the resulting models could provide accurate diagnostic and predictive enrichment for specific disease states, such as pneumonia, pleural effusions, consolidations, and airway diseases (rhonchi and wheezing), among others.
Deep learning models such as neural networks (NNs) could provide the benefit of real-time monitoring of lung sounds. If developed and validated clinically, these models could be used for real-time lung sound monitoring in acute care settings (such as hospitals) and in remote monitoring environments such as nursing homes, rehabilitation facilities, or even at home [119,127]. Real-time analysis could allow for the early detection of disease states, enabling timely intervention and an overall improvement in healthcare delivery. Anticipated challenges include difficulty in noise reduction, which degrades the signal-to-noise ratio and dilutes the diagnostic information present in the audio signals. With the advent of precision and personalized medicine, these machine learning and deep learning models can be trained on high-quality datasets with high signal-to-noise ratios, allowing the design of personalized models that consider individual variations in lung sounds, accounting for age, sex, body habitus, disease progression, ethnicity, and other factors contributing to patient-to-patient variability [128,129,130].
4.2. Opportunities and Barriers
Utilizing machine learning and deep learning techniques in this realm has several strengths and advantages. ML and DL algorithms enable the automated analysis of lung sounds, reducing reliance on subjective human interpretation. This automation improves efficiency and reduces interobserver variability. ML and DL models also excel at recognizing complex patterns in data that are unknown or difficult for humans to recognize; this also holds true for lung sound identification [131,132]. As highlighted above, one of the biggest advantages will be the real-time monitoring of patients’ lung sounds remotely, both in the hospital and in the community (at home). This will facilitate the early detection of physiological abnormalities and provide an actionable point for timely intervention. Adaptability and self-learning from new data will allow for continuous improvement in performance and fidelity over time. Despite these advantages, ML and DL models have inherent weaknesses. The availability of high-quality, labeled lung sound datasets can be a challenge, as highlighted by many manuscripts included in our systematic review. Heterogeneity in the database creation process inevitably leads to a scenario where comparisons between models are not possible. Stakeholder engagement for creating well-annotated datasets with representative patient populations can be time-consuming and expensive. Databases lacking diversity could limit generalizability and potentially increase healthcare disparities in diagnostics and healthcare delivery. Physiologically, lung sounds can vary significantly with patient factors such as body habitus, body position, patient movement, disease timeline, and recording conditions. If not accurately annotated, this variability in lung sound recordings could hinder consistent and accurate classification.
4.3. Strengths and Limitations
The strengths of this review include the extensive literature search, as well as the individual evaluations and detailed descriptions of the data sources. Furthermore, we developed a new approach to the quality assessment of the included articles, given the lack of validated assessment tools for diagnostic accuracy studies that use artificial intelligence. Our study was limited by our inability to perform a meta-analysis, given the heterogeneity in performance reporting and data sources. Similarly, we could not access a large portion of the older databases, preventing us from evaluating and describing their characteristics. Notably, our review focused on studies in English that used public databases as their source of audio samples, excluding those published in other languages and those that opted for a different approach, such as collecting their own sounds. Although omitted from our work, these studies may provide valuable contributions to the development of the field.
4.4. Future Work
As noted, while machine learning and deep learning techniques have so far offered valuable strengths in the accurate identification and classification of lung sounds, improved efficiency, and the possibility of real-time remote monitoring, they also face certain limitations. To harness the full potential of these techniques in healthcare, we need to overcome the challenges surrounding data availability, data security, accurate labeling and interpretation, and domain expertise. As evidenced by the results of this review, public databases are an essential component of progress in the field of automatic lung sound classification, but researchers interested in developing their own databases should aim to create a standardized approach to the recording, storage, and sharing processes, which will ultimately lead to more reliable comparisons between models. Utilizing ML and DL techniques for lung sound analysis could raise ethical concerns regarding patient privacy, data security, and other regulatory oversight needs [133]. Therefore, these concerns should be clearly addressed when developing public databases.
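By way of illustration, a standardized sharing process could be anchored in a fixed metadata record stored alongside every recording. The field names and values below are hypothetical, not an existing standard:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class LungSoundRecord:
    """Hypothetical minimal metadata record for a shared lung sound file."""
    recording_id: str
    sample_rate_hz: int
    duration_s: float
    chest_location: str        # e.g., "posterior-left-lower"
    device: str                # recording stethoscope/microphone model
    labels: list = field(default_factory=list)  # e.g., ["crackles"]
    annotator: str = ""        # who labeled it, for auditability

rec = LungSoundRecord(
    recording_id="site1-0001",
    sample_rate_hz=4000,
    duration_s=20.0,
    chest_location="posterior-left-lower",
    device="digital-stethoscope",
    labels=["wheezes"],
    annotator="pulmonologist-A",
)
serialized = json.dumps(asdict(rec))  # ready to store or share
```

Agreeing on even a small schema like this across research groups would make labels, recording conditions, and provenance directly comparable between databases.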
5. Conclusions
In conclusion, we see a rising trend of ML and DL techniques demonstrating promise in the identification and classification of various lung sound characteristics with increasing accuracy. Automating the analysis process and enriching the currently available public databases could offer a valuable source of objective and accurate diagnostic utility. With further advancements in computational power, these techniques have the potential to provide personalized precision medicine and accurate assessments of respiratory conditions, aiding in diagnosis, monitoring, and treatment.
Conceptualization, V.H., D.D. and B.W.P.; methodology, H.-Y.W., V.H. and S.H.; search strategy development and resources, D.J.G.; data extraction and curation, J.P.G.-M., H.-Y.W., S.H., A.T., Y.P., K.L., D.J.G., I.N.A., I.K. and S.Q.; writing—original draft preparation, J.P.G.-M., H.-Y.W., S.H., A.T., Y.P., K.L., I.N.A., I.K. and S.Q.; writing—review and editing, D.D., V.H., B.W.P. and A.L.; and supervision, V.H. and A.L. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
Not applicable.
The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 2. Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram.
Figure 4. Quality assessment summary plots for the risk of bias (top) and applicability concerns (bottom). Presented as the number of articles with high, unclear, or low risk/concerns across each domain of the modified QUADAS-2 tool. (Green: low risk of bias; red: high risk of bias; yellow: unclear risk of bias).
Population, Intervention, Comparator, Outcome, and Study Design (PICOS) eligibility criteria for the systematic review.
Parameter | Inclusion Criteria | Exclusion Criteria |
---|---|---|
Population | | |
Intervention | | |
Comparator | | |
Outcomes | | |
Study Designs | | |
Abnormal lung sound sources mentioned in the included articles. Some databases are no longer accessible, or their characteristics are not described. (Contents are sorted by availability (last column) and country of origin (second column)).
Database or Author Name | Country | Number of Participants | Abnormal Lung Sounds Labeled | Pathologies Labeled | Availability 1 | Ref. |
---|---|---|---|---|---|---|
R.A.L.E. Lung Sounds 3.2 | Canada | 70 (-); 17 | Crackles, Wheezes, Squawk, Stridor, Rhonchi | Asthma, COPD, Bronchiolitis, Laryngeal web, Bronchogenic carcinoma, Lung fibrosis, Cystic fibrosis. | Available online | [ |
ICBHI 2017 Challenge Database | Greece, | 126 (46/79); 26 | Crackles, Wheezes, Crackles + Wheezes | Asthma, Bronchiectasis, Bronchiolitis, COPD, Pneumonia, LRTI, URTI | Available online | [
KAUH database | Jordan | 120 (43/69); 35 | Crackles, Wheezes, Crepitations, Bronchial sounds, Crackles + Wheezes, Crackles + Bronchial | Asthma, Pneumonia, COPD, Bronchitis, Heart failure, Lung fibrosis, Pleural effusion | Available online | [ |
RespiratoryDatabase@TR | Turkey | 77 (64/13); 30 | Crackles, Wheezes | Asthma, COPD | Available online | [ |
Thinklabs Lung Sounds Library | United States | - | Crackles, Wheezes, Pleural rub, Rhonchi, Stridor | Asthma, Bronchiolitis, COPD, Laryngomalacia, Pulmonary edema | Available online | [ |
East Tennessee State University Pulmonary Breath Sounds | United States | - | Crackles, Pleural rub, Stridor, Wheezing, Rhonchus | - | Available online | [ |
ASTRA database | France | - | - | - | CD-ROM | [ |
Auscultation Skills: Breath & Heart Sounds | United States | - | - | - | CD-ROM | [ |
Fundamentals of Lung and Heart Sounds | United States | - | - | - | CD-ROM | [ |
Heart and Lung Sounds Reference Library, Wrigley | United States | - | Bronchial, Bronchovesicular, Rhonchi, Pneumonia, Wheezes, Bronchophony, Crackles, Stridor, | - | CD-ROM | [ |
Understanding Lung Sounds, Lehrer | United States | - | Crackles, Wheezes | - | CD-ROM | [ |
Bahoura 1999 | France | - | - | - | Undefined | [ |
Hsiao 2020 | Taiwan | 22 (12/10); - | Crackles, Wheezes | - | Undefined | [ |
Bogazici University Lung Acoustics Laboratory | Turkey | - | - | Bronchiectasis, Interstitial lung disease | Undefined | - |
CORA database | Ukraine | - | - | Bronchitis, COPD | Undefined | [ |
Stethographics Lung Sound Samples 2 | United States | - | - | - | Undefined | - |
3M Littmann Lung Sounds Library | United States | - | - | - | Undefined | - |
Mediscuss Respiratory Sounds 2 | - | - | - | - | Undefined | - |
Abbreviations: M: Males; F: Females; HC: Healthy Controls; COPD: Chronic Obstructive Pulmonary Disease; LRTI: Lower Respiratory Tract Infection; URTI: Upper Respiratory Tract Infection. ETSU: East Tennessee State University; ICBHI: International Conference on Biomedical and Health Informatics; KAUH: King Abdullah University Hospital; R.A.L.E: Respiratory Acoustics Laboratory Environment. 1 Availability at the time of submission. 2 This database was mentioned in one of the included articles but could not be found in this review.
Supplementary Materials
The following supporting information can be downloaded at:
References
1. Labaki, W.W.; Han, M.K. Chronic respiratory diseases: A global view. Lancet Respir. Med.; 2020; 8, pp. 531-533. [DOI: https://dx.doi.org/10.1016/S2213-2600(20)30157-0] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32526184]
2. Wipf, J.E.; Lipsky, B.A.; Hirschmann, J.V.; Boyko, E.J.; Takasugi, J.; Peugeot, R.L.; Davis, C.L. Diagnosing pneumonia by physical examination: Relevant or relic?. Arch. Intern. Med.; 1999; 159, pp. 1082-1087. [DOI: https://dx.doi.org/10.1001/archinte.159.10.1082]
3. Brooks, D.; Thomas, J. Interrater reliability of auscultation of breath sounds among physical therapists. Phys. Ther.; 1995; 75, pp. 1082-1088. [DOI: https://dx.doi.org/10.1093/ptj/75.12.1082] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/7501711]
4. Cardinale, L.; Volpicelli, G.; Lamorte, A.; Martino, J.; Andrea, V. Revisiting signs, strengths and weaknesses of Standard Chest Radiography in patients of Acute Dyspnea in the Emergency Department. J. Thorac. Dis.; 2012; 4, pp. 398-407. [DOI: https://dx.doi.org/10.3978/j.issn.2072-1439.2012.05.05] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22934143]
5. Hopkins, R.L. Differential auscultation of the acutely ill patient. Ann. Emerg. Med.; 1985; 14, pp. 589-590. [DOI: https://dx.doi.org/10.1016/S0196-0644(85)80787-3]
6. Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A guide to deep learning in healthcare. Nat. Med.; 2019; 25, pp. 24-29. [DOI: https://dx.doi.org/10.1038/s41591-018-0316-z]
7. Myszczynska, M.A.; Ojamies, P.N.; Lacoste, A.M.; Neil, D.; Saffari, A.; Mead, R.; Hautbergue, G.M.; Holbrook, J.D.; Ferraiuolo, L. Applications of machine learning to diagnosis and treatment of neurodegenerative diseases. Nat. Rev. Neurol.; 2020; 16, pp. 440-456. [DOI: https://dx.doi.org/10.1038/s41582-020-0377-8]
8. Hayashi, Y. The right direction needed to develop white-box deep learning in radiology, pathology, and ophthalmology: A short review. Front. Robot. AI; 2019; 6, 24. [DOI: https://dx.doi.org/10.3389/frobt.2019.00024]
9. Kim, M.; Yun, J.; Cho, Y.; Shin, K.; Jang, R.; Bae, H.-j.; Kim, N. Deep learning in medical imaging. Neurospine; 2019; 16, 657. [DOI: https://dx.doi.org/10.14245/ns.1938396.198]
10. Chen, W.; Sun, Q.; Chen, X.; Xie, G.; Wu, H.; Xu, C. Deep learning methods for heart sounds classification: A systematic review. Entropy; 2021; 23, 667. [DOI: https://dx.doi.org/10.3390/e23060667]
11. Palaniappan, R.; Sundaraj, K.; Ahamed, N.U. Machine learning in lung sound analysis: A systematic review. Biocybern. Biomed. Eng.; 2013; 33, pp. 129-135. [DOI: https://dx.doi.org/10.1016/j.bbe.2013.07.001]
12. Reichert, S.; Gass, R.; Brandt, C.; Andrès, E. Analysis of respiratory sounds: State of the art. Clin. Med. Circ. Respirat. Pulm. Med.; 2008; 2, pp. 45-58. [DOI: https://dx.doi.org/10.4137/CCRPM.S530] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21157521]
13. Kandaswamy, A.; Kumar, C.S.; Ramanathan, R.P.; Jayaraman, S.; Malmurugan, N. Neural classification of lung sounds using wavelet coefficients. Comput. Biol. Med.; 2004; 34, pp. 523-537. [DOI: https://dx.doi.org/10.1016/S0010-4825(03)00092-1] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/15265722]
14. Palaniappan, R.; Sundaraj, K.; Sundaraj, S. A comparative study of the SVM and K-nn machine learning algorithms for the diagnosis of respiratory pathologies using pulmonary acoustic signals. BMC Bioinform.; 2014; 15, 223. [DOI: https://dx.doi.org/10.1186/1471-2105-15-223]
15. Richeldi, L.; Cottin, V.; Würtemberger, G.; Kreuter, M.; Calvello, M.; Sgalla, G. Digital Lung Auscultation: Will Early Diagnosis of Fibrotic Interstitial Lung Disease Become a Reality?. Am. J. Respir. Crit. Care Med.; 2019; 200, pp. 261-263. [DOI: https://dx.doi.org/10.1164/rccm.201902-0306LE]
16. Kraman, S.S.; Wodicka, G.R.; Pressler, G.A.; Pasterkamp, H. Comparison of lung sound transducers using a bioacoustic transducer testing system. J. Appl. Physiol.; 2006; 101, pp. 469-476. [DOI: https://dx.doi.org/10.1152/japplphysiol.00273.2006]
17. Gupta, P.; Wen, H.; Di Francesco, L.; Ayazi, F. Detection of pathological mechano-acoustic signatures using precision accelerometer contact microphones in patients with pulmonary disorders. Sci. Rep.; 2021; 11, 13427. [DOI: https://dx.doi.org/10.1038/s41598-021-92666-2]
18. Zulfiqar, R.; Majeed, F.; Irfan, R.; Rauf, H.T.; Benkhelifa, E.; Belkacem, A.N. Abnormal respiratory sounds classification using deep CNN through artificial noise addition. Front. Med.; 2021; 8, 714811. [DOI: https://dx.doi.org/10.3389/fmed.2021.714811]
19. Salman, A.H.; Ahmadi, N.; Mengko, R.; Langi, A.Z.; Mengko, T.L. Performance comparison of denoising methods for heart sound signal. Proceedings of the 2015 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS); Bali, Indonesia, 9–12 November 2015; pp. 435-440.
20. Li, S.; Li, F.; Tang, S.; Xiong, W. A review of computer-aided heart sound detection techniques. BioMed Res. Int.; 2020; 2020, 5846191. [DOI: https://dx.doi.org/10.1155/2020/5846191]
21. Barclay, V.; Bonner, R.; Hamilton, I. Application of wavelet transforms to experimental spectra: Smoothing, denoising, and data set compression. Anal. Chem.; 1997; 69, pp. 78-90. [DOI: https://dx.doi.org/10.1021/ac960638m]
22. Mondal, A.; Banerjee, P.; Tang, H. A novel feature extraction technique for pulmonary sound analysis based on EMD. Comput. Methods Programs Biomed.; 2018; 159, pp. 199-209. [DOI: https://dx.doi.org/10.1016/j.cmpb.2018.03.016] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29650313]
23. Krishnan, S.; Athavale, Y. Trends in biomedical signal feature extraction. Biomed. Signal Process. Control; 2018; 43, pp. 41-63. [DOI: https://dx.doi.org/10.1016/j.bspc.2018.02.008]
24. Maleki, F.; Muthukrishnan, N.; Ovens, K.; Reinhold, C.; Forghani, R. Machine learning algorithm validation: From essentials to advanced applications and implications for regulatory certification and deployment. Neuroimaging Clin.; 2020; 30, pp. 433-445. [DOI: https://dx.doi.org/10.1016/j.nic.2020.08.004]
25. Ramezan, C.A.; Warner, T.A.; Maxwell, A.E. Evaluation of sampling and cross-validation tuning strategies for regional-scale machine learning classification. Remote Sens.; 2019; 11, 185. [DOI: https://dx.doi.org/10.3390/rs11020185]
26. Barbosa, L.C.; Moreira, A.H.; Carvalho, V.; Vilaça, J.L.; Morais, P. Biosignal Databases for Training of Artificial Intelligent Systems. Proceedings of the 9th International Conference on Bioinformatics Research and Applications; Berlin, Germany, 18–20 September 2022; pp. 74-81.
27. Rocha, B.M.; Filos, D.; Mendes, L.; Vogiatzis, I.; Perantoni, E.; Kaimakamis, E.; Maglaveras, N. A respiratory sound database for the development of automated classification. Precision Medicine Powered by Phealth and Connected Health; Springer: Berlin/Heidelberg, Germany, 2018; pp. 33-37. [DOI: https://dx.doi.org/10.1007/978-981-10-7419-6_6]
28. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. J. Clin. Epidemiol.; 2009; 62, pp. 1006-1012. [DOI: https://dx.doi.org/10.1016/j.jclinepi.2009.06.005] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/19631508]
29. Veritas Health Innovation. Covidence Systematic Review Software. Available online: www.covidence.org (accessed on 1 August 2023).
30. Whiting, P.; Rutjes, A.W.; Reitsma, J.B.; Bossuyt, P.M.; Kleijnen, J. The development of QUADAS: A tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med. Res. Methodol.; 2003; 3, 25. [DOI: https://dx.doi.org/10.1186/1471-2288-3-25]
31. Jayakumar, S.; Sounderajah, V.; Normahani, P.; Harling, L.; Markar, S.R.; Ashrafian, H.; Darzi, A. Quality assessment standards in artificial intelligence diagnostic accuracy systematic reviews: A meta-research study. NPJ Digit. Med.; 2022; 5, 11. [DOI: https://dx.doi.org/10.1038/s41746-021-00544-y]
32. Whiting, P.F.; Rutjes, A.W.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.; Sterne, J.A.; Bossuyt, P.M. QUADAS-2 Group. QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Ann. Intern. Med.; 2011; 155, pp. 529-536. [DOI: https://dx.doi.org/10.7326/0003-4819-155-8-201110180-00009]
33. R.A.L.E. Lung Sounds 3.2. Available online: http://www.rale.ca/LungSounds.htm. (accessed on 1 August 2023).
34. Lu, X.; Bahoura, M. An integrated automated system for crackles extraction and classification. Biomed. Signal Process. Control; 2008; 3, pp. 244-254. [DOI: https://dx.doi.org/10.1016/j.bspc.2008.04.003]
35. Bahoura, M. Pattern recognition methods applied to respiratory sounds classification into normal and wheeze classes. Comput. Biol. Med.; 2009; 39, pp. 824-843. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2009.06.011]
36. Tocchetto, M.A.; Bazanella, A.S.; Guimaraes, L.; Fragoso, J.; Parraga, A. An embedded classifier of lung sounds based on the wavelet packet transform and ANN. IFAC Proc. Vol.; 2014; 47, pp. 2975-2980. [DOI: https://dx.doi.org/10.3182/20140824-6-ZA-1003.01638]
37. Datta, S.; Choudhury, A.D.; Deshpande, P.; Bhattacharya, S.; Pal, A. Automated lung sound analysis for detecting pulmonary abnormalities. Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Embc); Jeju Island, Republic of Korea, 11–15 July 2017; pp. 4594-4598.
38. Oweis, R.; Abdulhay, E.; Khayal, A.; Awad, A. An alternative respiratory sounds classification system utilizing artificial neural networks. Biomed. J.; 2015; 38, pp. 153-161. [DOI: https://dx.doi.org/10.4103/2319-4170.137773]
39. Naves, R.; Barbosa, B.H.; Ferreira, D.D. Classification of lung sounds using higher-order statistics: A divide-and-conquer approach. Comput. Methods Programs Biomed.; 2016; 129, pp. 12-20. [DOI: https://dx.doi.org/10.1016/j.cmpb.2016.02.013]
40. Racineux, J. L’auscultation à L’écoute du Poumon ASTRA; CD-Phonopneumogrammes: Paris, France, 1994.
41. Coviello, J.S. Auscultation Skills: Breath & Heart Sounds; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2013.
42. Wilkins, R.; Hodgkin, J.; Lopez, B. Fundamentals of Lung and Heart Sounds, 3/e (Book and CD-ROM); CV Mosby: Maryland Heights, MO, USA, 2004.
43. Wrigley, D. Heart and Lung Sounds Reference Library; PESI HealthCare: Eau Claire, WI, USA, 2011.
44. Lehrer, S. Understanding Lung Sounds; Saunders: Philadelphia, PA, USA, 2018.
45. Fraiwan, M.; Fraiwan, L.; Khassawneh, B.; Ibnian, A. A dataset of lung sounds recorded from the chest wall using an electronic stethoscope. Data Brief; 2021; 35, 106913. [DOI: https://dx.doi.org/10.1016/j.dib.2021.106913] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33732827]
46. Altan, G.; Kutlu, Y. RespiratoryDatabase@ TR (COPD Severity Analysis). 2020; Available online: https://data.mendeley.com/datasets/p9z4h98s6j/1 (accessed on 1 August 2023).
47. Thinklabs Medical LLC. Thinklabs One Lung Sounds Library. Available online: https://www.thinklabs.com/sound-library (accessed on 1 August 2023).
48. East Tennessee State University. Pulmonary Breath Sounds. Available online: https://faculty.etsu.edu/arnall/www/public_html/heartlung/breathsounds/contents.html (accessed on 1 August 2023).
49. Bahoura, M. Analyse des Signaux Acoustiques Respiratoires: Contribution à la Detection Automatique des Sibilants par Paquets D’ondelettes. Ph.D. Thesis; Université de Rouen: Mont-Saint-Aignan, France, 1999.
50. Hsiao, C.-H.; Lin, T.-W.; Lin, C.-W.; Hsu, F.-S.; Lin, F.Y.-S.; Chen, C.-W.; Chung, C.-M. Breathing sound segmentation and detection using transfer learning techniques on an attention-based encoder-decoder architecture. Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Montreal, QC, Canada, 20–24 July 2020; pp. 754-759.
51. Grinchenko, A.; Makarenkov, V.; Makarenkova, A. Kompjuternaya auskultaciya-novij metod objektivizacii harakterictik zvykov dihaniya [Computer auscultation is a new method of objectifying the lung sounds characteristics]. Klin. Inform. I Telemeditsina; 2010; 6, pp. 31-36.
52. Cunningham, P.; Cord, M.; Delany, S.J. Supervised learning. Machine Learning Techniques for Multimedia: Case Studies on Organization and Retrieval; Springer: Berlin/Heidelberg, Germany, 2008; pp. 21-49.
53. Mutasa, S.; Sun, S.; Ha, R. Understanding artificial intelligence based radiology studies: What is overfitting?. Clin. Imaging; 2020; 65, pp. 96-99. [DOI: https://dx.doi.org/10.1016/j.clinimag.2020.04.025]
54. Pal, S.K.; Mitra, S. Multilayer perceptron, fuzzy sets, classification. IEEE Trans. Neural Netw.; 1992; 3, pp. 683-697. [DOI: https://dx.doi.org/10.1109/72.159058]
55. Bannick, M.S.; McGaughey, M.; Flaxman, A.D. Ensemble modelling in descriptive epidemiology: Burden of disease estimation. Int. J. Epidemiol.; 2020; 49, pp. 2065-2073. [DOI: https://dx.doi.org/10.1093/ije/dyz223]
56. Dayhoff, J.E.; DeLeo, J.M. Artificial neural networks: Opening the black box. Cancer Interdiscip. Int. J. Am. Cancer Soc.; 2001; 91, pp. 1615-1635. [DOI: https://dx.doi.org/10.1002/1097-0142(20010415)91:8+<1615::AID-CNCR1175>3.0.CO;2-L]
57. Acharya, J.; Basu, A. Deep Neural Network for Respiratory Sound Classification in Wearable Devices Enabled by Patient Specific Model Tuning. IEEE Trans. Biomed. Circuits Syst.; 2020; 14, pp. 535-544. [DOI: https://dx.doi.org/10.1109/TBCAS.2020.2981172]
58. Alqudah, A.M.; Qazan, S.; Obeidat, Y.M. Deep learning models for detecting respiratory pathologies from raw lung auscultation sounds. Soft Comput.; 2022; 26, pp. 13405-13429. [DOI: https://dx.doi.org/10.1007/s00500-022-07499-6] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36186666]
59. Altan, G.; Kutlu, Y.; Allahverdi, N. Deep Learning on Computerized Analysis of Chronic Obstructive Pulmonary Disease. IEEE J. Biomed. Health. Inform.; 2019; [DOI: https://dx.doi.org/10.1109/JBHI.2019.2931395] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31369388]
60. Bahoura, M. FPGA implementation of an automatic wheezing detection system. Biomed. Signal Process. Control; 2018; 46, pp. 76-85. [DOI: https://dx.doi.org/10.1016/j.bspc.2018.05.017]
61. Bardou, D.; Zhang, K.; Ahmad, S.M. Lung sounds classification using convolutional neural networks. Artif. Intell. Med.; 2018; 88, pp. 58-69. [DOI: https://dx.doi.org/10.1016/j.artmed.2018.04.008] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29724435]
62. Basu, V.; Rana, S. Respiratory diseases recognition through respiratory sound with the help of deep neural network. Respiratory diseases recognition through respiratory sound with the help of deep neural network. Proceedings of the 2020 4th International Conference on Computational Intelligence and Networks (CINE); Kolkata, India, 27–29 February 2020; pp. 1-6.
63. Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. A Neural Network-Based Method for Respiratory Sound Analysis and Lung Disease Detection. Appl. Sci.; 2022; 12, 3877. [DOI: https://dx.doi.org/10.3390/app12083877]
64. Chen, H.; Yuan, X.; Pei, Z.; Li, M.; Li, J. Triple-Classification of Respiratory Sounds Using Optimized S-Transform and Deep Residual Networks. IEEE Access; 2019; 7, pp. 32845-32852. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2903859]
65. Chen, H.; Yuan, X.; Li, J.; Pei, Z.; Zheng, X. Automatic multi-level in-exhale segmentation and enhanced generalized S-transform for wheezing detection. Comput. Methods Programs Biomed.; 2019; 178, pp. 163-173. [DOI: https://dx.doi.org/10.1016/j.cmpb.2019.06.024] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31416545]
66. Demir, F.; Sengur, A.; Bajaj, V. Convolutional neural networks based efficient approach for classification of lung diseases. Health Inf. Sci. Syst.; 2020; 8, 4. [DOI: https://dx.doi.org/10.1007/s13755-019-0091-3]
67. Demir, F.; Ismael, A.M.; Sengur, A. Classification of Lung Sounds With CNN Model Using Parallel Pooling Structure. IEEE Access; 2020; 8, pp. 105376-105383. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3000111]
68. Perna, D. Convolutional neural networks learning from respiratory data. Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); Madrid, Spain, 3–6 December 2018; pp. 2109-2113.
69. Fraiwan, M.; Fraiwan, L.; Alkhodari, M.; Hassanin, O. Recognition of pulmonary diseases from lung sounds using convolutional neural networks and long short-term memory. J. Ambient. Intell. Humaniz. Comput.; 2022; 13, pp. 4759-4771. [DOI: https://dx.doi.org/10.1007/s12652-021-03184-y]
70. Gairola, S.; Tom, F.; Kwatra, N.; Jain, M. RespireNet: A Deep Neural Network for Accurately Detecting Abnormal Lung Sounds in Limited Data Setting. Ann. Int. Conf. IEEE Eng. Med. Biol. Soc.; 2021; 2021, pp. 527-530. [DOI: https://dx.doi.org/10.1109/EMBC46164.2021.9630091]
71. Garcia-Ordas, M.T.; Benitez-Andrades, J.A.; Garcia-Rodriguez, I.; Benavides, C.; Alaiz-Moreton, H. Detecting Respiratory Pathologies Using Convolutional Neural Networks and Variational Autoencoders for Unbalancing Data. Sensors; 2020; 20, 1214. [DOI: https://dx.doi.org/10.3390/s20041214] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32098446]
72. Hazra, R.; Majhi, S. Detecting respiratory diseases from recorded lung sounds by 2D CNN. Proceedings of the 2020 5th International Conference on Computing, Communication and Security (ICCCS); Patna, India, 14–16 October 2020; pp. 1-6.
73. Jung, S.Y.; Liao, C.H.; Wu, Y.S.; Yuan, S.M.; Sun, C.T. Efficiently Classifying Lung Sounds through Depthwise Separable CNN Models with Fused STFT and MFCC Features. Diagnostics; 2021; 11, 732. [DOI: https://dx.doi.org/10.3390/diagnostics11040732]
74. Kochetov, K.; Putin, E.; Balashov, M.; Filchenkov, A.; Shalyto, A. Noise Masking Recurrent Neural Network for Respiratory Sound Classification. Artificial Neural Networks and Machine Learning ICANN 2018; Lecture Notes in Computer Science Springer: New York, NY, USA, 2018; pp. 208-217.
75. Li, J.; Wang, C.; Chen, J.; Zhang, H.; Dai, Y.; Wang, L.; Wang, L.; Nandi, A.K. Explainable CNN With Fuzzy Tree Regularization for Respiratory Sound Analysis. IEEE Trans. Fuzzy Syst.; 2022; 30, pp. 1516-1528. [DOI: https://dx.doi.org/10.1109/TFUZZ.2022.3144448]
76. Li, J.; Yuan, J.; Wang, H.; Liu, S.; Guo, Q.; Ma, Y.; Li, Y.; Zhao, L.; Wang, G. LungAttn: Advanced lung sound classification using attention mechanism with dual TQWT and triple STFT spectrogram. Physiol. Meas.; 2021; 4, 105006. [DOI: https://dx.doi.org/10.1088/1361-6579/ac27b9] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34534977]
77. Minami, K.; Lu, H.; Kim, H.; Mabu, S.; Hirano, Y.; Kido, S. Automatic classification of large-scale respiratory sound dataset based on convolutional neural network. Proceedings of the 2019 19th International Conference on Control, Automation and Systems (ICCAS); Jeju, Republic of Korea, 15–18 October 2019; pp. 804-807.
78. Monaco, A.; Amoroso, N.; Bellantuono, L.; Pantaleo, E.; Tangaro, S.; Bellotti, R. Multi-Time-Scale Features for Accurate Respiratory Sound Classification. Appl. Sci.; 2020; 10, 8606. [DOI: https://dx.doi.org/10.3390/app10238606]
79. Mukherjee, H.; Sreerama, P.; Dhar, A.; Obaidullah, S.M.; Roy, K.; Mahmud, M.; Santosh, K.C. Automatic Lung Health Screening Using Respiratory Sounds. J. Med. Syst.; 2021; 45, 19. [DOI: https://dx.doi.org/10.1007/s10916-020-01681-9]
80. Ngo, D.; Pham, L.; Nguyen, A.; Phan, B.; Tran, K.; Nguyen, T. Deep Learning Framework Applied For Predicting Anomaly of Respiratory Sounds. Proceedings of the 2021 International Symposium on Electrical and Electronics Engineering (ISEE); Ho Chi Minh, Vietnam, 15–16 April 2021; pp. 42-47.
81. Nguyen, T.; Pernkopf, F. Lung sound classification using snapshot ensemble of convolutional neural networks. Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); Montreal, QC, Canada, 20–24 July 2020; pp. 760-763.
82. Paraschiv, E.-A.; Rotaru, C.-M. Machine learning approaches based on wearable devices for respiratory diseases diagnosis. Proceedings of the 2020 International Conference on e-Health and Bioengineering (EHB); Iasi, Romania, 29–30 October 2020; pp. 1-4.
83. Petmezas, G.; Cheimariotis, G.A.; Stefanopoulos, L.; Rocha, B.; Paiva, R.P.; Katsaggelos, A.K.; Maglaveras, N. Automated Lung Sound Classification Using a Hybrid CNN-LSTM Network and Focal Loss Function. Sensors; 2022; 22, 1232. [DOI: https://dx.doi.org/10.3390/s22031232] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35161977]
84. Pham, L.; Phan, H.; Palaniappan, R.; Mertins, A.; McLoughlin, I. CNN-MoE Based Framework for Classification of Respiratory Anomalies and Lung Disease Detection. IEEE J. Biomed. Health Inf.; 2021; 25, pp. 2938-2947. [DOI: https://dx.doi.org/10.1109/JBHI.2021.3064237]
85. Pham, L.; Phan, H.; Schindler, A.; King, R.; Mertins, A.; McLoughlin, I. Inception-Based Network and Multi-Spectrogram Ensemble Applied To Predict Respiratory Anomalies and Lung Diseases. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc.; 2021; 2021, pp. 253-256. [DOI: https://dx.doi.org/10.1109/EMBC46164.2021.9629857]
86. Pham Thi Viet, H.; Nguyen Thi Ngoc, H.; Tran Anh, V.; Hoang Quang, H. Classification of lung sounds using scalogram representation of sound segments and convolutional neural network. J. Med. Eng. Technol.; 2022; 46, pp. 270-279. [DOI: https://dx.doi.org/10.1080/03091902.2022.2040624] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35212591]
87. Rocha, B.M.; Pessoa, D.; Marques, A.; Carvalho, P.; Paiva, R.P. Automatic Classification of Adventitious Respiratory Sounds: A (Un)Solved Problem?. Sensors; 2020; 21, 57. [DOI: https://dx.doi.org/10.3390/s21010057] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33374363]
88. Shuvo, S.B.; Ali, S.N.; Swapnil, S.I.; Hasan, T.; Bhuiyan, M.I.H. A Lightweight CNN Model for Detecting Respiratory Diseases From Lung Auscultation Sounds Using EMD-CWT-Based Hybrid Scalogram. IEEE J. Biomed. Health Inf.; 2021; 25, pp. 2595-2603. [DOI: https://dx.doi.org/10.1109/JBHI.2020.3048006]
89. Tariq, Z.; Shah, S.K.; Lee, Y. Lung disease classification using deep convolutional neural network. Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); San Diego, CA, USA, 18–21 November 2019; pp. 732-735.
90. Yang, Z.; Liu, S.; Song, M.; Parada-Cabaleiro, E.; Schuller, B.W. Adventitious Respiratory Classification Using Attentive Residual Neural Networks. Proceedings of the Interspeech 2020; Shanghai, China, 25–29 October 2020; pp. 2912-2916.
91. Ma, Y.; Xu, X.; Yu, Q.; Zhang, Y.; Li, Y.; Zhao, J.; Wang, G. Lungbrn: A smart digital stethoscope for detecting respiratory disease using bi-resnet deep learning algorithm. Proceedings of the 2019 IEEE Biomedical Circuits and Systems Conference (BioCAS); Nara, Japan, 17–19 October 2019; pp. 1-4.
92. Stitson, M.; Weston, J.; Gammerman, A.; Vovk, V.; Vapnik, V. Theory of support vector machines. Univ. Lond.; 1996; 117, pp. 188-191.
93. Boujelben, O.; Bahoura, M. Efficient FPGA-based architecture of an automatic wheeze detector using a combination of MFCC and SVM algorithms. J. Syst. Archit.; 2018; 88, pp. 54-64. [DOI: https://dx.doi.org/10.1016/j.sysarc.2018.05.010]
94. Sen, I.; Saraclar, M.; Kahya, Y. Computerized Diagnosis of Respiratory Disorders. Methods Inf. Med.; 2014; 53, pp. 291-295. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24993284]
95. Serbes, G.; Ulukaya, S.; Kahya, Y.P. An Automated Lung Sound Preprocessing and Classification System Based on Spectral Analysis Methods. Precision Medicine Powered by pHealth and Connected Health; Springer: New York, NY, USA, 2018; pp. 45-49. [DOI: https://dx.doi.org/10.1007/978-981-10-7419-6_8]
96. Stasiakiewicz, P.; Dobrowolski, A.P.; Targowski, T.; Gałązka-Świderek, N.; Sadura-Sieklucka, T.; Majka, K.; Skoczylas, A.; Lejkowski, W.; Olszewski, R. Automatic classification of normal and sick patients with crackles using wavelet packet decomposition and support vector machine. Biomed. Signal Process. Control; 2021; 67, 102521. [DOI: https://dx.doi.org/10.1016/j.bspc.2021.102521]
97. Romero, E.; Lepore, N.; Sosa, G.D.; Cruz-Roa, A.; González, F.A. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM. Proceedings of the 10th International Symposium on Medical Information Processing and Analysis; Cartagena, Colombia, 14–16 October 2014.
98. Tasar, B.; Yaman, O.; Tuncer, T. Accurate respiratory sound classification model based on piccolo pattern. Appl. Acoust.; 2022; 188, 108589. [DOI: https://dx.doi.org/10.1016/j.apacoust.2021.108589]
99. Vidhya, B.; Nikhil Madhav, M.; Suresh Kumar, M.; Kalanandini, S. AI Based Diagnosis of Pneumonia. Wirel. Pers. Commun.; 2022; 126, pp. 3677-3692. [DOI: https://dx.doi.org/10.1007/s11277-022-09885-7]
100. Zhang, Z. Introduction to machine learning: K-nearest neighbors. Ann. Transl. Med.; 2016; 4, 218. [DOI: https://dx.doi.org/10.21037/atm.2016.03.37]
101. Kingsford, C.; Salzberg, S.L. What are decision trees?. Nat. Biotechnol.; 2008; 26, pp. 1011-1013. [DOI: https://dx.doi.org/10.1038/nbt0908-1011] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18779814]
102. Chambres, G.; Hanna, P.; Desainte-Catherine, M. Automatic detection of patient with respiratory diseases using lung sound analysis. Proceedings of the 2018 International Conference on Content-Based Multimedia Indexing (CBMI); La Rochelle, France, 4–6 September 2018; pp. 1-6.
103. Kok, X.H.; Imtiaz, S.A.; Rodriguez-Villegas, E. A novel method for automatic identification of respiratory disease from acoustic recordings. Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Berlin, Germany, 23–27 July 2019; pp. 2589-2592.
104. Oletic, D.; Arsenali, B.; Bilas, V. Low-power wearable respiratory sound sensing. Sensors; 2014; 14, pp. 6535-6566. [DOI: https://dx.doi.org/10.3390/s140406535] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24721769]
105. Tharwat, A.; Gaber, T.; Ibrahim, A.; Hassanien, A.E. Linear discriminant analysis: A detailed tutorial. AI Commun.; 2017; 30, pp. 169-190. [DOI: https://dx.doi.org/10.3233/AIC-170729]
106. Naqvi, S.Z.H.; Choudhry, M.A. An Automated System for Classification of Chronic Obstructive Pulmonary Disease and Pneumonia Patients Using Lung Sound Analysis. Sensors; 2020; 20, 6512. [DOI: https://dx.doi.org/10.3390/s20226512] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33202613]
107. Porieva, H.; Ivanko, K.; Semkiv, C.; Vaityshyn, V. Investigation of lung sounds features for detection of bronchitis and COPD using machine learning methods. Radiotekhnika Radioaparatobuduvannia; 2021; 84, pp. 78-87. [DOI: https://dx.doi.org/10.20535/RADAP.2021.84.78-87]
108. Matsuki, K.; Kuperman, V.; Van Dyke, J.A. The Random Forests statistical technique: An examination of its value for the study of reading. Sci. Stud. Read.; 2016; 20, pp. 20-33. [DOI: https://dx.doi.org/10.1080/10888438.2015.1107073] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26770056]
109. Jaber, M.M.; Abd, S.K.; Shakeel, P.M.; Burhanuddin, M.A.; Mohammed, M.A.; Yussof, S. A telemedicine tool framework for lung sounds classification using ensemble classifier algorithms. Measurement; 2020; 162, 107883. [DOI: https://dx.doi.org/10.1016/j.measurement.2020.107883]
110. Aristophanous, M.; Penney, B.C.; Martel, M.K.; Pelizzari, C.A. A Gaussian mixture model for definition of lung tumor volumes in positron emission tomography. Med. Phys.; 2007; 34, pp. 4223-4235. [DOI: https://dx.doi.org/10.1118/1.2791035]
111. Ntalampiras, S. Collaborative framework for automatic classification of respiratory sounds. IET Signal Process.; 2020; 14, pp. 223-228. [DOI: https://dx.doi.org/10.1049/iet-spr.2019.0487]
112. Brown, J.C.; Smaragdis, P. Hidden Markov and Gaussian mixture models for automatic call classification. J. Acoust. Soc. Am.; 2009; 125, pp. EL221-EL224. [DOI: https://dx.doi.org/10.1121/1.3124659]
113. Jakovljević, N.; Lončar-Turukalo, T. Hidden Markov Model Based Respiratory Sound Classification. Precision Medicine Powered by pHealth and Connected Health; Springer: New York, NY, USA, 2018; pp. 39-43.
114. Ntalampiras, S.; Potamitis, I. Automatic acoustic identification of respiratory diseases. Evol. Syst.; 2020; 12, pp. 69-77. [DOI: https://dx.doi.org/10.1007/s12530-020-09339-0]
115. Oletic, D.; Bilas, V. Asthmatic Wheeze Detection From Compressively Sensed Respiratory Sound Spectra. IEEE J. Biomed. Health Inf.; 2018; 22, pp. 1406-1414. [DOI: https://dx.doi.org/10.1109/JBHI.2017.2781135]
116. Natekin, A.; Knoll, A. Gradient boosting machines, a tutorial. Front. Neurorobotics; 2013; 7, 21. [DOI: https://dx.doi.org/10.3389/fnbot.2013.00021]
117. Tripathy, R.K.; Dash, S.; Rath, A.; Panda, G.; Pachori, R.B. Automated Detection of Pulmonary Diseases From Lung Sound Signals Using Fixed-Boundary-Based Empirical Wavelet Transform. IEEE Sens. Lett.; 2022; 6, pp. 1-4. [DOI: https://dx.doi.org/10.1109/LSENS.2022.3167121]
118. Peng, C.-Y.J.; Lee, K.L.; Ingersoll, G.M. An introduction to logistic regression analysis and reporting. J. Educ. Res.; 2002; 96, pp. 3-14. [DOI: https://dx.doi.org/10.1080/00220670209598786]
119. Pramono, R.X.A.; Bowyer, S.; Rodriguez-Villegas, E. Automatic adventitious respiratory sound analysis: A systematic review. PLoS ONE; 2017; 12, e0177926. [DOI: https://dx.doi.org/10.1371/journal.pone.0177926]
120. Reddy, E.M.K.; Gurrala, A.; Hasitha, V.B.; Kumar, K.V.R. Introduction to Naive Bayes and a Review on Its Subtypes with Applications. Bayesian Reasoning and Gaussian Processes for Machine Learning Applications; CRC Press: Boca Raton, FL, USA, 2022; pp. 1-14. [DOI: https://dx.doi.org/10.1201/9781003164265]
121. Koning, C.; Lock, A. A systematic review and utilization study of digital stethoscopes for cardiopulmonary assessments. J. Med. Res. Innov.; 2021; 5, pp. 4-14. [DOI: https://dx.doi.org/10.25259/JMRI_2_2021]
122. Arts, L.; Lim, E.H.T.; van de Ven, P.M.; Heunks, L.; Tuinman, P.R. The diagnostic accuracy of lung auscultation in adult patients with acute pulmonary pathologies: A meta-analysis. Sci. Rep.; 2020; 10, 7347. [DOI: https://dx.doi.org/10.1038/s41598-020-64405-6] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32355210]
123. Güler, İ.; Polat, H.; Ergün, U. Combining neural network and genetic algorithm for prediction of lung sounds. J. Med. Syst.; 2005; 29, pp. 217-231. [DOI: https://dx.doi.org/10.1007/s10916-005-5182-9] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16050077]
124. Xia, T.; Han, J.; Mascolo, C. Exploring machine learning for audio-based respiratory condition screening: A concise review of databases, methods, and open issues. Exp. Biol. Med.; 2022; 247, pp. 2053-2061. [DOI: https://dx.doi.org/10.1177/15353702221115428] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35974706]
125. Heitmann, J.; Glangetas, A.; Doenz, J.; Dervaux, J.; Shama, D.M.; Garcia, D.H.; Benissa, M.R.; Cantais, A.; Perez, A.; Müller, D. DeepBreath—Automated detection of respiratory pathology from lung auscultation in 572 pediatric outpatients across 5 countries. NPJ Digit. Med.; 2023; 6, 104. [DOI: https://dx.doi.org/10.1038/s41746-023-00838-3]
126. Tran-Anh, D.; Vu, N.H.; Nguyen-Trong, K.; Pham, C. Multi-task learning neural networks for breath sound detection and classification in pervasive healthcare. Pervasive Mob. Comput.; 2022; 86, 101685. [DOI: https://dx.doi.org/10.1016/j.pmcj.2022.101685] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36061371]
127. Zhai, Q.; Han, X.; Han, Y.; Yi, J.; Wang, S.; Liu, T. A contactless on-bed radar system for human respiration monitoring. IEEE Trans. Instrum. Meas.; 2022; 71, pp. 1-10. [DOI: https://dx.doi.org/10.1109/TIM.2022.3205006]
128. Johnson, K.; Wei, W.; Weeraratne, D.; Frisse, M.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J. Precision medicine, AI, and the future of personalized health care. Clin. Transl. Sci.; 2021; 14, pp. 86-93. [DOI: https://dx.doi.org/10.1111/cts.12884]
129. Lal, A.; Pinevich, Y.; Gajic, O.; Herasevich, V.; Pickering, B. Artificial intelligence and computer simulation models in critical illness. World J. Crit. Care Med.; 2020; 9, 13. [DOI: https://dx.doi.org/10.5492/wjccm.v9.i2.13]
130. Lal, A.; Li, G.; Cubro, E.; Chalmers, S.; Li, H.; Herasevich, V.; Dong, Y.; Pickering, B.W.; Kilickaya, O.; Gajic, O. Development and verification of a digital twin patient model to predict specific treatment response during the first 24 hours of sepsis. Crit. Care Explor.; 2020; 2, e0249. [DOI: https://dx.doi.org/10.1097/CCE.0000000000000249] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33225302]
131. Davenport, T.; Kalakota, R. The potential for artificial intelligence in healthcare. Future Healthc. J.; 2019; 6, pp. 94-98. [DOI: https://dx.doi.org/10.7861/futurehosp.6-2-94]
132. Richens, J.G.; Lee, C.M.; Johri, S. Improving the accuracy of medical diagnosis with causal machine learning. Nat. Commun.; 2020; 11, 3923. [DOI: https://dx.doi.org/10.1038/s41467-020-17419-7]
133. Lal, A.; Dang, J.; Nabzdyk, C.; Gajic, O.; Herasevich, V. Regulatory oversight and ethical concerns surrounding software as medical device (SaMD) and digital twin technology in healthcare. Ann. Transl. Med.; 2022; 10, 950. [DOI: https://dx.doi.org/10.21037/atm-22-4203]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Pulmonary auscultation is essential for detecting abnormal lung sounds during physical assessments, but its reliability depends on the operator. Machine learning (ML) models offer an alternative by automatically classifying lung sounds. ML models require substantial data, and public databases aim to address this limitation. This systematic review compares the characteristics, diagnostic accuracy, concerns, and data sources of existing models in the literature. Papers published between 1990 and 2022, retrieved from five major databases, were assessed. Quality assessment was accomplished with a modified QUADAS-2 tool. The review encompassed 62 studies utilizing ML models and public-access databases for lung sound classification. Artificial neural networks (ANN) and support vector machines (SVM) were frequently employed as ML classifiers. Accuracy ranged from 49.43% to 100% for discriminating abnormal sound types and from 69.40% to 99.62% for disease class classification. Seventeen public databases were identified, with the ICBHI 2017 database being the most used (66%). The majority of studies exhibited a high risk of bias and concerns related to patient selection and reference standards. In summary, ML models can effectively classify abnormal lung sounds using publicly available data sources. Nevertheless, inconsistent reporting and methodologies limit progress in the field, and public databases should therefore adhere to standardized recording and labeling procedures.
1 Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
2 Department of Medicine, Division of Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, MN 55905, USA
3 Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
4 Division of Pulmonary Medicine, Mayo Clinic Health Systems, Essentia Health, Duluth, MN 55805, USA
5 Department of Anesthesiology and Perioperative Medicine, Division of Critical Care, Mayo Clinic, Rochester, MN 55905, USA
6 Mayo Clinic Libraries, Mayo Clinic, Rochester, MN 55905, USA;