1. Introduction
A clinical decision support system (CDSS) [1] supports clinicians' decision-making in diagnosing and treating diseases based on a patient's clinical information. With recent advances in big data analysis and artificial intelligence (AI), research on applying these techniques to CDSSs has gained considerable attention. Notably, AI-based CDSSs are highly valuable because of their effectiveness in supporting clinical diagnosis, prescription, prognosis, and treatment using AI models. Convolutional neural networks (CNNs), originally developed for image processing tasks, have been applied to a variety of medical imaging applications, including lesion detection, organ segmentation, and disease classification. By exploiting hierarchical feature extraction and spatial structure, CNN-based CDSSs can analyze medical images in depth, a significant performance improvement over CDSSs based on traditional machine learning.
Traditional CDSSs are categorized into knowledge-based CDSSs, in which rules are defined in advance, and non-knowledge-based CDSSs, in which they are not [2]. The principle of knowledge-based CDSSs is to make decisions using correlations and if–then rules over accumulated data. Knowledge-based CDSSs are beneficial in that the decision-making process is transparent because it follows predefined rules. However, they are limited in that knowledge and rules must be defined in advance for all cases.
Conversely, non-knowledge-based CDSSs support decision-making by learning patterns in past clinical information using machine learning or AI, without predefined rules. Non-knowledge-based CDSSs have been extensively studied in various medical areas [3,4], dealing with issues such as hypertension, heart failure, and lung disease. These CDSSs are expected to be a breakthrough methodology that can reduce the cost of knowledge construction and provide personalized treatment. However, the black-box problem [5], in which the process behind the results derived by AI models cannot be explained, makes them difficult to apply in healthcare, where transparency is essential. Therefore, to utilize non-knowledge-based CDSSs, it is necessary to introduce robust clinical validation and evaluation or to provide convincing evidence to support the results. In other words, knowledge-based CDSSs require rules to be generated by structuring literature-based, practice-based, or patient-driven evidence, which is costly for experts, whereas non-knowledge-based CDSSs cannot explain their models' results, making them difficult to use in practice in the healthcare domain.
Explainable AI (XAI) [6] was proposed to explain how AI-based systems arrive at their results. XAI is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. The main goals of XAI techniques are to (1) produce more explainable models while maintaining a high level of learning performance (e.g., prediction accuracy) and (2) enable humans to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. The differences among knowledge-based, non-knowledge-based, and XAI-based CDSSs are shown in Figure 1 [7]. XAI can explain the process and rationale behind an AI model's decision in a manner that the user can interpret. XAI technologies can be broadly categorized into scope-, model-, complexity-, and methodology-based techniques. XAI-based CDSSs combine the high performance of AI models with the explainability and interpretability required in the healthcare domain, where transparency must be ensured, enabling the development of real-world healthcare services. In particular, XAI-based CDSSs can explain the rationale and reasoning behind AI model results in real-world clinical processes. For example, in disease detection, when an XAI-based CDSS detects a tumor in a patient's MRI image, it explains how the tumor was detected from the image and what evidence supports the result. In this study, we propose an XAI-based CDSS framework and introduce its applications. Our main contributions are summarized as follows:
We perform a systematic review of explainable AI techniques that can ensure trustworthiness and transparency in the medical domain and present a forward-looking roadmap for XAI-based CDSSs.
We categorize various studies ranging from traditional CDSSs to state-of-the-art XAI-based CDSSs using appropriate criteria and summarize the features and limitations of each CDSS to propose a new XAI-based CDSS framework.
We propose a novel CDSS framework using the latest XAI technology and show its potential value by introducing areas that can be utilized most effectively.
This paper is organized as follows. In Section 2, we describe the research trends related to knowledge-based, non-knowledge-based, and XAI-based CDSSs and the limitations of existing technologies. In Section 3, we present the XAI-based CDSS framework and introduce the datasets and models required for its construction. Applications of XAI-based CDSSs are presented in Section 4, and conclusions are presented in Section 5.
2. Related Work
This section categorizes CDSSs into knowledge-based, non-knowledge-based, and XAI-based CDSSs, and each section describes the main research, technologies, and methodologies, with an overview shown in Figure 2.
2.1. Knowledge-Based CDSS
To manage and utilize big data, an architecture called a knowledge base has emerged, and a methodology has been proposed to incorporate into a CDSS a knowledge base built from correlations in accumulated data based on clinicians' experience [8]. Such systems are categorized as knowledge-based CDSSs, which support decision-making by inferring results from a rule-based knowledge base with an inference engine. Accordingly, it is important to design a knowledge base structure and an appropriate rule system for each field. The features, functions, and applied domains of knowledge-based CDSSs are organized in Table 1, which divides the categories based on the technologies used in knowledge-based CDSSs and details the characteristics of each technology, relevant research references, the main function (diagnosis and treatment, or alert and monitoring) of the relevant CDSS studies, and the applied medical domain. Tables 2 and 3 are presented in the same format.
Knowledge-based CDSSs define rules based on literature-, practice-, or patient-oriented evidence [9] and are therefore often used in clinical practice based on clinical guidelines or in evidence-based medicine (EBM). The rule-based inference methodology using evidential reasoning (RIMER) is based on a belief rule base (BRB) system [10,11,12]. BRB systems set belief degrees to represent different types of uncertain knowledge and extend if–then rules to represent knowledge. Most BRB-based CDSS frameworks comprise an interface layer, an application processing layer, and a data management layer [13,14,15]. These frameworks have proven their performance in various fields, such as COVID-19 [16], heart failure [17], psychogenic pain [18], tuberculosis [19], acute coronary syndrome (ACS) [20], and lymph node cancer metastasis [21].
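The core BRB idea, extending an if–then rule with a belief distribution over its consequents and aggregating rules by activation weight, can be sketched in a few lines of Python. The rules, membership functions, and numbers below are illustrative only, and a simplified weighted average stands in for the full evidential reasoning algorithm:

```python
# Minimal belief-rule-base (BRB) sketch: each rule extends a classic
# if-then rule with a belief distribution over consequents. Rule
# attributes and numbers are illustrative, not from a real system.

def brb_infer(rules, observation):
    """Aggregate consequent beliefs weighted by each rule's activation."""
    total_w = 0.0
    combined = {}
    for rule in rules:
        # Matching degree: product of per-attribute memberships in [0, 1].
        w = 1.0
        for attr, member in rule["antecedent"].items():
            w *= member(observation[attr])
        if w == 0:
            continue
        total_w += w
        for outcome, belief in rule["consequent"].items():
            combined[outcome] = combined.get(outcome, 0.0) + w * belief
    # Normalize (simplified weighted average, not the full ER algorithm).
    return {o: b / total_w for o, b in combined.items()} if total_w else {}

# Two toy rules for a flu-like presentation.
rules = [
    {"antecedent": {"temp": lambda t: min(max((t - 37.0) / 2.0, 0.0), 1.0)},
     "consequent": {"flu": 0.8, "cold": 0.2}},
    {"antecedent": {"temp": lambda t: min(max((39.0 - t) / 2.0, 0.0), 1.0)},
     "consequent": {"flu": 0.1, "cold": 0.9}},
]

result = brb_infer(rules, {"temp": 38.5})
print(result)
```

Because the belief degrees are carried through the aggregation, the output is a graded distribution over conclusions rather than a single fired rule, which is what distinguishes BRB from plain if–then inference.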
Figure 2. Overview of related work.
[Figure omitted. See PDF]
Beyond rule bases, effective knowledge representations for CDSSs include decision trees, Bayesian networks, and nearest neighbors [22]. A study leveraging decision trees proposed a knowledge modeling method in which a clinical model extracted from glaucoma clinical practice guidelines was represented as mind maps and converted into iterative decision trees by clinicians and engineers [23]. In a similar study, mind maps representing the clinical treatment process for thyroid nodules, obtained from clinicians, were converted into an iterative decision tree model to extract rules [24]. This is followed by converting tacit knowledge into explicit knowledge and, finally, into executable knowledge. In another study on a decision-tree-based CDSS, a pediatric allergy knowledge base, an ONNX inference engine, and a tree algorithm were used to provide diagnostic and treatment knowledge to clinicians [25].
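The conversion from a decision tree to executable if–then rules amounts to enumerating root-to-leaf paths. A minimal Python sketch, with a hand-built tree and illustrative clinical thresholds (not taken from any guideline):

```python
# Sketch: extracting if-then rules from a (hand-built) decision tree,
# mirroring how clinical mind maps converted to trees yield executable
# rules. The tree, features, and thresholds are illustrative only.

tree = {
    "feature": "intraocular_pressure", "threshold": 21,
    "left":  {"leaf": "observe"},
    "right": {"feature": "cup_disc_ratio", "threshold": 0.6,
              "left":  {"leaf": "re-examine"},
              "right": {"leaf": "suspect glaucoma"}},
}

def extract_rules(node, conditions=()):
    """Depth-first walk: every root-to-leaf path becomes one rule."""
    if "leaf" in node:
        return [(" AND ".join(conditions) or "TRUE", node["leaf"])]
    f, t = node["feature"], node["threshold"]
    return (extract_rules(node["left"],  conditions + (f"{f} <= {t}",)) +
            extract_rules(node["right"], conditions + (f"{f} > {t}",)))

for antecedent, action in extract_rules(tree):
    print(f"IF {antecedent} THEN {action}")
```

Each extracted rule is directly executable by a rule engine, which is the "executable knowledge" stage described above.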
Additionally, Bayesian-network-based CDSSs, which are used in various medical areas such as liver disease [26], breast cancer [27], infectious diseases [28], diabetes [29], angina [30], respiratory diseases [31], and lymph node cancer metastasis [32], use a what–if analysis mechanism. In the field of dental hygiene, a Bayesian network framework based on the expectation–maximization (EM) algorithm has recently been used to detect abnormal oral images [33]. Additionally, a hepatitis C virus (HCV) diagnosis system has been proposed that uses a fuzzy Bayesian network with a fuzzy ontology to resolve ambiguity and uncertainty in outbreaks [34].
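The what–if mechanism can be illustrated with a tiny naive Bayes model in Python: re-evaluating the posterior after toggling a symptom shows how the diagnostic belief would change. The priors and likelihoods below are invented for illustration:

```python
# What-if analysis with a toy naive Bayes model. All probabilities are
# illustrative, not clinical estimates.

priors = {"disease": 0.1, "healthy": 0.9}
likelihood = {  # P(symptom present | class)
    "fever": {"disease": 0.8, "healthy": 0.1},
    "cough": {"disease": 0.6, "healthy": 0.2},
}

def posterior(symptoms):
    """P(class | observed symptoms) under the naive independence assumption."""
    scores = {}
    for c, prior in priors.items():
        p = prior
        for s, present in symptoms.items():
            p *= likelihood[s][c] if present else (1 - likelihood[s][c])
        scores[c] = p
    z = sum(scores.values())
    return {c: p / z for c, p in scores.items()}

base = posterior({"fever": True, "cough": False})
what_if = posterior({"fever": True, "cough": True})  # what if cough appears?
print(base["disease"], what_if["disease"])
```

Comparing `base` and `what_if` is exactly the what–if query: the posterior probability of disease rises once the additional symptom is assumed present.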
A study using the K-nearest neighbor (KNN) algorithm structured medical information by classifying similar clinical cases using ontology extraction methods for case-based reasoning (CBR) [35]. Moreover, a computer-aided diagnosis (CAD) system proposed for melanoma diagnosis provides a related ontology based on asymmetry, border, color, and differential (ABCD) structure rules and classifies similar melanoma cases using the KNN algorithm [36]. Another study that uses knowledge similarity for decision-making, as in the aforementioned studies, provides an appropriate diagnosis for patients semantically classified by time series similarity based on their medical history [37].
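KNN-style case-based reasoning reduces to ranking past cases by similarity and voting among the nearest ones. A minimal sketch with invented ABCD-like feature vectors (not real dermoscopy scores):

```python
# KNN as case-based reasoning: classify a new case by the labels of the
# most similar stored cases. Feature vectors and labels are illustrative.
from collections import Counter
import math

cases = [  # ([asymmetry, border, color], diagnosis) -- toy case base
    ([0.9, 0.8, 0.7], "melanoma"),
    ([0.8, 0.9, 0.6], "melanoma"),
    ([0.2, 0.1, 0.3], "benign"),
    ([0.1, 0.2, 0.2], "benign"),
]

def knn_classify(query, k=3):
    """Vote among the k most similar past cases (Euclidean distance)."""
    ranked = sorted(cases, key=lambda c: math.dist(query, c[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(knn_classify([0.85, 0.75, 0.65]))
```

An ontology-based CBR system would replace the Euclidean distance with a semantic similarity over structured case descriptions, but the retrieve-and-vote loop is the same.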
Knowledge-based CDSSs support decision-making through a pre-built knowledge base, effective data modeling, and ongoing knowledge base updates for each domain. Recently, the most influential research area in knowledge-based CDSSs has been genomics, where clinical genomic data models have been proposed to analyze clinical genomic workflows for clinical applications of genomic information [38]. Various methodologies have also emerged to facilitate knowledge base updates by analyzing newly acquired textual knowledge through natural language processing to generate rules [39].
Knowledge-based CDSSs exhibit potential in that the decision-making process is clear and traceable. However, these systems are limited by maintenance and construction costs because they rely on medical specialists and knowledge engineers for standardization and error correction, and data quality control is essential [40].
Table 1. Knowledge-based CDSSs.

| Category | Features | Paper | Diagnosis and Treatment | Alert and Monitoring | Applied Domain |
|---|---|---|---|---|---|
| BRB | An extension of if–then rules utilizing belief rules with belief degrees to represent knowledge | [18] | ✓ | | Tuberculosis |
| | | [16] | ✓ | | Heart Failure |
| | | [19] | ✓ | | Acute Coronary Syndrome |
| | | [14] | ✓ | | Measles |
| | | [20] | ✓ | | Lymph Node Cancer |
| | | [15] | ✓ | | COVID-19 |
| Decision Tree | Represents knowledge as a tree structure with a hierarchy of knowledge relationships | [17] | ✓ | | Psychogenic Pain |
| | | [22] | ✓ | | Glaucoma |
| | | [23] | ✓ | | Thyroid |
| | | [24] | ✓ | | Allergies |
| Bayesian Network | Uses probabilities based on naïve Bayes to classify data | [38] | ✓ | | Glaucoma |
| | | [25] | ✓ | | Liver Disease |
| | | [26] | ✓ | | Breast Cancer |
| | | [27] | ✓ | | Infectious Diseases |
| | | [29] | ✓ | | Angina Pectoris |
| | | [30] | ✓ | | Respiratory Diseases |
| | | [31] | ✓ | | Lymph Node Cancer |
| | | [32] | ✓ | | Dental Hygiene |
| | | [33] | ✓ | | Hepatitis |
| | | [28] | ✓ | | Diabetes |
| Nearest Neighbors | Determines the class of a new instance using the attributes of its nearest neighbors | [35] | ✓ | | Melanoma Diagnosis |
| | | [36] | ✓ | | Diagnostics |
2.2. Non-Knowledge-Based CDSS
With the explosion of data and specialized knowledge, the amount of information that must be processed to make clinical decisions is growing astronomically. Deep learning and AI, which are based on artificial neural networks that learn from massive amounts of data much as humans do, can be used to support clinical decision-making. These methods analyze patterns in patient data to draw associations between symptoms and diagnoses. Moreover, deep learning and AI can analyze various data, including text, images, videos, audio, and signals, enabling the development of non-knowledge-based CDSSs that can understand the overall clinical situation and context. The first step toward a non-knowledge-based CDSS is analyzing images and using them to make clinical decisions. A convolutional neural network (CNN) [41], which learns image patterns by mimicking the structure of the human optic nerve, has been used to diagnose obstructive sleep apnea by learning high-order correlation features between polysomnography images and their labels [42,43], and an automated system has been proposed to optimize patient satisfaction by analyzing patients' experiences with ambulance transport services with a combined model of CNNs and word embeddings [44]. Similarly, a technique for diagnosing melanoma using a single CNN trained on a dataset of clinical images has been introduced [45].
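The basic operation underlying such CNNs, sliding a small kernel over an image to produce a feature map, can be written directly in NumPy. This sketch applies a fixed edge-detection kernel rather than learned filters:

```python
# A 2D convolution (valid padding), the core operation a CNN stacks and
# learns. Here a fixed Sobel-style kernel stands in for learned filters.
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image; each output pixel is a local dot product."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge kernel highlights intensity transitions, the kind of
# low-level feature early CNN layers learn from medical images.
image = np.array([[0, 0, 1, 1]] * 4, dtype=float)
sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
print(conv2d(image, sobel_x))
```

In a trained CNN, many such kernels are learned per layer and stacked, so deeper layers respond to increasingly abstract patterns (edges, textures, lesions).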
Recurrent neural networks (RNNs), which can handle time series data, have also been applied in many cases. Electronic health record (EHR) data, a digital version of a patient's paper chart, are good candidates for RNNs [46] because they provide clinical records with temporal information. A previous study [47] applied RNNs to the EHR data of heart failure patients to predict heart failure, and the method outperformed machine learning methods such as SVMs [48], MLPs [49], logistic regression [50], and KNNs [51]. Because ECG data also contain temporal information, ECG signals can be analyzed using RNN-based models to detect sleep apnea [52].
When dealing with clinical data, its long-term properties can cause models to forget previous data and ignore past information. Therefore, studies have used LSTMs [53] to predict future data while considering past data. An LSTM was used to learn multiple diagnostic labels to classify diagnoses [54], and oral–nasal thermal airflow, nasal pressure, and abdominal breathing plethysmography data from polysomnography were analyzed with a bidirectional LSTM model to diagnose sleep apnea [55]. Deep learning is also frequently applied in medical image analysis: chest radiographs can be analyzed to diagnose chest diseases such as lung nodules [56], lung cancer [57], and pulmonary tuberculosis [58].
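The recurrence that lets RNN and LSTM models carry temporal context can be reduced to a single hidden-state update. A minimal NumPy sketch with fixed, untrained toy weights (an LSTM adds gating on top of this same loop):

```python
# Minimal recurrent cell in NumPy: the hidden state carries information
# across time steps, which is why RNN/LSTM models suit EHR and ECG
# sequences. Weights here are random toy values, not trained.
import numpy as np

def rnn_forward(xs, W_x, W_h, b):
    """h_t = tanh(W_x x_t + W_h h_{t-1} + b); return the final hidden state."""
    h = np.zeros(W_h.shape[0])
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h + b)
    return h

rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 2)) * 0.5   # input -> hidden
W_h = rng.normal(size=(4, 4)) * 0.5   # hidden -> hidden (the recurrence)
b = np.zeros(4)

# Three "visits" with two measurements each, e.g. a toy EHR sequence.
sequence = [np.array([0.5, -0.1]), np.array([0.8, 0.2]), np.array([0.1, 0.9])]
h_final = rnn_forward(sequence, W_x, W_h, b)
print(h_final.shape)
```

Because `h` is fed back at every step, the final state summarizes the whole sequence; LSTM gates exist precisely to keep this summary from forgetting distant events.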
Unlike traditional supervised and unsupervised learning, reinforcement learning [59] generates its own training data by observing the current state and selecting future actions. Because existing CDSSs are trained on evaluations made by different clinicians with different criteria, interrelated symptoms are not considered in some cases. This problem can be addressed with reinforcement learning, which is suited to learning complex environments. A CDSS based on a deep reinforcement learning algorithm has been introduced to determine initial doses for ICU patients, where accurate medication prescription is critical to preventing mis-dosing and complications [60]. Reinforcement learning over secure computations enables patient-centered clinical decision-making systems while protecting sensitive clinical data: a privacy-preserving reinforcement learning framework with iterative secure computation was proposed to provide dynamic treatment decisions without leaking sensitive information to unauthorized users [3]. Reinforcement-learning-based conversational software for radiotherapy has also been studied, in which graph neural networks and reinforcement learning improve clinical decision-making performance in radiology, a field with many variables, uncertain treatment responses, and inter-patient heterogeneity [61].
BERT [62], a large language model based on the transformer [63], has been used to develop CDSSs with natural language understanding capabilities. To reduce diagnostic errors, a framework for multi-class classification of diagnosis codes in EHRs using BERT [64] has been developed to help clinicians predict the most likely diagnosis. However, specialists have raised concerns about the reliability and accountability of these deep learning and AI models because of their inability to explain their decisions, and they are therefore often unwilling to use them in diagnosis. To this end, it is necessary to adopt AI that provides evidence for its predictions and understandable explanations.
2.3. XAI-Based CDSS
Explainable AI (XAI) [6] has emerged to overcome the black-box problem [5] of deep learning models: deep learning models achieve the highest performance compared with rule-based or machine learning models but lack interpretability. This can be described as the "performance–interpretability trade-off" and is shown in Figure 3 [65,66]. Performance is highest for deep learning, followed by machine learning models (decision trees, nearest neighbors, and Bayesian networks) and rule-based models; transparency (interpretability) is inversely related. In other words, applying XAI to deep learning models makes it possible to explain the reason and logic behind the model's predictions, ensuring transparency and reliability of the results while retaining the high performance of deep learning. With attempts to apply XAI in various fields [67], XAI has gained attention as a solution to the uncertainty problem in CDSSs, where accuracy and reliability are critical [66].
Table 2. Non-knowledge-based CDSSs.

| Category | Features | Paper | Diagnosis and Treatment | Alert and Monitoring | Applied Domain |
|---|---|---|---|---|---|
| CNN | Extracted features are analyzed through connected convolutional layers | [42] | ✓ | | Sleep Apnea |
| | | [43] | ✓ | | Sleep Apnea |
| | | [45] | ✓ | | Melanoma |
| | | [44] | ✓ | | Ambulance Assignment |
| RNN | Processes time series data to find sequential patterns in the data | [47] | ✓ | | Heart Failure |
| | | [52] | ✓ | | Sleep Apnea |
| LSTM | Useful when dealing with long-term data, using historical data to predict future outcomes | [54] | ✓ | | Diagnostics |
| | | [55] | ✓ | | Sleep Apnea |
| | | [56] | ✓ | | Lung Nodules |
| | | [57] | ✓ | | Lung Cancer |
| Reinforcement Learning | Trains software to make decisions that achieve the most optimal results | [58] | ✓ | | Pulmonary Tuberculosis |
| | | [60] | ✓ | | Medications |
| | | [3] | ✓ | | Protecting Patient Information |
| Transformer | Specializes in processing text data with large language models using an encoder–decoder structure | [61] | ✓ | | Radiology |
| | | [64] | ✓ | | Diagnosis Code Categorization |
The explanation techniques used in XAI can be broadly categorized into scope-, model-, complexity-, and methodology-based techniques [67]. The most popular XAI methods in recent research include Shapley additive explanations (SHAP) [68], local interpretable model-agnostic explanations (LIME) [69], post hoc interpretability [70], and gradient-weighted class activation mapping (GradCAM) [70].
Scope-based techniques determine the contribution of features according to their importance to the trained AI model. A prominent local explainer is LIME [71], which explains a particular instance or outcome by generating local approximations to model predictions. For example, given the task of predicting emotions with a deep neural network, LIME focuses on surrounding local words to highlight the words that are important for a particular prediction. LIME directly explains how changes in the model's input data change the outcome; after the surrogate is trained, it can make guesses about previously unseen samples [72]. For COVID-19, LIME and traditional machine learning models were combined to identify the features of medical history, time of onset, and patients' primary symptoms with the greatest impact [68]. Similarly, an LSTM model was used in a study on depressive symptom detection, in conjunction with a LIME approach, to identify text suggestive of depressive symptoms [73]. Other applications include the diagnosis of Parkinson's disease [74], hip disease [75], Alzheimer's disease, and mild cognitive impairment [76].
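The LIME procedure, perturbing an instance, querying the black-box model, and fitting a proximity-weighted linear surrogate, can be sketched from scratch. The black-box function below is a toy stand-in, not a clinical model:

```python
# LIME-style local surrogate, sketched from scratch: perturb an instance,
# query the black-box model, and fit a proximity-weighted linear model
# whose coefficients rank local feature importance.
import numpy as np

def black_box(X):  # toy model: feature 0 dominates near the instance below
    return np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2

def lime_explain(x, n_samples=500, width=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=0.3, size=(n_samples, x.size))  # perturbations
    y = black_box(X)
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)                # proximity kernel
    A = np.hstack([X, np.ones((n_samples, 1))])       # linear surrogate + bias
    sw = np.sqrt(w)[:, None]                          # weighted least squares
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]                                  # per-feature weights

weights = lime_explain(np.array([0.2, 0.1]))
print(weights)  # feature 0 should dominate near this instance
```

The surrogate's coefficients are valid only near the explained instance, which is exactly the "local" in LIME; the production library additionally handles interpretable representations such as word masks or superpixels.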
By contrast, SHAP bases its explanations on Shapley values, which measure the contribution of each feature to the model. It is a global explainer that provides a theoretical interpretation of any dataset using cooperative game theory concepts [77], calculating the contributions of biomarkers or clinical features (players) to a specific disease outcome (reward) [72]. To predict postoperative malnutrition in children with congenital heart disease, the XGBoost and SHAP algorithms were used to calculate the average of five risk factors (weight one month after surgery, weight at discharge, etc.) for all patients [78]. PHysiologicAl Signal Embeddings (PHASE), a method for transforming time series signals into input features, was first applied to embed body signals with an LSTM model into features extracted using SHAP from EMR/EHR data [79]. In addition, a multi-layer XAI framework utilizing multimodal data, such as MRI and PET images, cognitive scores, and medical histories, has been proposed [80].
SHAP is applied to all layers of that framework: the first layer performs multi-class classification for the early diagnosis of Alzheimer's disease (AD), and in the second layer, the binary classification score is used to determine the transition from cognitive impairment to AD [80]. SHAP has likewise been widely used in various diseases and clinical domains, such as predicting readmission [81,82], COVID-19 [83,84,85,86], liver cancer [87], influenza [88], and malignant cerebral edema [89].
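For a handful of features, Shapley values can be computed exactly by enumerating coalitions, which clarifies what SHAP approximates at scale. The value function below is an invented toy payoff (e.g., model accuracy for a feature subset):

```python
# Exact Shapley values by enumerating feature coalitions -- feasible only
# for a few features, which is why SHAP relies on approximations in
# practice. The value function and features are illustrative.
from itertools import combinations
from math import factorial

features = ["age", "bmi", "bp"]

def value(coalition):
    """Toy payoff for a feature subset (think: model accuracy with it)."""
    v = {(): 0.0, ("age",): 0.2, ("bmi",): 0.1, ("bp",): 0.3,
         ("age", "bmi"): 0.4, ("age", "bp"): 0.6, ("bmi", "bp"): 0.5,
         ("age", "bmi", "bp"): 0.8}
    return v[tuple(sorted(coalition))]

def shapley(player):
    """Average marginal contribution of `player` over all coalitions."""
    n = len(features)
    others = [f for f in features if f != player]
    total = 0.0
    for r in range(n):
        for S in combinations(others, r):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (value(S + (player,)) - value(S))
    return total

phi = {f: shapley(f) for f in features}
print(phi)
```

The efficiency property holds by construction: the per-feature attributions sum to the payoff of the full feature set, which is what makes Shapley-based attributions additive ("SHAP").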
More recently, researchers have used LIME and SHAP simultaneously to ensure convincing system explanations. A hybrid approach combining a vision transformer (ViT) and a gated recurrent unit generated LIME heat maps using the top three features from brain MRI images, and SHAP was used to visualize the model's predictions to demonstrate the validity of data patterns [90]. In the field of chronic kidney disease, the LIME and SHAP algorithms were also used simultaneously to represent feature importance in the best model trained by five machine learning methods (random forest, decision tree, naïve Bayes, XGBoost, and logistic regression) [91].
Model-based techniques can be classified into model-specific and model-agnostic methods. Model-specific methods utilize the unique features of a model to make decisions, meaning they apply only to the internal operations of a specific model. An example is Score-CAM [92], which is based on CNNs and compares outputs for given input features, thereby indicating their importance. A previous study proposed a system for classifying images from a clock drawing test, a tool for diagnosing dementia, that was trained on API-Net [93] and visualized using Score-CAM to provide explainability and transparency [94]. By contrast, model-agnostic methods are model independent and can be applied to any model or algorithm. As a CDSS tool that reduces model dependency, a COVID-19 symptom severity classifier utilizing different machine learning models to identify high-risk COVID-19 patients has been proposed [95].
Complexity-based techniques concern how interpretable a machine learning or deep learning model can be made. Interpretability can be categorized into intrinsic interpretability [96] and post hoc interpretability [72], depending on the viewpoint. In general, intrinsic interpretability indicates that a model with a simple architecture can be explained by the trained model itself, whereas post hoc interpretability means that the trained model has a complex architecture and requires external methods after training to be explained. In a study on brain tumor detection based on MRI images, three pre-trained CNNs, DarkNet53 [92], EfficientNet-B0 [97], and DenseNet201 [98], were used to extract features in a hybrid methodology providing post hoc interpretability [99].
Another framework for brain tumor diagnosis, NeuroNet19, combines a 19-layer VGG19, which detects complex hierarchical features in images, with an inverted pyramid pooling module (iPPM) that refines these features, leveraging post hoc interpretability [100]. Methodology-based techniques are categorized into backpropagation-based and perturbation-based methods [67]; among the former, GradCAM was proposed to explain CNN models with good performance [101]. GradCAM is applied to the final convolutional layer of a CNN and uses the layer's gradient information to find the features that are most involved in a particular decision [72,102]. To further improve classification performance, studies have predicted oral cancer from oral images using a guided attention inference network (GAIN) together with GradCAM on the aforementioned CNN-based VGG19 model [103], and these methods have also been used to diagnose glaucoma from fundus images using GradCAM heatmaps with a ResNet-50 model [104]. Because CNN models are widely applied in image classification and processing, GradCAM is used in several studies utilizing image data [105,106,107,108,109,110,111].
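The GradCAM computation itself is short: global-average-pool the gradients to weight each feature map, sum, and apply ReLU. This NumPy sketch uses synthetic activations and gradients in place of a real CNN's:

```python
# Grad-CAM arithmetic in NumPy: given the final convolutional feature
# maps and the class-score gradients w.r.t. them (both synthetic here),
# the heatmap is a ReLU-ed, gradient-weighted sum of the maps.
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (channels, H, W) arrays from a CNN layer."""
    alphas = gradients.mean(axis=(1, 2))              # pooled gradients per map
    cam = np.tensordot(alphas, feature_maps, axes=1)  # weighted sum of maps
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam  # normalize to [0, 1]

rng = np.random.default_rng(1)
A = rng.random((8, 7, 7))          # stand-in for conv activations
dYdA = rng.normal(size=(8, 7, 7))  # stand-in for backpropagated gradients
heatmap = grad_cam(A, dYdA)
print(heatmap.shape)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the medical image, so the clinician sees which regions drove the class score.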
Table 3. XAI-based CDSSs.

| XAI | Techniques | Features | Paper | Diagnosis and Treatment | Alert and Monitoring | Applied Domain |
|---|---|---|---|---|---|---|
| Scope Based | Local | Considers the model as a black box and focuses on the local variables that contribute to the decision | [73] | ✓ | | Depression |
| | | | [74] | ✓ | | Parkinson |
| | | | [75] | ✓ | | Gait Classification |
| | | | [76] | ✓ | | Alzheimer's Disease |
| | | | [90] | ✓ | | Alzheimer's Disease |
| | | | [91] | ✓ | | Chronic Kidney Disease |
| | | | [68] | ✓ | | COVID-19 |
| | Global | Explains contributions to the output by understanding the interaction mechanism of the model variables | [78] | ✓ | | Malnutrition and Heart Disease |
| | | | [83] | ✓ | | Adverse |
| | | | [84] | ✓ | | COVID-19 |
| | | | [86] | ✓ | | COVID-19 |
| | | | [90] | ✓ | | Alzheimer's Disease |
| | | | [91] | ✓ | | Chronic Kidney Disease |
| | | | [79] | ✓ | | Surgical Event |
| | | | [81] | ✓ | | Hospital Readmission Risk |
| | | | [82] | ✓ | | Reattendance Risk |
| | | | [85] | ✓ | | Hospital Mortality |
| | | | [87] | ✓ | | Lung Cancer |
| | | | [88] | ✓ | | Mortality |
| | | | [89] | ✓ | | Malignant Cerebral Edema |
| Model Based | Model Specific | Applied to a certain scope of application | [94] | ✓ | | Visuospatial Deficits |
| | Model Agnostic | Has no special requirements for the model | [95] | ✓ | | COVID-19 |
| Complexity Based | Intrinsic | Model is structured to be understandable | [112] | ✓ | | Breast Cancer |
| | Post Hoc | Interpretable information obtained using external methods | [100] | ✓ | | Heart Failure |
| Methodology Based | Backpropagation Based | Backpropagates a significant signal from the output to the input | [103] | ✓ | | Oral Cancer |
| | | | [104] | ✓ | | Glaucoma |
| | | | [105] | ✓ | | Breast Cancer |
| | | | [106] | ✓ | | COVID-19 |
| | | | [107] | ✓ | | Glaucoma |
| | | | [108] | ✓ | | Fungal Keratitis |
| | | | [109] | ✓ | | COVID-19 |
| | | | [111] | ✓ | | COVID-19 |
| | | | [110] | ✓ | | Hepatocellular Carcinoma |
| | Perturbation Based | Changes the feature set of a given input and investigates the impact of these changes on the network output | [113] | ✓ | | Glioma |
3. Methods
3.1. Proposed Architecture
This study proposes an XAI-based CDSS framework that provides both high performance and interpretation of the decision-making process by applying explainable AI technology to artificial intelligence models. Traditional CDSSs have evolved from rule-based to machine learning CDSSs to AI-based CDSSs. However, although AI-based CDSSs guarantee high performance, these systems have a black-box problem: they cannot explain how model results are produced. In this paper, we propose an XAI framework with explainable and interpretable techniques. As mentioned in Section 2.3, XAI technologies can be broadly categorized into scope-based, model-based, complexity-based, and methodology-based technologies, and the proposed framework presents a flexible structure to which any of the four can be applied according to the desired purpose. In other words, the XAI CDSS framework, which ensures the high performance of deep learning together with interpretability and explainability, is a necessary technology for the medical domain, which requires transparency of AI models. Details of this framework are shown in Figure 4.
The proposed framework handles multimodal data from various medical domains, including text, audio, images, and genomes, and applies explainable AI methodologies to representative deep learning models. Finally, we demonstrate the potential value of the proposed framework by presenting application plans that illustrate the circumstances under which it can be utilized effectively.
Most existing AI-based CDSSs are limited to text data; even when the data range is extended, only one additional type of biometric signal or image is used. However, a multimodal data utilization plan is necessary to make clinical decisions that consider factors that are difficult to record in formal form, such as the patient's condition, facial expressions, and behavior, which change in real time. Consequently, the proposed framework must be capable of expanding its knowledge through the continuous learning and analysis of information derived from multimodal data. Because multimodal CDSSs can handle different types of data in combination, they can analyze patient-related information (e.g., vital signs, test results, images, and consultation records) in a multifaceted way and improve decision-making capabilities, especially in the context of bedside patient monitoring or tumor detection during surgery. As illustrated in Figure 2, a knowledge graph is constructed by extracting relationships between multimodal features obtained using models such as large language models, VATT, and audio analysis models from multimodal data such as text, images, and signals. Furthermore, through reinforcement or continuous learning, knowledge graphs can expand automatically, responding flexibly to new knowledge while retaining previous information.
A multimodal clinical XAI learning model for an explainable clinical decision-making system requires three elements: deep explanatory labels, an interpretable model, and model inference. First, the features that can explain the prediction results must be identified and labeled, and a model must be created in connection with a decision tree with high explanatory power. Explainable models are then inferred from the black-box models, the largest problem of existing AI methodologies, so that explanations are generated along with predictions. All of the aforementioned processes [114] are conducted through real-time interactions among patients, medical professionals, and clinical systems. The predictions and explanations generated by analyzing the data obtained from each participant are provided back to the participants to facilitate overall clinical decision-making, including monitoring, diagnosis, prescription, warning, and document management.
3.2. Dataset
3.2.1. Clinical Dataset
The first large-scale multimodal clinical dataset is UK Biobank [115]. Data have been collected since 2006, and the dataset includes several hierarchical data types, such as lifestyle habits, body information, biological samples, electrocardiogram data, and EHR data from more than 500,000 participants. In addition to basic biometric data, the dataset provides genomic analysis, exon mutation testing, full-length genome sequencing, brain MRI, cardiac MRI, abdominal MRI, carotid artery ultrasonography, and radiographic findings. Similar datasets were also obtained from the China Kadoorie Biobank [116] and Biobank Japan [117]. The MIMIC dataset [118], published by the Massachusetts Institute of Technology, is now in its fourth version. It is an open-source dataset comprising EHR data, including demographic information, diagnostic codes, and medications, obtained from ICU patients at the Beth Israel Deaconess Medical Center. MIMIC-IV is one of the most representative datasets for clinical AI models that aim to predict clinical events or readmissions. It contains textual data, such as reports and medical notes, and imaging data, including laboratory and physiological data and chest radiographs. Furthermore, it is possible to construct a multimodal dataset by combining data from single modalities, for example, the Alzheimer's Disease Neuroimaging Initiative (ADNI) brain image dataset [119] and exercise activity data from patients with schizophrenia and depression [120]. Table 4 summarizes the monomodal and multimodal clinical datasets.
3.2.2. Knowledge Graph
The use of knowledge graphs enables the formal expression of medical expertise and the semantic structuring of unstructured multimodal data. While learning from existing HKGs, the knowledge graph can be expanded using new data. In particular, the emergence of large language models has facilitated the construction of more comprehensive and accurate HKGs. HKGs for CDSSs exist in fields such as medicine (prescription) [141,142], genetics [142,143], and disease [144,145], and an extended knowledge graph can be generated on this basis.
To respond effectively to new clinical cases, information collected in real time should be used in conjunction with existing knowledge. As mentioned previously, knowledge graphs must be automatically scalable, and reinforcement learning can be applied to ensure persuasive power. Numerous knowledge graphs exist in clinical fields other than CDSSs; these are summarized in Table 5 by application range.
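As a minimal illustration of this triple-based structuring, the sketch below stores clinical facts as subject-predicate-object triples and extends the graph with newly observed data. `ClinicalKG` and the example facts are invented for this sketch and are far simpler than the cited HKGs.

```python
# Illustrative triple store for clinical knowledge; 'ClinicalKG' and the
# example facts are invented for this sketch, not taken from the cited HKGs.

class ClinicalKG:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        """Insert one fact, e.g. ('aspirin', 'treats', 'fever')."""
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return every triple matching the (possibly partial) pattern."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

kg = ClinicalKG()
kg.add("aspirin", "treats", "fever")             # existing knowledge
kg.add("aspirin", "interacts_with", "warfarin")  # existing knowledge
kg.add("ibuprofen", "treats", "fever")           # new fact extends the graph

print(kg.query(predicate="treats", obj="fever"))
```

A production HKG would add typed entities, provenance, and ontology alignment on top of this pattern; the partial-match `query` stands in for graph pattern matching.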
3.3. XAI Model
Prior to analyzing multimodal data, the modalities must be fused. In particular, it is important to select meaningful features to obtain the desired information from vast amounts of data. To achieve this, deep learning, which trains neural networks composed of multiple layers, can be used to extract features and representations from complex data. Deep learning models [158] are well suited to integrating and extracting meaningful information owing to their capacity to learn complex patterns and generate knowledge for decision-making by processing vast amounts of data. Models [159] pre-trained on large datasets such as ImageNet [160] or natural language corpora can be employed to obtain correlations by generating new samples across modalities with generative models such as GANs [161] or VAEs [162].
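A minimal late-fusion sketch in the spirit of this paragraph is shown below: embeddings from two modalities are concatenated and passed through one linear layer to form a joint representation. The dimensions and random "embeddings" are placeholders, not the outputs of any real pretrained model.

```python
import numpy as np

# Hypothetical late-fusion sketch: embeddings from two modalities (e.g. an
# image encoder and a text encoder) are concatenated and projected through
# one linear layer into a joint representation. All sizes are arbitrary.

rng = np.random.default_rng(0)

image_emb = rng.standard_normal(512)   # stand-in for a pretrained CNN output
text_emb = rng.standard_normal(256)    # stand-in for a language-model output

fused = np.concatenate([image_emb, text_emb])          # shape (768,)

W = rng.standard_normal((128, fused.shape[0])) * 0.01  # fusion layer weights
joint = np.maximum(W @ fused, 0.0)                     # linear map + ReLU

print(joint.shape)  # (128,)
```

In practice the fusion layer would be trained jointly with (or on top of) the modality encoders; concatenation is only the simplest of the fusion strategies discussed here.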
The advent of the transformer [62] has made multimodal data more accessible, as it enables inference based on an attention network rather than the convolutional structures described previously. Because the transformer encodes all data modalities in a consistent structure, multiple modalities can be learned together. Since the proposal of the vision transformer (ViT) [163], which encodes images in a manner similar to natural language, there have been numerous attempts to apply the transformer to other modalities such as video and voice.
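The attention computation underlying the transformer can be sketched as follows. This is the standard scaled dot-product attention applied to toy data; the shapes are chosen arbitrarily, and in a multimodal transformer, tokens from different modalities share this same computation.

```python
import numpy as np

# Standard scaled dot-product attention, the core operation of the
# transformer, shown on toy matrices.

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 query tokens of dimension 8
K = rng.standard_normal((6, 8))   # 6 key tokens
V = rng.standard_normal((6, 8))   # 6 value tokens

out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```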
Recently, VATT [164], a framework for learning multimodal representations from unlabeled signals, has been developed. Using video, audio, and text features extracted from Internet videos, it can perform a range of tasks, including action recognition, audio event classification, image classification, and text-to-video retrieval. Based on the VATT model, a patient's daily-life and counseling videos can be analyzed to identify biometric signals, changes over time, and nonverbal expressions for use in clinical decision-making.
XAI techniques can be broadly divided by explanation method, interpretation method, model specificity, and explanation range [165], as shown in Figure 5. Backpropagation-based XAI measures the degree to which each feature affects the result as a gradient value. Class activation map (CAM)-based XAI visualizes the features with the greatest impact using the feature map of the uppermost layer, which aggregates the necessary information. Finally, input perturbation-based XAI provides explanatory power by repeatedly probing the model while making various changes to its input values.
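The perturbation idea can be sketched with a simple occlusion map: slide a patch of zeros over the input, re-run the model, and record how much the prediction drops. The toy linear `toy_model` below is a stand-in for a real classifier, not part of any cited system.

```python
import numpy as np

# Occlusion-style attribution sketch: mask one region at a time and
# attribute to it the resulting drop in the model's score.

def occlusion_map(model, image, patch=4):
    base = model(image)
    h, w = image.shape
    attribution = np.zeros_like(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # mask one region
            # score drop attributed to the masked region
            attribution[i:i + patch, j:j + patch] = base - model(occluded)
    return attribution

rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 8))

def toy_model(x):
    # Stand-in linear scorer; a real use would call a trained classifier.
    return float((weights * x).sum())

img = rng.standard_normal((8, 8))
attr = occlusion_map(toy_model, img)
print(attr.shape)  # (8, 8)
```

Because occlusion needs only model outputs, not gradients, it is model-agnostic, which is why this family of methods appears in the model-agnostic column of the XAI taxonomy.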
4. Applications
4.1. Function of CDSSs
The objective of CDSSs is to facilitate optimal decision-making for patient safety. Consequently, the role of CDSSs is to support the decision-making processes involved in diagnosis, treatment, and prescription, which are directly related to patient safety. Table 6 illustrates the functions of CDSSs.
The first function of CDSSs is diagnostic support. CDSSs provide diagnostic information on suspected diseases by collecting and monitoring indicators of the patient's condition, such as biosignals. This function supports efficient bed management by providing useful decision support to inexperienced clinicians and nursing personnel.
The second function is treatment support. This function supports the determination of the optimal treatment by analyzing all applicable treatment methods in light of the patient's current condition. Through treatment method analysis, factors such as inconsistencies, errors, omissions, and side effects across treatment methods can be identified. This is the most commonly studied area; it provides information on interactions between prescribed drugs and checks for side effects when multiple drugs are prescribed together. The third function of CDSSs is medical image analysis. As the performance of deep learning models on image data has improved, this has become the most common AI-related CDSS function in the medical field. By using deep learning to analyze medical image data such as X-rays, MRIs, and CT scans, which are most commonly used to identify patient diseases, more accurate decision support can be provided to clinicians.
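As an illustration of this kind of interaction check, the sketch below screens a prescription against a small pairwise interaction table. The `INTERACTIONS` table, drug names, and `check_prescription` helper are invented for illustration; a real CDSS would query a curated drug-interaction knowledge base.

```python
from itertools import combinations

# Hypothetical drug-drug interaction screen of the kind a treatment-support
# module might perform. The table below is illustrative, not clinical data.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "severe hypotension",
}

def check_prescription(drugs):
    """Return a warning for every interacting pair in a prescription."""
    warnings = []
    for a, b in combinations(drugs, 2):
        effect = INTERACTIONS.get(frozenset({a, b}))
        if effect:
            warnings.append(f"{a} + {b}: {effect}")
    return warnings

print(check_prescription(["warfarin", "aspirin", "metformin"]))
# → ['warfarin + aspirin: increased bleeding risk']
```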
Finally, the system serves a risk notification function. This function is typically active for admitted patients; it immediately alerts medical personnel when abnormal symptoms or dangerous levels are identified while collecting and monitoring patient biometric signals such as pulse rate, blood pressure, and temperature.
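A minimal sketch of such an alerting loop is shown below. The `NORMAL_RANGES` thresholds are commonly cited adult reference values used purely for illustration, and `check_vitals` is a hypothetical helper, not part of any cited system.

```python
# Illustrative risk-notification sketch: compare monitored vital signs
# against reference ranges and emit an alert for each out-of-range value.
# The ranges are common adult reference values, used only for illustration.

NORMAL_RANGES = {
    "pulse_bpm": (60, 100),
    "systolic_mmHg": (90, 140),
    "temperature_C": (36.1, 37.8),
}

def check_vitals(vitals):
    """Return one alert string per vital sign outside its normal range."""
    alerts = []
    for name, value in vitals.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append(f"ALERT: {name}={value} outside [{low}, {high}]")
    return alerts

# Tachycardic pulse and fever trigger alerts; blood pressure does not.
print(check_vitals({"pulse_bpm": 118, "systolic_mmHg": 120, "temperature_C": 38.4}))
```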
4.2. The Potential of XAI-Based CDSSs
The most significant features of the proposed XAI-based CDSS framework are generalization through the use of multimodal data, scalability to apply various state-of-the-art (SOTA) deep learning models, and trustworthiness through explainable AI technologies. The fusion and learning of the various types of multimodal data generated in the medical domain can reveal features and patterns not found by models that address only single modalities. This can be achieved by developing a generalized foundation model from deep learning models that have achieved SOTA performance in each field. Finally, the decision-making process can be explained and interpreted using various explainable AI technologies, enabling the development of a transparent and reliable clinical decision-making system. Figure 6 illustrates the areas where the XAI-based CDSS framework proposed in this study is most efficiently utilized. Disease detection is the first field in which it can be used. Research on deep learning-based disease detection has already demonstrated high performance; however, issues such as the black-box problem of deep learning hinder its use in actual medical and nursing practice. To address this challenge, explainable AI technology is employed to elucidate the rationale behind the predictions generated by the deep learning model, thereby facilitating its deployment in the medical and nursing domain.
The second area of application is nurses' clinical decision support for inpatients. In clinical settings, the number of professional nurses available to monitor patients is often insufficient. Furthermore, it is challenging for inexperienced nurses to identify and interpret the symptoms relevant to a patient's condition. The XAI-based CDSS framework can monitor multiple patients simultaneously and efficiently support the decision-making of professional nurses by providing the cause and basis whenever abnormal symptoms are detected.
The third area of application is treatment and prescription. By analyzing the patient's medical history or medical records, treatment and prescription can be customized and optimized for the current patient. Specifically, when prescribing a drug, more optimal prescription results for patient safety can be derived by analyzing in advance the potential side effects that may occur between the components of co-prescribed drugs and providing this information to professional medical personnel.
The final area in which XAI-based CDSSs can be utilized is clinical practice training. Unlike the aforementioned applications in hospitals and medical institutions, the system is here employed in the education of professional personnel. In particular, for professional nurses, clinical practice on cases that may occur in hospitals is conducted in educational institutions. XAI-based CDSSs can explain the decisions that should be made in situations that may arise in the ward, together with the reasons for those decisions.
5. Discussion and Conclusions
The main contribution of this paper is to categorize various studies, ranging from traditional CDSSs to state-of-the-art XAI-based CDSSs, according to appropriate criteria and to systematically summarize the features and limitations of each CDSS. Furthermore, we propose a new CDSS framework utilizing the latest XAI technology, systematically review explainable AI techniques that can ensure trustworthiness and transparency in the healthcare domain by introducing the areas where they can be most effectively utilized, and provide a future-proof roadmap for XAI-based CDSSs. By organizing the CDSS services introduced above by field and feature, it is possible to grasp the strengths and weaknesses of each CDSS service and the requirements for future CDSS services. However, existing systems have limitations, including a limited data utilization range, a lack of explanatory power in AI models, and opacity in the decision-making process. Consequently, in the medical field, which demands reliability and transparency, the decision-making process must be represented clearly enough for users to interpret it. This paper addressed the black-box problem of non-knowledge-based CDSSs and proposed an XAI-based CDSS framework that provides valid evidence and reasons for its results. Furthermore, it introduced the available datasets, models, and resources.
The proposed framework is designed to construct an automatically extended knowledge graph with multimodal features derived from multi-format data. It comprises three key elements: a deep explanatory label, an interpretable model, and model inference, which collectively facilitate explainable AI. The framework has the potential to automate the entire process of medical clinical services, from personalized treatment to real-time reflection of the patient's condition. It distinguishes itself from existing systems in its multimodal data management, utilization plan, explainable AI application plan, and CDSS application range. Furthermore, the medical knowledge graphs and HKGs that can be structured from available medical multimodal data are summarized to enhance expertise in decision-making. The proposed XAI-based CDSS framework serves as a foundational model that can be flexibly applied to multiple disease domains. This approach enables the development of a medical system with minimal temporal and spatial constraints.
Furthermore, advancements in the medical field are facilitated by the ease with which computerized data can be used for research purposes. However, measures to enhance social awareness are necessary because current medical data are not readily accessible owing to concerns regarding the protection of sensitive personal information. Additionally, there is a prevailing attitude that only human judgment should be trusted, even when the data sources, the decision-making process, and the results are explained transparently. As non-face-to-face medical systems become increasingly prevalent, relevant legal deregulation must be enacted. If social awareness and institutional improvements are ensured, this is expected to facilitate the development of compelling medical solutions through CDSS research and development that integrates richer medical multimodal data, medical knowledge graphs, and XAI technology. For future research, we have officially obtained data on post-abdominal-surgery patients from the hospital and will apply an XAI-based CDSS to abdominal surgery patients to verify its clinical feasibility in practice.
Conceptualization, S.Y.K. and O.R.J.; methodology, D.H.K.; software, H.J.K.; validation, D.H.K., M.J.K. and H.J.K.; formal analysis, D.H.K.; investigation, M.J.K. and H.J.K.; resources, H.J.K.; data curation, M.J.K.; writing—original draft preparation, M.J.K. and H.J.K.; writing—review and editing, D.H.K.; visualization, D.H.K.; supervision, O.R.J.; project administration, O.R.J.; funding acquisition, S.Y.K. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflicts of interest.
Figure 1. Difference between knowledge-based, non-knowledge-based, and XAI-based CDSSs.
Table 4. Datasets.
| Category | Modality | Dataset | Features | # of Samples |
|---|---|---|---|---|
| Single Modality | Image | MURA [ | Musculoskeletal radiology images | 40,561 radiograph images |
| | | MRNet [ | MRI images | 1370 knee MRI images |
| | | RSNA [ | Chest X-ray images | 29,687 X-ray images |
| | | Demner, F., 2016 [ | Chest X-ray images | 3996 radiology reports and 8121 associated images |
| | | OASIS [ | MRI images | 140,000 MRI images |
| | | ADNI [ | MRI images | 2733 MRI images |
| | | X. Wang, 2017 [ | Chest X-ray images | 108,948 frontal-view X-ray images of 32,717 patients |
| | | Armato III, 2011 [ | CT images | 7371 lesion images |
| | | TCIA [ | Cancer tumor images | 50 million MRI images |
| | EHR | MIMIC-III [ | Demographics, clinical history, diagnoses, prescriptions, physical information, etc. | 112,000 clinical report records |
| | | eICU [ | Management activities, clinical assessments, treatment plans, vital sign measurements, etc. | 200,000 critical care patients |
| | Text | 2010 i2b2/VA [ | Discharge summaries, progress reports, radiology reports, pathology reports | 72,846 entities from 826 notes |
| | | 2012 i2b2 [ | Discharge summaries | 30,062 entities from 310 notes |
| | | 2014 i2b2/UTHealth [ | Longitudinal medical records | 1304 medical records of 297 patients |
| | | 2018 n2c2 [ | Discharge summaries | 202 patient records with document-level annotations |
| Multiple Modalities | Genome, Image | TCGA [ | Genome, medical images | 11,000 human tumors across 33 different cancer types |
| | Genome, Image, EHR | UK Biobank [ | Clinical information, genomic analysis, exon variant testing, whole-genome sequencing, MRI, etc. | 500,000 individual clinical records |
| | Image, Text | ImageCLEFmed [ | Diagnostic images, visible light images, signals, waves, etc. | 4000 radiology images with 4000 associated question-answer pairs |
| | | Openi [ | Medical articles, images | 3.7 million images from about 1.2 million PubMed articles |
| | Multiple signals | PhysioNet [ | Biomedical signals | Deidentified health data associated with over 200,000 ICU admissions |
| | Video | MedVidCL [ | Medical instruction videos | 6617 annotated videos |
| | | UNBC-McMaster [ | Patients with shoulder pain | 200 recorded sequences from 25 patients |
| | | MedVidQA [ | Medical instruction videos | 3010 manually created health-related questions with timestamps as visual answers |
Table 5. Clinical knowledge graphs.
| | CDSS | Bio-Informatics | Medicine | Pharmaceutical Chemistry |
|---|---|---|---|---|
| HKG | DrugBank [ | Gene Ontology [ | GEFA [ | HetioNet [ |
| | ROBOKOP [ | Reaction [ | | |
| | KnowLife [ | KEGG [ | ASICS [ | DrKG [ |
| | | Disease Ontology [ | Hetionet [ | |
| | iBKH [ | Cell Ontology [ | GP-KG [ | PrimeKG [ |
| | PharmKG [ | DRKF [ | | |
XAI Models.
| Category | Model | Features | Post Hoc | Global | Local | Model Specific | Model Agnostic |
|---|---|---|---|---|---|---|---|
| CAM Based | GradCAM [ | Localizes important features by treating the gradient of the prediction with respect to the activation map as a weight | ✓ | | ✓ | ✓ | |
| Backpropagation Based | Gradient [ | Reflects the rate of output change as the input changes | ✓ | | ✓ | ✓ | |
| | Guided BackProp [ | Backpropagates only non-negative input and output gradients | ✓ | | ✓ | ✓ | |
| | GuidedGradCAM [ | Computes the element-wise product of the guided backpropagation signal and the CAM signal | ✓ | | ✓ | ✓ | |
| | DeepLift [ | Modifies the backpropagation rules to compare activations against a reference, accounting for the resulting prediction difference | ✓ | | ✓ | ✓ | |
| | Integrated Gradients [ | Accumulates gradients along a path from a neutral baseline input to the target input | ✓ | | ✓ | ✓ | |
| | GradientShap [ | Approximates Shapley values by computing the expectation of gradients | ✓ | | ✓ | ✓ | |
| | Deconvolution [ | Modifies the gradient calculation rule at ReLU functions, backpropagating only non-negative output gradients | ✓ | | ✓ | ✓ | |
| | SmoothGrad [ | Smooths noisy gradient signals by averaging heatmaps over the input and noise-perturbed neighbor samples | ✓ | | ✓ | ✓ | |
| Input Perturbation Based | Occlusion [ | Occludes a portion of the image with a sliding window and uses the average output difference as the feature attribution | ✓ | | ✓ | | ✓ |
| | Shapley Value Sampling [ | Computes Shapley values over a sampled subset of all possible feature combinations | ✓ | | ✓ | | ✓ |
| | Kernel Shap [ | Approximates Shapley values with a weighted local surrogate model | ✓ | | ✓ | | ✓ |
| | Feature Permutation [ | Shuffles feature values within a batch and computes the resulting prediction difference | ✓ | ✓ | | | ✓ |
| | LIME [ | Samples neighboring data around the input to train an interpretable surrogate model | ✓ | | ✓ | | ✓ |
References
1. Khalifa, M. Clinical decision support: Strategies for success. Procedia Comput. Sci.; 2014; 37, pp. 422-427. [DOI: https://dx.doi.org/10.1016/j.procs.2014.08.063]
2. Simon, S.R.; Smith, D.H.; Feldstein, A.C.; Perrin, N.; Yang, X.; Zhou, Y.; Platt, R.; Soumerai, S.B. Computerized prescribing alerts and group academic detailing to reduce the use of potentially inappropriate medications in older people. J. Am. Geriatr. Soc.; 2006; 54, pp. 963-968. [DOI: https://dx.doi.org/10.1111/j.1532-5415.2006.00734.x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16776793]
3. Liu, X.; Deng, R.H.; Choo, K.K.R.; Yang, Y. Privacy-preserving reinforcement learning design for patient-centric dynamic treatment regimes. IEEE Trans. Emerg. Top. Comput.; 2019; 9, pp. 456-470. [DOI: https://dx.doi.org/10.1109/TETC.2019.2896325]
4. De Fauw, J.; Ledsam, J.R.; Romera-Paredes, B.; Nikolov, S.; Tomasev, N.; Blackwell, S.; Askham, H.; Glorot, X.; O’Donoghue, B.; Visentin, D. et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med.; 2018; 24, pp. 1342-1350. [DOI: https://dx.doi.org/10.1038/s41591-018-0107-6] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30104768]
5. Liang, Y.; Li, S.; Yan, C.; Li, M.; Jiang, C. Explaining the black-box model: A survey of local interpretation methods for deep neural networks. Neurocomputing; 2021; 419, pp. 168-182. [DOI: https://dx.doi.org/10.1016/j.neucom.2020.08.011]
6. Doshi-Velez, F.; Kim, B. Towards a rigorous science of interpretable machine learning. arXiv; 2017; [DOI: https://dx.doi.org/10.48550/arXiv.1702.08608] arXiv: 1702.08608
7. Sutton, R.T.; Pincock, D.; Baumgart, D.C.; Sadowski, D.C.; Fedorak, R.N.; Kroeker, K.I. An overview of clinical decision support systems: Benefits, risks, and strategies for success. NPJ Digit. Med.; 2020; 3, 17. [DOI: https://dx.doi.org/10.1038/s41746-020-0221-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32047862]
8. Syeda-Mahmood, T. Role of big data and machine learning in diagnostic decision support in radiology. J. Am. Coll. Radiol.; 2018; 15, pp. 569-576. [DOI: https://dx.doi.org/10.1016/j.jacr.2018.01.028] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29502585]
9. Sim, I.; Gorman, P.; Greenes, R.A.; Haynes, R.B.; Kaplan, B.; Lehmann, H.; Tang, P.C. Clinical decision support systems for the practice of evidence-based medicine. J. Am. Med. Inform. Assoc.; 2001; 8, pp. 527-534. [DOI: https://dx.doi.org/10.1136/jamia.2001.0080527]
10. Yang, J.B.; Singh, M.G. An evidential reasoning approach for multiple-attribute decision making with uncertainty. IEEE Trans. Syst. Man Cybern.; 1994; 24, pp. 1-18. [DOI: https://dx.doi.org/10.1109/21.259681]
11. Yang, J.B. Rule and utility based evidential reasoning approach for multiattribute decision analysis under uncertainties. Eur. J. Oper. Res.; 2001; 131, pp. 31-61. [DOI: https://dx.doi.org/10.1016/S0377-2217(99)00441-5]
12. Yang, J.B.; Liu, J.; Wang, J.; Sii, H.S.; Wang, H.W. Belief rule-base inference methodology using the evidential reasoning approach-RIMER. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum.; 2006; 36, pp. 266-285. [DOI: https://dx.doi.org/10.1109/TSMCA.2005.851270]
13. Rahaman, S.; Islam, M.M.; Hossain, M.S. A belief rule based clinical decision support system framework. Proceedings of the 2014 17th International Conference on Computer and Information Technology (ICCIT); Dhaka, Bangladesh, 22–23 December 2014; pp. 165-169. [DOI: https://dx.doi.org/10.1109/ICCITechn.2014.7073083]
14. Kong, G.; Xu, D.L.; Liu, X.; Yang, J.B. Applying a belief rule-base inference methodology to a guideline-based clinical decision support system. Expert Syst.; 2009; 26, pp. 391-408. [DOI: https://dx.doi.org/10.1111/j.1468-0394.2009.00500.x]
15. Hossain, M.S.; Andersson, K.; Naznin, S. A belief rule based expert system to diagnose measles under uncertainty. World Congress in Computer Science, Computer Engineering, and Applied Computing (WORLDCOMP’15), Proceedings of the 2015 International Conference on Health Informatics and Medical Systems 2015, Dallas, TX, USA, 21–23 October 2015; CSREA Press: Las Vegas, NV, USA, 2015; pp. 17-23.
16. Ahmed, F.; Hossain, M.S.; Islam, R.U.; Andersson, K. An evolutionary belief rule-based clinical decision support system to predict COVID-19 severity under uncertainty. Appl. Sci.; 2021; 11, 5810. [DOI: https://dx.doi.org/10.3390/app11135810]
17. Rahaman, S.; Hossain, M.S. A belief rule based clinical decision support system to assess suspicion of heart failure from signs, symptoms and risk factors. Proceedings of the 2013 International Conference on Informatics, Electronics and Vision (ICIEV); Dhaka, Bangladesh, 17–18 May 2013; pp. 1-6. [DOI: https://dx.doi.org/10.1109/ICIEV.2013.6572668]
18. Kong, G.; Xu, D.L.; Body, R.; Yang, J.B.; Mackway-Jones, K.; Carley, S. A belief rule-based decision support system for clinical risk assessment of cardiac chest pain. Eur. J. Oper. Res.; 2012; 219, pp. 564-573. [DOI: https://dx.doi.org/10.1016/j.ejor.2011.10.044]
19. Hossain, M.S.; Ahmed, F.; Andersson, K. A belief rule based expert system to assess tuberculosis under uncertainty. J. Med. Syst.; 2017; 41, 43. [DOI: https://dx.doi.org/10.1007/s10916-017-0685-8] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28138886]
20. Hossain, M.S.; Rahaman, S.; Mustafa, R.; Andersson, K. A belief rule-based expert system to assess suspicion of acute coronary syndrome (ACS) under uncertainty. Soft Comput.; 2009; 22, pp. 7571-7586. [DOI: https://dx.doi.org/10.1007/s00500-017-2732-2]
21. Zhou, Z.G.; Liu, F.; Jiao, L.C.; Zhou, Z.J.; Yang, J.B.; Gong, M.G.; Zhang, X.P. A bi-level belief rule based decision support system for diagnosis of lymph node metastasis in gastric cancer. Knowl. Based Syst.; 2013; 54, pp. 128-136. [DOI: https://dx.doi.org/10.1016/j.knosys.2013.09.001]
22. Silva, B.; Hak, F.; Guimaraes, T.; Manuel, M.; Santos, M.F. Rule-based system for effective clinical decision support. Procedia Comput. Sci.; 2023; 220, pp. 880-885. [DOI: https://dx.doi.org/10.1016/j.procs.2023.03.119]
23. Hyungwon, Y. Clinical Knowledge Modeling for Thyroid Nodule Surgical Treatment CDSS Using Mind Maps and Iterative Decision Trees. J. Korean Inst. Commun. Sci.; 2020; 37, pp. 28-33.
24. Yu, H.W.; Hussain, M.; Afzal, M.; Ali, T.; Choi, J.Y.; Han, H.S.; Lee, S. Use of mind maps and iterative decision trees to develop a guideline-based clinical decision support system for routine surgical practice: Case study in thyroid nodules. J. Am. Med. Inform. Assoc.; 2019; 26, pp. 524-536. [DOI: https://dx.doi.org/10.1093/jamia/ocz001] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31087071]
25. Yu, H.W. Design and Implementation of a Clinical Decision Support System for Supporting Allergy Diagnosis and Treatment Decision Making for Pediatricians. J. Knowl. Inf. Technol. Syst.; 2023; 18, 5250535. [DOI: https://dx.doi.org/10.34163/jkits.2023.18.3.003]
26. Wasyluk, H.; Onisko, A.; Druzdzel, M. Support of diagnosis of liver disorders based on a causal Bayesian network model. Med. Sci. Monit.; 2001; 7, pp. 327-331. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/12211748]
27. Cruz-Ramirez, N.; Acosta-Mesa, H.G.; Carrillo-Calvet, H.; Nava-Fernández, L.A.; Barrientos-Martinez, R.E. Diagnosis of breast cancer using Bayesian networks: A case study. Comput. Biol. Med.; 2007; 37, pp. 1553-1564. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2007.02.003] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17434159]
28. Lucas, P.J.; De Bruijn, N.C.; Schurink, K.; Hoepelman, A. A probabilistic and decision-theoretic approach to the management of infectious disease at the ICU. Artif. Intell. Med.; 2000; 19, pp. 251-279. [DOI: https://dx.doi.org/10.1016/S0933-3657(00)00048-8] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/10906615]
29. Andreassen, S.; Benn, J.J.; Hovorka, R.; Olesen, K.G.; Carson, E.R. A probabilistic approach to glucose prediction and insulin dose adjustment: Description of metabolic model and pilot evaluation study. Comput. Methods Programs Biomed.; 1994; 41, pp. 153-165. [DOI: https://dx.doi.org/10.1016/0169-2607(94)90052-3] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/8187463]
30. Vila-Francés, J.; Sanchis, J.; Soria-Olivas, E.; Serrano, A.J.; Martinez-Sober, M.; Bonanad, C.; Ventura, S. Expert system for predicting unstable angina based on Bayesian networks. Expert Syst. Appl.; 2013; 40, pp. 5004-5010. [DOI: https://dx.doi.org/10.1016/j.eswa.2013.03.029]
31. Edye, E.O.; Kurucz, J.F.; Lois, L.; Paredes, A.; Piria, F.; Rodríguez, J.; Delgado, S.H. Applying Bayesian networks to help physicians diagnose respiratory diseases in the context of COVID-19 pandemic. Proceedings of the 2021 IEEE URUCON; Montevideo, Uruguay, 24–26 November 2021; pp. 368-371.
32. Reijnen, C.; Gogou, E.; Visser, N.C.; Engerud, H.; Ramjith, J.; Van Der Putten, L.J.; Van de Vijver, K.; Santacana, M.; Bronsert, P.; Bulten, J. et al. Preoperative risk stratification in endometrial cancer (ENDORISK) by a Bayesian network model: A development and validation study. PLoS Med.; 2020; 17, e1003111. [DOI: https://dx.doi.org/10.1371/journal.pmed.1003111] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32413043]
33. Thanathornwong, B.; Suebnukarn, S.; Ouivirach, K. Clinical Decision Support System for Geriatric Dental Treatment Using a Bayesian Network and a Convolutional Neural Network. Healthc. Inform. Res.; 2023; 29, 23. [DOI: https://dx.doi.org/10.4258/hir.2023.29.1.23]
34. Riali, I.; Fareh, M.; Ibnaissa, M.C.; Bellil, M. A semantic-based approach for hepatitis C virus prediction and diagnosis using a fuzzy ontology and a fuzzy Bayesian network. J. Intell. Fuzzy Syst.; 2023; 44, pp. 2381-2395. [DOI: https://dx.doi.org/10.3233/JIFS-213563]
35. Cao, S.; Lingao, W.; Ji, R.; Wang, C.; Yao, L.; Kai, L.; Abdalla, A.N. Clinical Decision Support System Based on KNN/Ontology Extraction Method. Proceedings of the 2020 3rd International Conference on Signal Processing and Machine Learning; Beijing, China, 22–24 October 2020; pp. 56-62. [DOI: https://dx.doi.org/10.1145/3432291.3432305]
36. Abbes, W.; Sellami, D.; Marc-Zwecker, S.; Zanni-Merk, C. Fuzzy decision ontology for melanoma diagnosis using KNN classifier. Multimed. Tools Appl.; 2021; 80, pp. 25517-25538. [DOI: https://dx.doi.org/10.1007/s11042-021-10858-4]
37. Comito, C.; Falcone, D.; Forestiero, A. Diagnosis Detection Support based on Time Series Similarity of Patients Physiological Parameters. Proceedings of the 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI); Beijing, China, 22–24 October 2021; pp. 1327-1331. [DOI: https://dx.doi.org/10.1109/ICTAI52525.2021.00210]
38. Kim, H.J.; Kim, H.J.; Park, Y.; Lee, W.S.; Lim, Y.; Kim, J.H. Clinical genome data model (cGDM) provides interactive clinical decision support for precision medicine. Sci. Rep.; 2020; 10, 1414. [DOI: https://dx.doi.org/10.1038/s41598-020-58088-2] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31996707]
39. DongJin, J. Natural language processing-based evolutionary clinical decision support systems: A case study in glaucoma diagnosis. J. Korean Inst. Commun. Sci.; 2020; 37, pp. 34-39.
40. Musarrat, H. Intelligent Medical Platform: IMP. J. Korean Inst. Commun. Sci.; 2020; 37, 9.
41. Chua, L.O. CNN: A vision of complexity. Int. J. Bifurc. Chaos; 1997; 7, pp. 2219-2425. [DOI: https://dx.doi.org/10.1142/S0218127497001618]
42. Cen, L.; Yu, Z.L.; Kluge, T.; Ser, W. Automatic system for obstructive sleep apnea events detection using convolutional neural network. Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology society (EMBC); Honolulu, HI, USA, 18–21 July 2018; pp. 3975-3978.
43. Dey, D.; Chaudhuri, S.; Munshi, S. Obstructive sleep apnoea detection using convolutional neural network based deep learning framework. Biomed. Eng. Lett.; 2018; 8, pp. 95-100. [DOI: https://dx.doi.org/10.1007/s13534-017-0055-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30603194]
44. Cerqueiro-Pequeño, J.; Casal-Guisande, M.; Comesaña-Campos, A.; Bouza-Rodríguez, J.B. Conceptual Design of a New Methodology Based on Intelligent Systems Applied to the Determination of the User Experience in Ambulances. Proceedings of the Ninth International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM’21); Salamanca, Spain, 19–21 October 2021; pp. 290-296. [DOI: https://dx.doi.org/10.1145/3486011.3486464]
45. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature; 2017; 542, pp. 115-118. [DOI: https://dx.doi.org/10.1038/nature21056] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28117445]
46. Medsker, L.R.; Lakhmi, J. Recurrent neural networks. Des. Appl.; 2001; 5, pp. 64-67.
47. Choi, E.; Schuetz, A.; Stewart, W.F.; Sun, J. Using recurrent neural network models for early detection of heart failure onset. J. Am. Med. Inform. Assoc.; 2017; 24, pp. 361-370. [DOI: https://dx.doi.org/10.1093/jamia/ocw112]
48. Suthaharan, S. Machine learning models and algorithms for big data classification. Integr. Ser. Inf. Syst; 2016; 36, pp. 1-12.
49. Taud, H.; Mas, J.F. Multilayer perceptron (MLP). Geomatic Approaches for Modeling Land Change Scenarios; Springer: Berlin/Heidelberg, Germany, 2018; pp. 451-455. [DOI: https://dx.doi.org/10.1007/978-3-319-60801-3_27]
50. LaValley, M.P. Logistic regression. Circulation; 2020; 117, pp. 2395-2399. [DOI: https://dx.doi.org/10.1161/CIRCULATIONAHA.106.682658]
51. Mahesh, B. Machine learning algorithms—A review. Int. J. Sci. Res. (IJSR); 2020; 9, pp. 381-386. [DOI: https://dx.doi.org/10.21275/ART20203995]
52. Cheng, M.; Sori, W.J.; Jiang, F.; Khan, A.; Liu, S. Recurrent neural network based classification of ECG signal features for obstruction of sleep apnea detection. Proceedings of the 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC); Guangzhou, China, 21–22 July 2017; Volume 2, pp. 199-202. [DOI: https://dx.doi.org/10.1109/CSE-EUC.2017.220]
53. DiPietro, R.; Hager, G.D. Deep learning: RNNs and LSTM. Handbook of Medical Image Computing and Computer Assisted Intervention; Elsevier: Amsterdam, The Netherlands, 2020; pp. 503-519. [DOI: https://dx.doi.org/10.1016/B978-0-12-816176-0.00026-0]
54. Lipton, Z.C.; Kale, D.C.; Elkan, C.; Wetzel, R. Learning to diagnose with LSTM recurrent neural networks. arXiv; 2015; [DOI: https://dx.doi.org/10.48550/arXiv.1511.03677] arXiv: 1511.03677
55. ElMoaqet, H.; Eid, M.; Glos, M.; Ryalat, M.; Penzel, T. Deep recurrent neural networks for automatic detection of sleep apnea from single channel respiration signals. Sensors; 2020; 20, 5037. [DOI: https://dx.doi.org/10.3390/s20185037] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32899819]
56. Lee, K.H.; Goo, J.M.; Park, C.M.; Lee, H.J.; Jin, K.N. Computer-aided detection of malignant lung nodules on chest radiographs: Effect on observers’ performance. Korean J. Radiol.; 2012; 13, 564. [DOI: https://dx.doi.org/10.3348/kjr.2012.13.5.564]
57. Mazzone, P.J.; Obuchowski, N.; Phillips, M.; Risius, B.; Bazerbashi, B.; Meziane, M. Lung cancer screening with computer aided detection chest radiography: Design and results of a randomized, controlled trial. PLoS ONE; 2013; 8, e59650. [DOI: https://dx.doi.org/10.1371/journal.pone.0059650] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23527241]
58. Pande, T.; Cohen, C.; Pai, M.; Ahmad Khan, F. Computer-aided detection of pulmonary tuberculosis on digital chest radiographs: A systematic review. Int. J. Tuberc. Lung Dis.; 2016; 20, pp. 1226-1230. [DOI: https://dx.doi.org/10.5588/ijtld.15.0926] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27510250]
59. Wiering, M.A.; Van Otterlo, M. Reinforcement learning. Adaptation, Learning, and Optimization; Springer: Berlin/Heidelberg, Germany, 2012; Volume 12, 729. [DOI: https://dx.doi.org/10.1007/978-3-642-27645-3]
60. Qiu, X.; Tan, X.; Li, Q.; Chen, S.; Ru, Y.; Jin, Y. A latent batch-constrained deep reinforcement learning approach for precision dosing clinical decision support. Knowl. Based Syst.; 2022; 237, 107689. [DOI: https://dx.doi.org/10.1016/j.knosys.2021.107689]
61. Niraula, D.; Sun, W.; Jin, J.; Dinov, I.D.; Cuneo, K.; Jamaluddin, J.; Matuszak, M.M.; Luo, Y.; Lawrence, T.S.; Jolly, S. et al. A clinical decision support system for AI-assisted decision-making in response-adaptive radiotherapy (ARCliDS). Sci. Rep.; 2023; 13, 5279. [DOI: https://dx.doi.org/10.1038/s41598-023-32032-6]
62. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems; Long Beach, CA, USA, 4–9 December 2017; Volume 30.
63. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv; 2018; [DOI: https://dx.doi.org/10.48550/arXiv.1810.04805] arXiv: 1810.04805
64. Tang, R.; Yao, H.; Zhu, Z.; Sun, X.; Hu, G.; Li, Y.; Xie, G. Embedding electronic health records to learn BERT-based models for diagnostic decision support. Proceedings of the 2021 IEEE 9th International Conference on Healthcare Informatics (ICHI); Victoria, BC, Canada, 9–12 August 2021; pp. 311-319. [DOI: https://dx.doi.org/10.1109/ICHI52183.2021.00055]
65. Gade, K.; Geyik, S.C.; Kenthapadi, K.; Mithal, V.; Taly, A. Explainable AI in Industry. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '19); Anchorage, AK, USA, 4–8 August 2019; pp. 3203-3204. [DOI: https://dx.doi.org/10.1145/3292500.3332281]
66. Hulsen, T. Explainable Artificial Intelligence (XAI): Concepts and Challenges in Healthcare. AI; 2023; 4, pp. 652-666. [DOI: https://dx.doi.org/10.3390/ai4030034]
67. Ali, S.; Abuhmed, T.; El-Sappagh, S.; Muhammad, K.; Alonso-Moral, J.M.; Confalonieri, R.; Guidotti, R.; Del Ser, J.; Díaz-Rodríguez, N.; Herrera, F. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Inf. Fusion; 2023; 99, 101805.
68. Gabbay, F.; Bar-Lev, S.; Montano, O.; Hadad, N. A LIME-Based Explainable Machine Learning Model for Predicting the Severity Level of COVID-19 Diagnosed Patients. Appl. Sci.; 2021; 11, 417. [DOI: https://dx.doi.org/10.3390/app112110417]
69. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should i trust you?” Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; San Francisco, CA, USA, 13–17 August 2016; pp. 1135-1144. [DOI: https://dx.doi.org/10.1145/2939672.2939778]
70. Zafar, M.R.; Khan, N. Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability. Mach. Learn. Knowl. Extr.; 2021; 3, pp. 525-541. [DOI: https://dx.doi.org/10.3390/make3030027]
71. Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access; 2018; 6, pp. 52138-52160. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2870052]
72. Loh, H.W.; Ooi, C.P.; Seoni, S.; Barua, P.D.; Molinari, F.; Acharya, U.R. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022). Comput. Methods Programs Biomed.; 2022; 226, 107161. [DOI: https://dx.doi.org/10.1016/j.cmpb.2022.107161] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36228495]
73. Uddin, M.Z.; Dysthe, K.K.; Følstad, A.; Brandtzaeg, P.B. Deep learning for prediction of depressive symptoms in a large textual dataset. Neural Comput. Appl.; 2022; 34, pp. 721-744. [DOI: https://dx.doi.org/10.1007/s00521-021-06426-4]
74. Magesh, P.R.; Myloth, R.D.; Tom, R.J. An Explainable Machine Learning Model for Early Detection of Parkinson’s Disease using LIME on DaTSCAN Imagery. Comput. Biol. Med.; 2020; 126, 104041. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2020.104041] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33074113]
75. Dindorf, C.; Teufl, W.; Taetz, B.; Bleser, G.; Fröhlich, M. Interpretability of Input Representations for Gait Classification in Patients after Total Hip Arthroplasty. Sensors; 2020; 20, 4385. [DOI: https://dx.doi.org/10.3390/s20164385]
76. Sidulova, M.; Nehme, N.; Park, C.H. Towards Explainable Image Analysis for Alzheimer’s Disease and Mild Cognitive Impairment Diagnosis. Proceedings of the 2021 IEEE Applied Imagery Pattern Recognition Workshop (AIPR); Washington, DC, USA, 12–14 October 2021; pp. 1-6. [DOI: https://dx.doi.org/10.1109/AIPR52630.2021.9762082]
77. Lundberg, S.M.; Lee, S.-I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst.; 2017; 30. [DOI: https://dx.doi.org/10.48550/arXiv.1705.07874]
78. Shi, H.; Yang, D.; Tang, K.; Hu, C.; Li, L.; Zhang, L.; Cui, Y. Explainable machine learning model for predicting the occurrence of postoperative malnutrition in children with congenital heart disease. Clin. Nutr.; 2022; 41, pp. 202-210. [DOI: https://dx.doi.org/10.1016/j.clnu.2021.11.006] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34906845]
79. Chen, H.; Lundberg, S.M.; Erion, G.; Kim, J.H.; Lee, S.I. Forecasting adverse surgical events using self-supervised transfer learning for physiological signals. NPJ Digit. Med.; 2021; 4, 167. [DOI: https://dx.doi.org/10.1038/s41746-021-00536-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34880410]
80. El-Sappagh, S.; Alonso, J.M.; Islam, S.R.; Sultan, A.M.; Kwak, K.S. A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer’s disease. Sci. Rep.; 2021; 11, 2660. [DOI: https://dx.doi.org/10.1038/s41598-021-82098-3] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33514817]
81. Lo, Y.T.; Liao, J.C.H.; Chen, M.H.; Chang, C.M.; Li, C.T. Predictive modeling for 14-day unplanned hospital readmission risk by using machine learning algorithms. BMC Med. Inform. Decis. Mak.; 2021; 21, 288. [DOI: https://dx.doi.org/10.1186/s12911-021-01639-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34670553]
82. Chmiel, F.P.; Burns, D.K.; Azor, M.; Borca, F.; Boniface, M.J.; Zlatev, Z.D.; Kiuber, M. Using explainable machine learning to identify patients at risk of reattendance at discharge from emergency departments. Sci. Rep.; 2021; 11, 21513. [DOI: https://dx.doi.org/10.1038/s41598-021-00937-9] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34728706]
83. Nguyen, S.; Chan, R.; Cadena, J.; Soper, B.; Kiszka, P.; Womack, L.; Ray, P. Budget constrained machine learning for early prediction of adverse outcomes for COVID-19 patients. Sci. Rep.; 2021; 11, 19543. [DOI: https://dx.doi.org/10.1038/s41598-021-98071-z] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34599200]
84. Lu, J.; Jin, R.; Song, E.; Alrashoud, M.; Al-Mutib, K.N.; Al-Rakhami, M.S. An Explainable System for Diagnosis and Prognosis of COVID-19. IEEE Internet Things J.; 2021; 8, pp. 15839-15846. [DOI: https://dx.doi.org/10.1109/JIOT.2020.3037915]
85. Rozenbaum, D.; Shreve, J.; Radakovich, N.; Duggal, A.; Jehi, L.; Nazha, A. Personalized prediction of hospital mortality in COVID-19–positive patients. Mayo Clin. Proc. Innov. Qual. Outcomes; 2021; 5, pp. 795-801. [DOI: https://dx.doi.org/10.1016/j.mayocpiqo.2021.05.001]
86. Alves, M.A.; Castro, G.Z.; Oliveira, B.A.S.; Ferreira, L.A.; Ramírez, J.A.; Silva, R.; Guimarães, F.G. Explaining machine learning based diagnosis of COVID-19 from routine blood tests with decision trees and criteria graphs. Comput. Biol. Med.; 2021; 132, 104335. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2021.104335]
87. Alsinglawi, B.; Alshari, O.; Alorjani, M.; Mubin, O.; Alnajjar, F.; Novoa, M.; Darwish, O. An explainable machine learning framework for lung cancer hospital length of stay prediction. Sci. Rep.; 2022; 12, 607. [DOI: https://dx.doi.org/10.1038/s41598-021-04608-7]
88. Hu, C.A.; Chen, C.M.; Fang, Y.C.; Liang, S.J.; Wang, H.C.; Fang, W.F.; Sheu, C.C.; Perng, W.C.; Yang, K.Y.; Kao, K.C. et al. Using a machine learning approach to predict mortality in critically ill influenza patients: A cross-sectional retrospective multicentre study in Taiwan. BMJ Open; 2020; 10, e033898. [DOI: https://dx.doi.org/10.1136/bmjopen-2019-033898] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32102816]
89. Foroushani, H.M.; Hamzehloo, A.; Kumar, A.; Chen, Y.; Heitsch, L.; Slowik, A.; Dhar, R. Accelerating prediction of malignant cerebral edema after ischemic stroke with automated image analysis and explainable neural networks. Neurocrit. Care; 2022; 36, pp. 471-482. [DOI: https://dx.doi.org/10.1007/s12028-021-01325-x]
90. Mahim, S.M.; Ali, M.S.; Hasan, M.O.; Nafi, A.A.N.; Sadat, A.; Al Hasan, S.; Niu, M.B. Unlocking the Potential of XAI for Improved Alzheimer’s Disease Detection and Classification Using a ViT-GRU Model. IEEE Access; 2024; 12, pp. 8390-8412. [DOI: https://dx.doi.org/10.1109/ACCESS.2024.3351809]
91. Ghosh, S.K.; Khandoker, A.H. Investigation on explainable machine learning models to predict chronic kidney diseases. Sci. Rep.; 2024; 14, 3687. [DOI: https://dx.doi.org/10.1038/s41598-024-54375-4]
92. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv; 2018; arXiv: 1804.02767
93. Zhuang, P.; Wang, Y.; Qiao, Y. Learning Attentive Pairwise Interaction for Fine-Grained Classification. Proceedings of the AAAI Conference on Artificial Intelligence; New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13130-13137. [DOI: https://dx.doi.org/10.1609/aaai.v34i07.7016]
94. Raksasat, R.; Teerapittayanon, S.; Itthipuripat, S.; Praditpornsilpa, K.; Petchlorlian, A.; Chotibut, T.; Chatnuntawech, I. Attentive pairwise interaction network for AI-assisted clock drawing test assessment of early visuospatial deficits. Sci. Rep.; 2023; 13, 18113. [DOI: https://dx.doi.org/10.1038/s41598-023-44723-1] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37872267]
95. Nambiar, A. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data. Front. Artif. Intell.; 2023; 6, 1272506. [DOI: https://dx.doi.org/10.3389/frai.2023.1272506] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38111787]
96. Hu, Z.F.; Kuflik, T.; Mocanu, I.G.; Najafian, S.; Shulner Tal, A. Recent studies of xai-review. Proceedings of the Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization; Utrecht, The Netherlands, 21–25 June 2021; pp. 421-431. [DOI: https://dx.doi.org/10.1145/3450614.3463354]
97. Tan, M.; Le, Q.V. Efficientnet: Rethinking model scaling for convolutional neural networks. Proceedings of the International Conference on Machine Learning, PMLR; Long Beach, CA, USA, 9–15 June 2019; pp. 6105-6114. [DOI: https://dx.doi.org/10.48550/arXiv.1905.11946]
98. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA, 21–26 July 2017; pp. 4700-4708. [DOI: https://dx.doi.org/10.1109/CVPR.2017.243]
99. Ozbay, F.A.; Ozbay, E. Brain tumor detection with mRMR-based multimodal fusion of deep learning from MR images using Grad-CAM. Iran J. Comput. Sci.; 2023; 6, pp. 245-259. [DOI: https://dx.doi.org/10.1007/s42044-023-00137-w]
100. Haque, R.; Hassan, M.M.; Bairagi, A.K.; Shariful Islam, S.M. NeuroNet19: An explainable deep neural network model for the classification of brain tumors using magnetic resonance imaging data. Sci. Rep.; 2024; 14, 1524. [DOI: https://dx.doi.org/10.1038/s41598-024-51867-1] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38233516]
101. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision; Venice, Italy, 22–29 October 2017; pp. 618-626. [DOI: https://dx.doi.org/10.1109/ICCV.2017.74]
102. Jahmunah, V.; Ng, E.Y.; Tan, R.S.; Oh, S.L.; Acharya, U.R. Explainable detection of myocardial infarction using deep learning models with Grad-CAM technique on ECG signals. Comput. Biol. Med.; 2022; 146, 105550. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2022.105550]
103. Figueroa, K.C.; Song, B.; Sunny, S.; Li, S.; Gurushanth, K.; Mendonca, P.; Liang, R. Interpretable deep learning approach for oral cancer classification using guided attention inference network. J. Biomed. Opt.; 2022; 27, 015001. [DOI: https://dx.doi.org/10.1117/1.JBO.27.1.015001] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35023333]
104. Chang, J.; Lee, J.; Ha, A.; Han, Y.S.; Bak, E.; Choi, S.; Park, S.M. Explaining the Rationale of Deep Learning Glaucoma Decisions with Adversarial Examples. Ophthalmology; 2021; 128, pp. 78-88. [DOI: https://dx.doi.org/10.1016/j.ophtha.2020.06.036] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32598951]
105. Qian, X.; Pei, J.; Zheng, H.; Xie, X.; Yan, L.; Zhang, H.; Shung, K.K. Prospective assessment of breast cancer risk from multimodal multiview ultrasound images via clinically applicable deep learning. Nat. Biomed. Eng.; 2021; 5, pp. 522-532. [DOI: https://dx.doi.org/10.1038/s41551-021-00711-2] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33875840]
106. Singh, R.K.; Rohan, P.; Rishie, N.B. COVIDScreen: Explainable deep learning framework for differential diagnosis of COVID-19 using chest X-rays. Neural Comput. Appl.; 2021; 33, pp. 8871-8892. [DOI: https://dx.doi.org/10.1007/s00521-020-05636-6] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33437132]
107. Thakoor, K.A.; Koorathota, S.C.; Hood, D.C.; Sajda, P. Robust and interpretable convolutional neural networks to detect glaucoma in optical coherence tomography images. IEEE Trans. Biomed. Eng.; 2020; 68, pp. 2456-2466. [DOI: https://dx.doi.org/10.1109/TBME.2020.3043215] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33290209]
108. Xu, F.; Jiang, L.; He, W.; Huang, G.; Hong, Y.; Tang, F.; Lv, J.; Lin, Y.; Qin, Y.; Lan, R. et al. The Clinical Value of Explainable Deep Learning for Diagnosing Fungal Keratitis Using in vivo Confocal Microscopy Images. Front. Med.; 2021; 8, 797616. [DOI: https://dx.doi.org/10.3389/fmed.2021.797616] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34970572]
109. Chetoui, M.; Akhloufi, M.A.; Yousefi, B.; Bouattane, E.M. Explainable COVID-19 Detection on Chest X-rays Using an End-to-End Deep Convolutional Neural Network Architecture. Big Data Cogn. Comput.; 2021; 5, 73. [DOI: https://dx.doi.org/10.3390/bdcc5040073]
110. Liu, S.C.; Lai, J.; Huang, J.Y.; Cho, C.F.; Lee, P.H.; Lu, M.H.; Lin, W.C. Predicting microvascular invasion in hepatocellular carcinoma: A deep learning model validated across hospitals. Cancer Imaging; 2021; 21, 56. [DOI: https://dx.doi.org/10.1186/s40644-021-00425-3] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34627393]
111. Hou, J.; Gao, T. Explainable DCNN based chest X-ray image analysis and classification for COVID-19 pneumonia detection. Sci. Rep.; 2021; 11, 16071. [DOI: https://dx.doi.org/10.1038/s41598-021-95680-6]
112. Lamy, J.B.; Sekar, B.; Guezennec, G.; Bouaud, J.; Séroussi, B. Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach. Artif. Intell. Med.; 2019; 94, pp. 42-53. [DOI: https://dx.doi.org/10.1016/j.artmed.2019.01.001]
113. Jin, W.; Fatehi, M.; Abhishek, K.; Mallya, M.; Toyota, B.; Hamarneh, G. Artificial intelligence in glioma imaging: Challenges and advances. J. Neural Eng.; 2020; 17, 021002. [DOI: https://dx.doi.org/10.1088/1741-2552/ab8131] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32191935]
114. Gunning, D.; David, A. DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Mag.; 2019; 40, pp. 44-58. [DOI: https://dx.doi.org/10.1609/aimag.v40i2.2850]
115. Woodfield, R.; Grant, I.; UK Biobank Stroke Outcomes Group; UK Biobank Follow-Up and Outcomes Working Group; Sudlow, C.L. Accuracy of electronic health record data for identifying stroke cases in large-scale epidemiological studies: A systematic review from the UK Biobank Stroke Outcomes Group. PLoS ONE; 2015; 10, e0140533. [DOI: https://dx.doi.org/10.1371/journal.pone.0140533] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26496350]
116. Chen, Z.; Chen, J.; Collins, R.; Guo, Y.; Peto, R.; Wu, F.; Li, L. China Kadoorie Biobank of 0.5 million people: Survey methods, baseline characteristics and long-term follow-up. Int. J. Epidemiol.; 2011; 40, pp. 1652-1666. [DOI: https://dx.doi.org/10.1093/ije/dyr120] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22158673]
117. Nagai, A.; Hirata, M.; Kamatani, Y.; Muto, K.; Matsuda, K.; Kiyohara, Y.; Ninomiya, T.; Tamakoshi, A.; Yamagata, Z.; Mushiroda, T. et al. Overview of the BioBank Japan Project: Study design and profile. J. Epidemiol.; 2017; 27, pp. S2-S8. [DOI: https://dx.doi.org/10.1016/j.je.2016.12.005] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28189464]
118. Johnson, A.E.; Pollard, T.J.; Berkowitz, S.J.; Greenbaum, N.R.; Lungren, M.P.; Deng, C.-Y.; Mark, R.G.; Horng, S. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Sci. Data; 2019; 6, 317. [DOI: https://dx.doi.org/10.1038/s41597-019-0322-0] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31831740]
119. Mueller, S.G.; Weiner, M.W.; Thal, L.J.; Petersen, R.C.; Jack, C.; Jagust, W.; Trojanowski, J.Q.; Toga, A.W.; Beckett, L. The Alzheimer’s disease neuroimaging initiative. Neuroimaging Clin. N. Am.; 2005; 15, 869. [DOI: https://dx.doi.org/10.1016/j.nic.2005.09.008] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16443497]
120. Berle, J.O.; Hauge, E.R.; Oedegaard, K.J.; Holsten, F.; Fasmer, O.B. Actigraphic registration of motor activity reveals a more structured behavioural pattern in schizophrenia than in major depression. BMC Res. Notes; 2010; 3, 149. [DOI: https://dx.doi.org/10.1186/1756-0500-3-149] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/20507606]
121. Bien, N.; Rajpurkar, P.; Ball, R.L.; Irvin, J.; Park, A.; Jones, E.; Bereket, M.; Patel, B.N.; Yeom, K.W.; Shpanskaya, K. et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet. PLoS Med.; 2018; 15, e1002699. [DOI: https://dx.doi.org/10.1371/journal.pmed.1002699]
122. Shih, G.; Wu, C.C.; Halabi, S.S.; Kohli, M.D.; Prevedello, L.M.; Cook, T.S.; Sharma, A.; Amorosa, J.K.; Arteaga, V.; Galperin-Aizenberg, M. et al. Augmenting the national institutes of health chest radiograph dataset with expert annotations of possible pneumonia. Radiol. Artif. Intell.; 2019; 1, e180041. [DOI: https://dx.doi.org/10.1148/ryai.2019180041]
123. Demner-Fushman, D.; Kohli, M.D.; Rosenman, M.B.; Shooshan, S.E.; Rodriguez, L.; Antani, S.; Thoma, G.R.; McDonald, C.J. Preparing a collection of radiology examinations for distribution and retrieval. J. Am. Med. Inform. Assoc.; 2015; 23, pp. 304-310. [DOI: https://dx.doi.org/10.1093/jamia/ocv080] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26133894]
124. Marcus, D.S.; Wang, T.H.; Parker, J.; Csernansky, J.G.; Morris, J.C.; Buckner, R.L. Open Access Series of Imaging Studies (OASIS): Cross-sectional MRI Data in Young, Middle Aged, Nondemented, and Demented Older Adults. J. Cogn. Neurosci.; 2007; 19, pp. 1498-1507. [DOI: https://dx.doi.org/10.1162/jocn.2007.19.9.1498] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17714011]
125. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-Ray8: Hospital-Scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Los Alamitos, CA, USA, 21–26 July 2017; pp. 3462-3471. [DOI: https://dx.doi.org/10.1109/CVPR.2017.369]
126. Armato III, S.G.; McLennan, G.; Bidaut, L.; McNitt-Gray, M.F.; Meyer, C.R.; Reeves, A.P.; Zhao, B.; Aberle, D.R.; Henschke, C.I.; Hoffman, E.A. et al. The lung image database consortium (LIDC) and image database resource initiative (IDRI): A completed reference database of lung nodules on CT scans. Med. Phys.; 2011; 38, pp. 915-931. [DOI: https://dx.doi.org/10.1118/1.3528204] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21452728]
127. Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M. et al. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository. J. Digit. Imaging; 2013; 26, pp. 1045-1057. [DOI: https://dx.doi.org/10.1007/s10278-013-9622-7] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23884657]
128. Johnson, A.E.W.; Pollard, T.J.; Shen, L.; Lehman, L.H.; Feng, M.L.; Ghassemi, M.; Moody, B.; Szolovits, P.; Anthony Celi, L.; Mark, R.G. MIMIC-III, a freely accessible critical care database. Sci. Data; 2016; 3, 160035. [DOI: https://dx.doi.org/10.1038/sdata.2016.35] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27219127]
129. Pollard, T.J.; Johnson, A.E.; Raffa, J.D.; Celi, L.A.; Mark, R.G.; Badawi, O. The eICU Collaborative Research Database, a freely available multi-center database for critical care research. Sci. Data; 2018; 5, 180178. [DOI: https://dx.doi.org/10.1038/sdata.2018.178] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30204154]
130. Uzuner, O.; South, B.R.; Shen, S.; DuVall, S.L. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. J. Am. Med. Inform. Assoc.; 2011; 18, pp. 552-556. [DOI: https://dx.doi.org/10.1136/amiajnl-2011-000203] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21685143]
131. Sun, W.; Anna, R.; Ozlem, U. Evaluating temporal relations in clinical text: 2012 i2b2 Challenge. J. Am. Med. Inform. Assoc.; 2013; 20, pp. 806-813. [DOI: https://dx.doi.org/10.1136/amiajnl-2013-001628] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23564629]
132. Stubbs, A.; Uzuner, Ö. Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/UTHealth corpus. J. Biomed. Inform.; 2015; 58, pp. S20-S29. [DOI: https://dx.doi.org/10.1016/j.jbi.2015.07.020]
133. Yang, X.; Bian, J.; Fang, R.; Bjarnadottir, R.I.; Hogan, W.R.; Wu, Y. Identifying relations of medications with adverse drug events using recurrent convolutional neural networks and gradient boosting. J. Am. Med. Inform. Assoc.; 2019; 27, pp. 65-72. [DOI: https://dx.doi.org/10.1093/jamia/ocz144]
134. Tomczak, K.; Czerwińska, P.; Wiznerowicz, M. The Cancer Genome Atlas (TCGA): An immeasurable source of knowledge. Contemp. Oncol./Współczesna Onkol.; 2015; 19, pp. A68-A77. [DOI: https://dx.doi.org/10.5114/wo.2014.47136] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25691825]
135. De Herrera, A.G.S.; Schaer, R.; Bromuri, S.; Müller, H. Overview of the medical tasks in ImageCLEF 2016. Proceedings of the CLEF Working Notes; Évora, Portugal, 5–8 September 2016.
136. Demner-Fushman, D.; Antani, S.; Simpson, M.; Thoma, G.R. Design and development of a multimodal biomedical information retrieval system. J. Comput. Sci. Eng.; 2012; 6, pp. 168-177. [DOI: https://dx.doi.org/10.5626/JCSE.2012.6.2.168]
137. Goldberger, A.L.; Amaral, L.A.N.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.-K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation; 2000; 101, pp. e215-e220. [DOI: https://dx.doi.org/10.1161/01.CIR.101.23.e215]
138. Li, B.; Weng, Y.; Xia, F.; Sun, B.; Li, S. VPAI_LAB at MedVidQA 2022: A two-stage cross-modal fusion method for medical instructional video classification. Proceedings of the 21st Workshop on Biomedical Language Processing; Dublin, Ireland, 26 May 2022; pp. 212-219.
139. Lucey, P.; Cohn, J.F.; Prkachin, K.M.; Solomon, P.E.; Matthews, I. Painful data: The UNBC-McMaster shoulder pain expression archive database. Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG); Santa Barbara, CA, USA, 21–25 March 2011; pp. 57-64. [DOI: https://dx.doi.org/10.1109/FG.2011.5771462]
140. Gupta, D.; Attal, K.; Demner-Fushman, D. A dataset for medical instructional video classification and question answering. Sci. Data; 2023; 10, 158. [DOI: https://dx.doi.org/10.1038/s41597-023-02036-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36949119]
141. Wishart, D.S.; Feunang, Y.D.; Guo, A.C.; Lo, E.J.; Marcu, A.; Grant, J.R.; Sajed, T.; Johnson, D.; Li, C.; Sayeeda, Z. et al. DrugBank 5.0: A major update to the DrugBank database for 2018. Nucleic Acids Res.; 2018; 46, pp. D1074-D1082. [DOI: https://dx.doi.org/10.1093/nar/gkx1037]
142. Bizon, C.; Cox, S.; Balhoff, J.; Kebede, Y.; Wang, P.; Morton, K.; Fecho, K.; Tropsha, A. ROBOKOP KG and KGB: Integrated knowledge graphs from federated sources. J. Chem. Inf. Model.; 2019; 59, pp. 4968-4973. [DOI: https://dx.doi.org/10.1021/acs.jcim.9b00683]
143. Ernst, P.; Meng, C.; Siu, A.; Weikum, G. KnowLife: A knowledge graph for health and life sciences. Proceedings of the 2014 IEEE 30th International Conference on Data Engineering; Chicago, IL, USA, 31 March–4 April 2014; pp. 1254-1257. [DOI: https://dx.doi.org/10.1109/ICDE.2014.6816754]
144. Schriml, L.M.; Arze, C.; Nadendla, S.; Chang, Y.W.W.; Mazaitis, M.; Felix, V.; Feng, G.; Kibbe, W.A. Disease Ontology: A backbone for disease semantic integration. Nucleic Acids Res.; 2011; 40, pp. D940-D946. [DOI: https://dx.doi.org/10.1093/nar/gkr972] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22080554]
145. Su, C.; Hou, Y.; Rajendran, S.; Maasch, J.R.M.A.; Abedi, Z.; Zhang, H.T.; Bai, Z.L.; Cuturrufo, A.; Guo, W.; Chaudhry, F.F. et al. Biomedical discovery through the integrative biomedical knowledge hub (iBKH). iScience; 2023; 26, 106460. [DOI: https://dx.doi.org/10.1016/j.isci.2023.106460]
146. Ashburner, M.; Ball, C.A.; Blake, J.A.; Botstein, D.; Butler, H.; Cherry, J.M.; Davis, A.P.; Dolinski, K.; Dwight, S.S.; Eppig, J.T. et al. Gene ontology: Tool for the unification of biology. Nat. Genet.; 2000; 25, pp. 25-29. [DOI: https://dx.doi.org/10.1038/75556]
147. Ranjan, A.; Shukla, S.; Datta, D.; Misra, R. Generating novel molecule for target protein (SARS-CoV-2) using drug–target interaction based on graph neural network. Netw. Model. Anal. Health Inform. Bioinform.; 2022; 11, 6. [DOI: https://dx.doi.org/10.1007/s13721-021-00351-1]
148. Himmelstein, D.S.; Baranzini, S.E. Heterogeneous network edge prediction: A data integration approach to prioritize disease-associated genes. PLoS Comput. Biol.; 2015; 11, e1004259. [DOI: https://dx.doi.org/10.1371/journal.pcbi.1004259] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26158728]
149. Li, B.; Chen, H. Prediction of compound synthesis accessibility based on reaction knowledge graph. Molecules; 2022; 27, 1039. [DOI: https://dx.doi.org/10.3390/molecules27031039] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35164303]
150. Bock, G.; Goode, J.A. The KEGG database. ‘In Silico’ Simulation of Biological Processes: Novartis Foundation Symposium 247; Wiley Online Library: Hoboken, NJ, USA, 2002; Volume 247, pp. 91-103. [DOI: https://dx.doi.org/10.1002/0470857897.ch8]
151. Jeong, J.; Lee, N.; Shin, Y.; Shin, D. Intelligent generation of optimal synthetic pathways based on knowledge graph inference and retrosynthetic predictions using reaction big data. J. Taiwan Inst. Chem. Eng.; 2022; 130, 103982. [DOI: https://dx.doi.org/10.1016/j.jtice.2021.07.015]
152. Zhang, R.; Hristovski, D.; Schutte, D.; Kastrin, A.; Fiszman, M.; Kilicoglu, H. Drug repurposing for COVID-19 via knowledge graph completion. J. Biomed. Inform.; 2021; 115, 103696. [DOI: https://dx.doi.org/10.1016/j.jbi.2021.103696] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33571675]
153. Serra, L.M.; Duncan, W.D.; Diehl, A.D. An ontology for representing hematologic malignancies: The cancer cell ontology. BMC Bioinform.; 2019; 20, pp. 231-236. [DOI: https://dx.doi.org/10.1186/s12859-019-2722-8] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31272372]
154. Gao, Z.; Ding, P.; Xu, R. KG-Predict: A knowledge graph computational framework for drug repurposing. J. Biomed. Inform.; 2022; 132, 104133. [DOI: https://dx.doi.org/10.1016/j.jbi.2022.104133] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35840060]
155. Chandak, P.; Huang, K.; Zitnik, M. Building a knowledge graph to enable precision medicine. Sci. Data; 2023; 10, 67. [DOI: https://dx.doi.org/10.1038/s41597-023-01960-3] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36732524]
156. Zheng, S.; Rao, J.; Song, Y.; Zhang, J.; Xiao, X.; Fang, E.F.; Yang, Y.; Niu, Z. PharmKG: A dedicated knowledge graph benchmark for biomedical data mining. Brief. Bioinform.; 2021; 22, bbaa344. [DOI: https://dx.doi.org/10.1093/bib/bbaa344] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33341877]
157. Zhang, X.; Che, C. Drug repurposing for Parkinson’s disease by integrating knowledge graph completion model and knowledge fusion of medical literature. Future Internet; 2021; 13, 14. [DOI: https://dx.doi.org/10.3390/fi13010014]
158. Wang, X.; Chen, G.; Qian, G.; Gao, P.; Wei, X.Y.; Wang, Y.W.; Tian, Y.H.; Gao, W. Large-scale multi-modal pre-trained models: A comprehensive survey. Mach. Intell. Res.; 2023; 20, pp. 447-482. [DOI: https://dx.doi.org/10.1007/s11633-022-1410-8]
159. Du, C.D.; Du, C.Y.; He, H.G. Multimodal deep generative adversarial models for scalable doubly semi-supervised learning. Inf. Fusion; 2021; 68, pp. 118-130. [DOI: https://dx.doi.org/10.1016/j.inffus.2020.11.003]
160. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. Imagenet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition; Miami, FL, USA, 20–25 June 2009; pp. 248-255. [DOI: https://dx.doi.org/10.1109/CVPR.2009.5206848]
161. Zhou, T.; Li, Q.; Lu, H.; Cheng, Q.; Zhang, X. GAN review: Models and medical image fusion applications. Inf. Fusion; 2023; 91, pp. 134-148. [DOI: https://dx.doi.org/10.1016/j.inffus.2022.10.017]
162. Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv; 2013; [DOI: https://dx.doi.org/10.48550/arXiv.1312.6114] arXiv: 1312.6114
163. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv; 2020; [DOI: https://dx.doi.org/10.48550/arXiv.2010.11929] arXiv: 2010.11929
164. Akbari, H.; Yuan, L.; Qian, R.; Chuang, W.H.; Chang, S.F.; Cui, Y.; Gong, B. Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. Adv. Neural Inf. Process. Syst.; 2021; 34, pp. 24206-24221. [DOI: https://dx.doi.org/10.48550/arXiv.2104.11178]
165. Lin, Y.S.; Lee, W.C.; Celik, Z.B. What do you see? Evaluation of explainable artificial intelligence (XAI) interpretability through neural backdoors. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining; Virtual, 14–18 August 2021; pp. 1027-1035. [DOI: https://dx.doi.org/10.1145/3447548.3467213]
166. Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-CAM: Why did you say that? arXiv; 2016; [DOI: https://dx.doi.org/10.48550/arXiv.1611.07450] arXiv: 1611.07450
167. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv; 2013; [DOI: https://dx.doi.org/10.48550/arXiv.1312.6034] arXiv: 1312.6034
168. Springenberg, J.T.; Dosovitskiy, A.; Brox, T.; Riedmiller, M. Striving for simplicity: The all convolutional net. arXiv; 2014; [DOI: https://dx.doi.org/10.48550/arXiv.1412.6806] arXiv: 1412.6806
169. Shrikumar, A.; Greenside, P.; Kundaje, A. Learning important features through propagating activation differences. Proceedings of the International Conference on Machine Learning (PMLR); Sydney, Australia, 6–11 August 2017; pp. 3145-3153. [DOI: https://dx.doi.org/10.48550/arXiv.1704.02685]
170. Shrikumar, A.; Greenside, P.; Shcherbina, A.; Kundaje, A. Not just a black box: Learning important features through propagating activation differences. arXiv; 2016; [DOI: https://dx.doi.org/10.48550/arXiv.1605.01713] arXiv: 1605.01713
171. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. Proceedings of the Computer Vision–ECCV 2014: 13th European Conference; Zurich, Switzerland, 6–12 September 2014; Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2014; pp. 818-833. [DOI: https://dx.doi.org/10.1007/978-3-319-10590-1_53]
172. Smilkov, D.; Thorat, N.; Kim, B.; Viégas, F.; Wattenberg, M. Smoothgrad: Removing noise by adding noise. arXiv; 2017; [DOI: https://dx.doi.org/10.48550/arXiv.1706.03825] arXiv: 1706.03825
173. Castro, J.; Gómez, D.; Tejada, J. Polynomial calculation of the Shapley value based on sampling. Comput. Oper. Res.; 2009; 36, pp. 1726-1730. [DOI: https://dx.doi.org/10.1016/j.cor.2008.04.004]
174. Fisher, A.; Rudin, C.; Dominici, F. All models are wrong, but many are useful: Learning a variable’s importance by studying an entire class of prediction models simultaneously. J. Mach. Learn. Res.; 2019; 20, pp. 1-81. [DOI: https://dx.doi.org/10.48550/arXiv.1801.01489]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
With the growth of electronic medical data and the development of artificial intelligence, clinical decision support systems (CDSSs) assist clinicians in diagnosis and prescription. Traditional knowledge-based CDSSs follow an accumulated medical knowledge base and a predefined rule system, which makes the decision-making process transparent; however, they incur maintenance costs in medical data quality control and standardization. Non-knowledge-based CDSSs use vast amounts of data and algorithms to make decisions effectively; however, the black-box nature of deep learning makes their results difficult to trust. eXplainable Artificial Intelligence (XAI)-based CDSSs provide valid rationales and explainable results. These systems ensure trustworthiness and transparency by exposing the process behind recommendations and predictions using explainable techniques. However, existing systems have limitations, such as the narrow scope of data utilization and the limited explanatory power of AI models. This study proposes a new XAI-based CDSS framework to address these issues; introduces resources, datasets, and models that can be utilized; and provides a foundation model to support decision-making in various disease domains. Finally, we propose future directions for CDSS technology and highlight societal issues that must be addressed to realize the potential of CDSSs.
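To make the notion of an "explainable technique" concrete: one model-agnostic way a CDSS can attribute a prediction to individual patient features is a Monte Carlo approximation of Shapley values over feature orderings, in the spirit of Castro et al. [173]. The sketch below is purely illustrative (the `predict` model, feature values, and baseline are invented for the example, not taken from any system described here); it credits each feature with its average marginal effect on the model output.

```python
import random

def shapley_sampling(predict, x, baseline, n_samples=200, seed=0):
    """Approximate Shapley values for the features of input x by
    sampling random feature orderings and averaging each feature's
    marginal contribution to the prediction."""
    rng = random.Random(seed)
    n = len(x)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        # Start from the baseline and switch features to their actual
        # values one by one; the change in the model output is credited
        # to the feature that was just added.
        current = list(baseline)
        prev = predict(current)
        for i in order:
            current[i] = x[i]
            cur = predict(current)
            phi[i] += cur - prev
            prev = cur
    return [p / n_samples for p in phi]

# Toy "risk model": a linear score, for which the exact Shapley value
# of feature i is w[i] * (x[i] - baseline[i]).
w = [2.0, -1.0, 0.5]
predict = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x, baseline = [1.0, 2.0, 4.0], [0.0, 0.0, 0.0]
print(shapley_sampling(predict, x, baseline))  # → [2.0, -2.0, 2.0]
```

Because the toy model is linear, every sampled ordering yields the same marginal contributions, so the estimate is exact here; for a nonlinear clinical model the attributions would converge only as `n_samples` grows. A useful sanity check is the efficiency property: the attributions sum to `predict(x) - predict(baseline)`.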
Details

1 Department of Nursing, Changwon National University, Changwon-si 51140, Republic of Korea;
2 School of Computing, Gachon University, Seongnam-si 13120, Republic of Korea;