1. Introduction
Over the past decade, machine learning (ML) and deep learning (DL) have emerged as transformative technologies with significant impacts across multiple scientific disciplines. These techniques, as branches of artificial intelligence (AI), have become essential tools for analyzing large volumes of data, identifying complex patterns, and providing innovative solutions to previously intractable problems. Their relevance has been particularly emphasized in health sciences, including medicine, biology, and, more recently, dentistry, where their application holds the potential to optimize diagnostics, personalize treatments, and improve clinical outcomes.
ML can be defined as a subfield of AI that employs algorithms to enable machines to learn patterns and behaviors from data without being explicitly programmed for each specific task (Samuel, 1959 [1]). Within this field, DL represents a significant evolution, utilizing artificial neural networks with multiple layers capable of learning hierarchical data representations. These deep architectures have proven especially effective in tasks involving large datasets and complex features, such as medical image interpretation, disease prediction, and biological signal analysis [2].
The current utility of ML and DL in health sciences is demonstrated in a wide range of applications, including but not limited to medical image analysis, genomic data mining, drug modeling, and clinical outcome prediction. For instance, in radiology, convolutional neural networks (CNNs) have been successfully employed to detect tumor lesions in computed tomography (CT) and magnetic resonance imaging (MRI) scans, achieving accuracy levels comparable to those of human experts [3]. In genomics, DL algorithms have facilitated the decoding of complex gene interactions, accelerating the development of personalized therapies [4].
In the field of dentistry, the impact of ML and DL is beginning to solidify with promising applications. Dentistry, as a health science, has undergone significant digital transformation in recent years, driven by technologies such as cone-beam computed tomography (CBCT), three-dimensional (3D) printing, and CAD/CAM systems. However, the integration of ML and DL in this domain has opened new avenues for diagnosis, treatment design, and disease monitoring. For example, DL algorithms have been effective in identifying dental caries [5], fractures, periodontal diseases [6], and periapical conditions from digital radiographs, enhancing diagnostic accuracy and reducing variability among professionals [7].
Beyond diagnostics, these technologies are beginning to influence treatment planning and execution. In orthodontics, for example, ML is used to predict tooth movement and optimize the placement of brackets or aligners, leading to more effective and personalized treatments [8]. In implantology, ML models assist in predicting dental implant stability over time by considering factors such as bone density, implant location, and patient characteristics [9]. In endodontics, AI supports professionals by detecting periapical lesions, identifying root fractures, analyzing root canal morphology, predicting retreatment needs, and aiding in regenerative pulpal therapy, all of which contribute to improved diagnostics, treatment planning, and patient care [10,11,12,13,14,15,16].
The future of ML and DL in health sciences, particularly in dentistry, promises to be even more revolutionary. It is anticipated that the combination of these technologies with advanced sensor systems and data from wearable devices will enable continuous, real-time monitoring of oral health. For example, the integration of DL with intraoral connected devices could facilitate the early detection of diseases such as oral cancer through the analysis of salivary biomarkers or intraoral images [17]. Additionally, the development of explainable AI (XAI) systems could address one of the most pressing current challenges: the need to provide clear and interpretable explanations of algorithm-generated predictions, fostering clinical acceptance and ethical use [18].
Despite these promises, the implementation of ML and DL in clinical practice faces significant challenges that must be addressed to ensure success. These include the need for large volumes of high-quality data for model training, ethical and legal concerns related to data privacy, and the necessity of educating healthcare professionals in the use of these technologies. These challenges highlight the importance of interdisciplinary collaboration involving researchers, technology developers, clinicians, and policymakers [19,20].
2. Study Objectives
To evaluate DL as an additional variable in an ML study for predicting the outcomes of non-surgical root canal treatments (NSRCTs) in cases of apical periodontitis (AP). This study aims to determine the extent to which deep neural networks can predict the outcome of NSRCTs in teeth with AP using digital periapical radiographs of confirmed AP diagnoses.
3. Materials and Methods
3.1. Sample Selection
A retrospective study was conducted based on the analysis of clinical records of patients with AP who underwent NSRCTs for the first time (not retreatments). Cases were randomly selected from the database of a private clinic in Mallorca, Spain. Only patients without reported systemic diseases [21] who received treatment for the first time and whose records included the following were included:
A comprehensive medical and dental history with general, facial, and oral inspection reports, as well as dental percussion and palpation examinations.
Results of complementary tests, such as thermal sensitivity testing using an ice pencil and periapical radiography.
A follow-up period of at least nine years, starting six months after treatment, with documented evaluations of lesion recovery, categorizing cases as successful (0: no symptoms or indications for further treatment, and the lesion resolved after NSRCT) or failed (1: either the clinical or the radiographic outcome was unsatisfactory).
Radiographs were acquired using an X Mind Unity Acteon Satelec system with a focal point of 0.4 mm, at 70 kV and 7 mA, employing a Carestream 6100 digital system with a resolution of 15 LP/mm. The bisecting angle technique was used with a Rinn XCD (Dentsply) positioner (Dentsply, Charlotte, NC, USA) [22]. Patients with vertical root fractures or teeth without sufficient ferrule structure for subsequent restoration were excluded.
After this filtering process, the final number of patients included in this study was 119. Patient consent was waived due to the inability to identify participants in the database. The Research Ethics Committee of the Balearic Islands (IB4015/19IP) approved this study.
3.2. Intervention Procedure
The 119 patients with confirmed AP, for whom eight preoperative domain variables were observed as per a recommended data collection template (DCT) for endodontic treatment evaluation studies [23,24,25], underwent standardized endodontic treatment performed by the same endodontist using identical materials and procedures. The following phases were followed:
Local anesthesia administration and rubber dam placement.
Chamber access and pre-enlargement of the coronal third, followed by apical third negotiation.
Working length determination using a Morita apex locator (Morita, Tokyo, Japan) and radiographic confirmation. The working length was always set at the radiographic apex level.
Instrumentation with K3 (SybronEndo, Orange, CA, USA) and Protaper Gold (Dentsply Maillefer, Woodbridge, ON, USA) rotary systems, complemented with manual instruments.
Irrigation with EDTA and 5.25% sodium hypochlorite.
Obturation using the warm vertical condensation technique with AH Plus sealer.
Following treatment completion, cases were radiographically evaluated to rule out overfills or obturation defects.
3.3. Machine Learning and Deep Learning Analysis
To compare DL with ML models, a previous study, “Second Opinion for NSRCT Prognosis Using Machine Learning Models” [26], was utilized, in which logistic regression (LR), random forest (RF), naive Bayes (NB), and k-nearest neighbor (KNN) algorithms were applied. The RF algorithm demonstrated the best performance.
All periapical radiographs used for DL model training (108 in total; 11 were excluded due to geometric distortion or anatomical noise) were preoperative images obtained prior to NSRCT, labeled based on post-treatment follow-up outcomes (healed: 0/not healed: 1) after a minimum of nine years (Figure 1 and Figure 2).
The AnotIA software was utilized for the precise segmentation of diagnostic 2D periapical radiographic images of AP, assigning labels to facilitate subsequent analysis (Figure 3). Although the segmented regions produced by AnotIA were not used as direct input for the DL model (i.e., no semantic segmentation masks were provided to the network), these annotations were employed during model development and testing to ensure that the network’s focus aligned with clinically relevant areas. Specifically, the marked regions of apical lesions were used to visually confirm that the model’s predictions and activation maps corresponded to diagnostically meaningful locations. This validation strategy contributed to increasing the model’s interpretability and may serve as a preliminary step toward incorporating explainable AI (XAI) techniques in future research.
Diagnostic two-dimensional AP images were employed to train a convolutional neural network based on the ResNet-18 architecture, a deep 18-layer network designed for recognizing complex patterns in medical images (Figure 4). This architecture has demonstrated efficacy in various AI applications in dentistry [27,28,29,30,31,32,33] due to its residual connections, which facilitate deep network training and mitigate accuracy degradation issues.
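The residual connection at the heart of this architecture can be illustrated with a minimal sketch (plain Python with toy values, not the network used in this study): each block computes f(x) + x, so when the learned transformation f contributes little, the block still passes its input through unchanged, which is what mitigates the accuracy degradation problem in deep stacks.

```python
# Minimal illustration of a residual (shortcut) connection of the kind
# used in ResNet-18. Toy values only; this is not the study's network.

def residual_block(x, f):
    """Return f(x) + x: the block learns a residual on top of an identity path."""
    return [fi + xi for fi, xi in zip(f(x), x)]

# A "learned" transformation that contributes almost nothing ...
weak_transform = lambda x: [0.01 * v for v in x]

x = [1.0, -2.0, 3.0]
y = residual_block(x, weak_transform)
# ... leaves the input nearly unchanged: the identity path guarantees the
# signal (and its gradient) can always flow through the block.
```

Because the identity path is always available, stacking many such blocks cannot make the representation worse than the input, which is the intuition behind training 18-layer (and deeper) networks reliably.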
The training and validation process of DL follows the same leave-one-out cross-validation (LOOCV) scheme used in the evaluation of the ML algorithms, including logistic regression (LR), random forest (RF), naive Bayes (NB), and k-nearest neighbors (KNN) [26,34]. In this approach, the DL treatment prognosis for each patient is determined by training the model with the images of the remaining patients.
The use of LOOCV is particularly valuable for assessing the performance of artificial intelligence models, as it systematically excludes one data point from the training set, using it as a validation or test instance. Subsequently, a predictive value is generated for the excluded data, and this process is repeated as many times as elements are in the training set. Finally, the predicted values for each excluded data point are compared with the observed values, allowing for a rigorous evaluation of model performance.
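The procedure described above can be sketched as follows (a generic pure-Python outline with a toy majority-class "model"; in the actual study, the CNN and ML models were retrained at each fold):

```python
# Leave-one-out cross-validation (LOOCV): each sample is predicted by a
# model trained on all remaining samples. A toy majority-class "model"
# stands in for the actual CNN/ML models, purely for illustration.

def train_majority(labels):
    """'Train' by memorizing the most frequent label."""
    return max(set(labels), key=labels.count)

def loocv_predictions(labels):
    preds = []
    for i in range(len(labels)):            # hold out sample i ...
        rest = labels[:i] + labels[i + 1:]  # ... train on the others ...
        preds.append(train_majority(rest))  # ... and predict the held-out one
    return preds

outcomes = [0, 0, 0, 0, 1, 1]               # 0 = healed, 1 = not healed
preds = loocv_predictions(outcomes)
accuracy = sum(p == y for p, y in zip(preds, outcomes)) / len(outcomes)
```

Every prediction is made for a sample the model never saw during its training, so comparing the predicted and observed values at the end yields an honest estimate of out-of-sample performance, which is why LOOCV is attractive for small datasets such as this one.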
In the ML study, once the variable “Prediction by DL” was incorporated into each of the models, the LOOCV scheme was applied again. For variable selection, the Backward Stepwise Selection (BSS) technique was used [34], a commonly employed method for identifying the most relevant features in predictive models.
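As a hedged illustration, backward stepwise selection can be outlined as a greedy elimination loop (the scoring function and feature values below are hypothetical stand-ins; in the study, candidate models were scored under LOOCV):

```python
# Backward stepwise selection: start from the full feature set and
# repeatedly drop the feature whose removal most improves the score,
# stopping when no single removal helps.

def backward_stepwise(features, score):
    """score(subset) -> higher is better. Greedy backward elimination."""
    selected = list(features)
    improved = True
    while improved and len(selected) > 1:
        improved = False
        best_drop, best_score = None, score(selected)
        for f in selected:
            subset = [g for g in selected if g != f]
            s = score(subset)
            if s > best_score:
                best_drop, best_score = f, s
        if best_drop is not None:
            selected.remove(best_drop)
            improved = True
    return selected

# Toy score: only "DL" and "Age" are informative here; every extra
# feature carries a small penalty (hypothetical values, illustration only).
useful = {"DL": 0.5, "Age": 0.2}
toy_score = lambda subset: sum(useful.get(f, 0.0) for f in subset) - 0.05 * len(subset)

kept = backward_stepwise(["DL", "Age", "Arch", "Smoking"], toy_score)
```

Under this toy score, the uninformative features are eliminated one at a time and the loop stops once removing either remaining feature would lower the score.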
This study aims to provide evidence regarding the predictive capability of DL in NSRCT prognosis, comparing its performance with conventional ML models and validating its applicability in clinical settings.
3.4. Statistical Analysis
For statistical analysis, we relied on the results obtained in our previous study, where a set of preoperative patient variables, both clinical and demographic, were used as explanatory covariates in various ML models to predict treatment outcomes [26]. In the present study, we included an additional explanatory covariate: the treatment outcome prediction obtained by applying convolutional networks to the diagnostic images of 108 patients, training the networks to forecast the prognosis.
After establishing the performance of DL and the best-performing ML model, a series of statistical comparisons will be conducted.
For this analysis, Fisher’s exact test will be employed, with a significance level of 0.05; any result with a p-value below 0.05 will be considered statistically significant. The comparisons to be evaluated are as follows:
Comparison between the best ML model from the previous study [26], random forest (RF), and the model combining DL with the best-performing ML model (RF vs. DL-LR).
Comparison between random forest and DL in general (RF vs. DL).
Comparison between the clinical professional’s prediction (DP) and the model combining DL with the best-performing ML model (DP vs. DL-LR).
Comparison between the clinical professional’s prediction (DP) and the DL model by itself (DP vs. DL).
Comparison between the combined model and the DL model (DL vs. DL-LR).
These comparisons will assess the relative efficacy of different predictive approaches, providing valuable insights into the applicability of AI models in predicting the success of NSRCT.
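Fisher’s exact test for a 2×2 table can be computed directly from the hypergeometric distribution. The sketch below is a minimal pure-Python version of the two-sided test (in practice a statistical package such as R’s `fisher.test` or `scipy.stats.fisher_exact` would be used), applied here to the sensitivity comparison between DL-LR and DP using the TP/FN counts reported in Table 2.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of every table with the same margins whose
    probability does not exceed that of the observed table.
    """
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)

    def prob(x):  # hypergeometric probability that cell (0, 0) equals x
        return comb(r1, x) * comb(r2, c1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Example: the DP vs. DL-LR sensitivity comparison, built from the TP/FN
# counts in Table 2 (DL-LR: 57/8; DP: 42/27).
p_sens = fisher_exact_2x2(57, 8, 42, 27)
```

The small tolerance in the comparison guards against floating-point ties, mirroring the convention used by common statistical packages.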
4. Results
Using the results obtained from the DL study, we applied the chi-square test to assess the association between the DL results (DL prediction) and the observed treatment outcome, obtaining a p-value of 0.000000127 and an effect size of 0.53 (Table 1), supporting the inclusion of this variable as part of the analysis.
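For reference, the chi-square statistic and the phi effect size for a 2×2 table can be computed directly. The sketch below uses the DL confusion-matrix counts from Table 2; applying Yates’ continuity correction (an assumption on our part, not stated in the original analysis) reproduces the reported p-value of roughly 1.27 × 10⁻⁷ and the effect size of 0.53.

```python
from math import sqrt, erfc

def chi2_2x2(a, b, c, d):
    """Chi-square test for the 2x2 table [[a, b], [c, d]] (1 degree of freedom).

    Returns (p-value with Yates' continuity correction, phi effect size).
    For 1 df, P(chi-square > x) = erfc(sqrt(x / 2)).
    """
    n = a + b + c + d
    r1, r2, c1, c2 = a + b, c + d, a + c, b + d
    expected = [r1 * c1 / n, r1 * c2 / n, r2 * c1 / n, r2 * c2 / n]
    diff = abs(a - expected[0])  # identical magnitude for all four cells
    chi2_corrected = sum(max(diff - 0.5, 0.0) ** 2 / e for e in expected)
    chi2_raw = sum(diff ** 2 / e for e in expected)
    return erfc(sqrt(chi2_corrected / 2)), sqrt(chi2_raw / n)

# DL prediction vs. observed outcome, counts from Table 2
# (TP = 59, FP = 18, FN = 6, TN = 25).
p_value, phi = chi2_2x2(59, 18, 6, 25)
```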
After replicating the study in [26] with this new variable, the best performance was obtained with an LR model, in which the most influential variables were “DL” (predictions generated by DL networks), “Age”, “Smoking”, “Level_Education”, “Periapical” (periapical condition), and “Prognosis”.
The performance of all methods used in this study is presented in Table 2.
Having established the performance of DL and LR, the above-mentioned comparisons were performed.
4.1. Comparison Between Random Forest (RF) and the Deep Learning–Logistic Regression Model (DL-LR)
Overall, DL-LR outperformed the best-performing machine learning model from the previous study [26], random forest, achieving sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of 0.87, 0.65, 0.79, 0.77, and 0.78, respectively. In comparison, random forest yielded values of 0.83, 0.70, 0.79, 0.74, and 0.77 for the same metrics. However, the differences were not statistically significant, suggesting similar performance between the two models.
4.2. Comparison Between Random Forest and Deep Learning in General (RF vs. DL)
The comparative analysis between the overall DL model and random forest showed no statistically significant differences in their performance.
4.3. Comparison Between the Clinical Professional’s Prediction (DP) and Deep Learning–Logistic Regression (DP vs. DL-LR)
In the comparison between DL-LR and DP, DL-LR demonstrated better performance in sensitivity, NPV, and accuracy. Using the true positive (TP) and false negative (FN) values from Table 2, a statistically significant difference was observed in the sensitivity of the logistic regression model with DL compared to the professional’s prediction (p-value = 0.00041). However, using the false positive (FP) and true negative (TN) values from the same table, no significant differences were found in specificity or PPV. Nevertheless, significant differences were identified in NPV (p-value = 0.01563) and accuracy (p-value = 0.00253).
4.4. Comparison Between the Clinical Professional’s Prediction (DP) and Deep Learning (DL) (DP vs. DL)
Similarly, when comparing the standalone DL model with DP, statistically significant differences were found in sensitivity (p-value = 0.00005), NPV (p-value = 0.0108), and accuracy (p-value = 0.00421), indicating superior performance of DL in these key metrics.
4.5. Comparison Between Deep Learning and the Combined Logistic Regression–Deep Learning Model (DL vs. DL-LR)
Finally, when comparing the individual DL model with the logistic regression model supplemented with categorical variables and the output of the DL model (DL-LR), no statistically significant differences were found in any of the evaluated metrics, suggesting equivalent performance between both models.
4.6. Interpretation of Statistical Comparisons
Based on statistical comparisons: DP vs. DL, RF vs. DL, and DL-LR vs. DL, the following conclusions can be drawn:
Models based on categorical variables have lower predictive value than image-based models.
Sensitivity, NPV, and accuracy metrics show minimal or non-significant differences between RF vs. DL and DL-LR vs. DL models.
High p-values (>0.05) in the model comparisons indicate no statistically significant difference between the approaches based on categorical data.
No clear improvements were found in any metric for RF vs. DL and DL-LR vs. DL comparisons.
In contrast, the DP vs. DL comparison showed significant differences:
DL vs. DP exhibited very low p-values (0.00005 for sensitivity, 0.00421 for accuracy, and 0.0108 for NPV), suggesting that image-based models (represented here by DL) have superior predictive power compared with dental professionals (DPs).
Similarly, DP vs. AI-based methods showed significant differences:
DL-LR vs. DP displayed very low p-values (0.00041 in sensitivity, 0.00253 in accuracy, 0.01563 in NPV), indicating that AI-based methods (DL-LR) have better predictive value than dental professionals (DPs).
5. Discussion
The results obtained in this study, supported by the statistical data collected, highlight the need to compare our AI-based NSRCT prediction for AP with the existing literature to validate our findings scientifically. However, this comparison is challenging due to the limited number of studies dedicated to predicting NSRCT outcomes for apical periodontitis using AI applied to 2D periapical radiographs in endodontics.
AI systems have demonstrated significant advancements in medical imaging, substantially contributing to diagnosis and treatment planning across various specialties. In medicine, convolutional neural networks (CNNs) have been employed for the automatic analysis of pathologies such as breast cancer [35], lung cancer [36,37], and Alzheimer’s disease [38]. In dentistry, AI applications have included dental caries detection [39,40], implant classification [40,41], periodontal bone loss quantification [40,42], and cyst evaluation using various types of radiographs, including periapical, panoramic, cephalometric, and CBCT images [43]. In endodontics, AI has been applied to detect apical periodontitis [7] and C-shaped root canals [44].
Although DL applications in medicine are well established [45], studies on disease and treatment outcome prediction in endodontics remain considerably limited [27,35,36]. In this context, Lee et al. (2023) [27] conducted a study predicting endodontic treatment and retreatment outcomes over a three-year period using 598 preoperative periapical radiographs of single-rooted premolars. Using a ResNet-18 CNN model that was trained, validated, and tested, their study pursued two main objectives: detecting various clinical features and predicting treatment outcomes. Their findings confirmed the feasibility of DCNN algorithms for feature detection and endodontic prognosis prediction.
Our study shares the objective of evaluating the predictive capability of endodontic treatments using DL with a ResNet-18 architecture; however, our methodology considers all tooth types, not just single-rooted premolars. The selection of single-rooted premolars in Lee et al.’s study [27] was based on the lower anatomical variability of these teeth compared to incisors or molars, which can present heterogeneous periapical conditions [46,47]. Additionally, all cases analyzed in our study exhibited AP, reducing treatment outcome variability. Unlike Lee et al.’s study [27], our research did not include retreatments, which can influence treatment success rates. Furthermore, our study’s evaluation period was extended to nine years, whereas Lee et al. [27] conducted a three-year follow-up. This distinction is relevant, as short-term evaluations may not fully capture the healing process [47].
A key methodological aspect in endodontic treatment evaluation is the use of the periapical index (PAI) score [48]. In Lee et al.’s study [27], only PAI scores 1, 4, and 5 were considered, omitting stage PAI 3, which reflects bone structural changes with minimal demineralization characteristic of apical periodontitis [49]. In our study, we opted to dichotomize the PAI evaluation to avoid ambiguities. Moreover, our study accounted for working length and obturation type, which are critical parameters influencing treatment success rates [50,51,52].
In a broader context, the literature has explored various AI applications in endodontics. A study employing the AGMB-Transformer model used a dataset of 245 radiographic images of root canal treatments to evaluate its performance in anatomical structure segmentation and outcome classification [53]. Although this study did not focus on treatment prediction, it demonstrated that combining segmentation and classification data significantly improves automated evaluations.
Systematic reviews by Aminoshariae et al. [11], Khanagar et al. [13,16], and Herbst et al. [54] have consolidated knowledge of AI in endodontics, addressing areas such as diagnosis, clinical decision-making, and therapeutic success prediction. However, predicting endodontic treatment outcomes remains an underexplored research gap. Parvathi et al. [10] analyzed AI applications in endodontics, including apical foramen localization, root fracture detection, and retreatment prediction. Campo et al. [55] introduced a case-based reasoning (CBR) system to minimize failed retreatments; however, research addressing NSRCT outcome prediction remains scarce [56].
The use of ResNet-18 architectures in dentistry has proven to be an effective methodology for various applications, including dental caries classification [29], apical periodontitis detection [28], and periodontal disease evaluation [32]. Other studies have employed DL for anatomical structure segmentation [57], predicting inferior alveolar nerve paresthesia after third molar extraction [31], and detecting external root resorptions [30].
Despite advancements in AI applications in endodontics, the current literature presents a shortage of studies focusing on predicting the outcomes of primary endodontic treatments for apical periodontitis. As evidenced by Lee et al. [27] and Li et al. [53], additional studies are imperative. Compared to medicine, where AI has demonstrated significant advancements, efforts in endodontics remain focused on detecting periapical lesions [15,19,28,56,58,59,60], root morphology analysis [15,19,56,58], and retreatment prediction [55,61], leaving considerable room for future research on NSRCT outcome prediction.
6. Conclusions
The findings of this study suggest that image-based artificial intelligence models (DL) exhibit superior predictive capability compared with those relying solely on categorical data. Significant improvements were observed for DL compared with the professional prognosis (DP), whereas differences among models utilizing categorical data were minimal or statistically insignificant. This finding supports the hypothesis that the information contained in images provides greater richness and discriminatory power in predicting endodontic treatment success compared with categorical data.
These results reinforce the importance of radiographic analysis in evaluating AP and its potential progression, highlighting the critical role of AI models in optimizing clinical diagnoses and therapeutic decision-making. Additionally, further exploration of hybrid models that integrate categorical and imaging data is recommended to enhance predictive accuracy in endodontics.
7. Limitations
Despite the promising findings, this study presents certain limitations that must be considered when interpreting the results. First, the model was developed and validated using a restricted dataset collected from a single institution and obtained using a single radiographic device. This lack of heterogeneity in the sample may affect the generalizability of the results to other populations and clinical settings. Additionally, the limited number of samples available for training could contribute to model overfitting, where the algorithm performs well on the training data but fails to generalize to unseen cases. Although LOOCV was employed to mitigate this risk and maximize data usage, this approach can still yield high variance in performance metrics when applied to small datasets. As a result, the robustness and reliability of the model in broader clinical applications may be limited, underscoring the need for further validation with larger, more diverse cohorts.
Furthermore, the scarcity of previous studies addressing the prediction of the success of NSRCTs for apical periodontitis using artificial intelligence poses a challenge for comparing and validating our findings against the existing literature. The limited availability of specific bibliographic material hinders the direct comparison of our results with other predictive models in endodontics, highlighting the need for further research in this area.
Therefore, we recommend conducting multicenter studies with larger sample sizes and diverse radiographic equipment, as well as integrating complementary clinical data to enhance the applicability of these models in dental practice.
Conceptualization and methodology, C.B. and A.N.-M.; formal analysis, investigation, data curation, writing—original draft preparation, C.B., A.N.-M., S.A. and Á.A.L.-G.; writing—review and editing, supervision, C.B., A.N.-M., S.A., Y.G.-C., Á.A.L.-G. and P.J.T. All authors have read and agreed to the published version of the manuscript.
This study was conducted in accordance with the Declaration of Helsinki and approved by the Balearic Islands Research Ethics Committee (IB4015/19IP, 2 December 2019).
Patient consent was waived because participating patients could not be identified in the datasets.
Full datasets and R scripts are available upon reasonable request to the corresponding author.
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1 (A) AP in tooth 36; (B) three-year follow-up: healed.
Figure 2 (A) AP in tooth 46; (B) one-year follow-up: healed; (C) AP in tooth 45; (D) four-year follow-up: healed; (E) AP in tooth 41; (F) three-year follow-up: healed; (G) AP in tooth 31; (H) two-year follow-up: not healed.
Figure 3 Demarcation of apical periodontitis. (A) demarcation of the AP in the vision of the tooth as a whole. (B) demarcation of the AP in the apical area of the distal root of the same tooth.
Figure 4 ResNet-18 architecture employed for binary classification of non-surgical root canal treatment (NSRCT) outcomes. The model receives a 224 × 224 grayscale periapical radiograph as input. It processes the image through a series of convolutional layers and residual blocks with increasing filter sizes (64, 128, 256, and 512). A Global Average Pooling layer precedes the Fully Connected Layer, which outputs a binary classification (0: Healed, 1: Not Healed). This architecture enables the model to capture hierarchical features relevant to apical periodontitis prognosis.
Variables associated with the results of the previous ML study, incorporating the DL prediction.
Variable | Levels | p-Value | Effect Size |
---|---|---|---|
Age | 15–24; 25–34; 35–44; 45–54; 55–64; ≥65 | 0.0056 | 0.372 |
Highest level of education | Primary; Secondary; Post-secondary | 0.0016 | 0.33 |
Arch | Mandible; Maxilla | 0.02 | 0.21 |
Smoking | No; Every day; Some days; Former | 0.046 | 0.26 |
Patient co-operation | No; Yes | 0.028 | 0.21 |
Pain relieved by | None; Cold; Medication | 0.003 | 0.31 |
Duration of the pain | Sec; Min; Continuous | 0.027 | 0.245 |
Periapical | Asymptomatic AP; Symptomatic AP; Chronic Apical Abscess; Acute Apical Abscess | 0.01 | 0.31 |
Estimated prognosis by clinician | Hopeless; Questionable; Fair; Good; Excellent | 0.034 | 0.29 |
Prediction by DL | Success; Failure | 0.000000127 | 0.53 |
Performance of AI algorithms and the dentist prognosis (DP).
Metric | DP | RF | Logistic Regression (DL-LR) | DL |
---|---|---|---|---|
TP | 42 | 57 | 57 | 59 |
FN | 27 | 12 | 8 | 6 |
FP | 21 | 15 | 15 | 18 |
TN | 29 | 35 | 28 | 25 |
Sensitivity | 0.61 (0.48, 0.72) | 0.83 (0.72, 0.91) | 0.87 (0.77, 0.94) | 0.90 (0.80, 0.90) |
Specificity | 0.58 (0.43, 0.72) | 0.70 (0.55, 0.82) | 0.65 (0.49, 0.78) | 0.58 (0.42, 0.72) |
PPV | 0.67 (0.54, 0.78) | 0.79 (0.68, 0.88) | 0.79 (0.67, 0.87) | 0.76 (0.65, 0.85) |
NPV | 0.52 (0.38, 0.65) | 0.74 (0.60, 0.86) | 0.77 (0.60, 0.89) | 0.80 (0.62, 0.92) |
Accuracy | 0.60 (0.50, 0.69) | 0.77 (0.69, 0.84) | 0.78 (0.69, 0.86) | 0.77 (0.68, 0.85) |
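The point estimates in Table 2 follow directly from the confusion-matrix counts; a minimal sketch (confidence intervals omitted):

```python
# Point estimates of the diagnostic metrics derived from a confusion matrix.

def diagnostic_metrics(tp, fn, fp, tn):
    total = tp + fn + fp + tn
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / total,
    }

# Dentist prognosis (DP) column of Table 2:
dp = diagnostic_metrics(tp=42, fn=27, fp=21, tn=29)
# Rounded to two decimals these match the DP column:
# sensitivity 0.61, specificity 0.58, PPV 0.67, NPV 0.52, accuracy 0.60.
```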
1. Samuel, A.L. Some Studies in Machine Learning Using the Game of Checkers. IBM J. Res. Dev.; 1959; 44, pp. 206-226. [DOI: https://dx.doi.org/10.1147/rd.441.0206]
2. Hwang, J.-J.; Jung, Y.-H.; Cho, B.-H.; Heo, M.-S. An overview of deep learning in the field of dentistry. Imaging Sci. Dent.; 2019; 49, pp. 1-7. [DOI: https://dx.doi.org/10.5624/isd.2019.49.1.1]
3. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal.; 2017; 42, pp. 60-88. [DOI: https://dx.doi.org/10.1016/j.media.2017.07.005] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28778026]
4. Angermueller, C.; Pärnamaa, T.; Parts, L.; Stegle, O. Deep learning for computational biology. Mol. Syst. Biol.; 2016; 12, 878. [DOI: https://dx.doi.org/10.15252/msb.20156651]
5. Anwar, S.M.; Majid, M.; Qayyum, A.; Awais, M.; Alnowami, M.; Khan, M.K. Medical Image Analysis using Convolutional Neural Networks: A Review. J. Med. Syst.; 2018; 42, 226. [DOI: https://dx.doi.org/10.1007/s10916-018-1088-1]
6. Lin, P.L.; Huang, P.Y.; Huang, P.W. Automatic methods for alveolar bone loss degree measurement in periodontitis periapical radiographs. Comput. Methods Programs Biomed.; 2017; 148, pp. 1-11. [DOI: https://dx.doi.org/10.1016/j.cmpb.2017.06.012] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28774432]
7. Ekert, T.; Krois, J.; Meinhold, L.; Elhennawy, K.; Emara, R.; Golla, T.; Schwendicke, F. Deep Learning for the Radiographic Detection of Apical Lesions. J. Endod.; 2019; 45, pp. 917-922.e5. [DOI: https://dx.doi.org/10.1016/j.joen.2019.03.016]
8. Wang, X.; Cai, B.; Cao, Y.; Zhou, C.; Yang, L.; Liu, R.; Long, X.; Wang, W.; Gao, D.; Bao, B. Objective method for evaluating orthodontic treatment from the lay perspective: An eye-tracking study. Am. J. Orthod. Dentofac. Orthop.; 2016; 150, pp. 601-610. [DOI: https://dx.doi.org/10.1016/j.ajodo.2016.03.028]
9. Alarifi, A.; AlZubi, A.A. Memetic Search Optimization Along with Genetic Scale Recurrent Neural Network for Predictive Rate of Implant Treatment. J. Med. Syst.; 2018; 42, 202. [DOI: https://dx.doi.org/10.1007/s10916-018-1051-1]
10. Gehlot, P.M.; Sudeep, P.; Murali, B.; Mariswamy, A.B. Artificial intelligence in endodontics: A narrative review. J. Int. Oral Health; 2023; 15, pp. 134-141. [DOI: https://dx.doi.org/10.4103/jioh.jioh_257_22]
11. Aminoshariae, A.; Kulild, J.; Nagendrababu, V. Artificial Intelligence in Endodontics: Current Applications and Future Directions. J. Endod.; 2021; 47, pp. 1352-1357. [DOI: https://dx.doi.org/10.1016/j.joen.2021.06.003]
12. Ourang, S.A.; Sohrabniya, F.; Mohammad-Rahimi, H.; Dianat, O.; Aminoshariae, A.; Nagendrababu, V.; Dummer, P.M.H.; Duncan, H.F.; Nosrat, A. Artificial intelligence in endodontics: Fundamental principles, workflow, and tasks. Int. Endod. J.; 2024; 57, pp. 1546-1565. [DOI: https://dx.doi.org/10.1111/iej.14127] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/39056554]
13. Khanagar, S.B.; Alfadley, A.; Alfouzan, K.; Awawdeh, M.; Alaqla, A.; Jamleh, A. Developments and Performance of Artificial Intelligence Models Designed for Application in Endodontics: A Systematic Review. Diagnostics; 2023; 13, 414. [DOI: https://dx.doi.org/10.3390/diagnostics13030414] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36766519]
14. Asiri, A.F.; Altuwalah, A.S. The role of neural artificial intelligence for diagnosis and treatment planning in endodontics: A qualitative review. Saudi Dent. J.; 2022; 34, pp. 270-281. [DOI: https://dx.doi.org/10.1016/j.sdentj.2022.04.004] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35692236]
15. Karobari, M.I.; Adil, A.H.; Basheer, S.N.; Murugesan, S.; Savadamoorthi, K.S.; Mustafa, M.; Abdulwahed, A.; Almokhatieb, A.A. Evaluation of the Diagnostic and Prognostic Accuracy of Artificial Intelligence in Endodontic Dentistry: A Comprehensive Review of Literature. Comput. Math. Methods Med.; 2023; 2023, 7049360. [DOI: https://dx.doi.org/10.1155/2023/7049360]
16. Khanagar, S.B.; Al-ehaideb, A.; Maganur, P.C.; Vishwanathaiah, S.; Patil, S.; Baeshen, H.A.; Sarode, S.C.; Bhandi, S. Developments, application, and performance of artificial intelligence in dentistry—A systematic review. J. Dent. Sci.; 2021; 16, pp. 508-522. [DOI: https://dx.doi.org/10.1016/j.jds.2020.06.019]
17. Machoy, M.E.; Szyszka-Sommerfeld, L.; Vegh, A.; Gedrange, T.; Woźniak, K. The ways of using machine learning in dentistry. Adv. Clin. Exp. Med.; 2020; 29, pp. 375-384. [DOI: https://dx.doi.org/10.17219/acem/115083]
18. Zhang, Y.; Weng, Y.; Lund, J. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery. Diagnostics; 2022; 12, 237. [DOI: https://dx.doi.org/10.3390/diagnostics12020237]
19. Pethani, F. Promises and perils of artificial intelligence in dentistry. Aust. Dent. J.; 2021; 66, pp. 124-135. [DOI: https://dx.doi.org/10.1111/adj.12812]
20. Schwendicke, F.; Samek, W.; Krois, J. Artificial Intelligence in Dentistry: Chances and Challenges. J. Dent. Res.; 2020; 99, pp. 769-774. [DOI: https://dx.doi.org/10.1177/0022034520915714]
21. Alasqah, M.; Alotaibi, F.D.; Gufran, K. The Radiographic Assessment of Furcation Area in Maxillary and Mandibular First Molars while Considering the New Classification of Periodontal Disease. Healthcare; 2022; 10, 1464. [DOI: https://dx.doi.org/10.3390/healthcare10081464] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36011121]
22. Basrani, B. Endodontic Radiology; 2nd ed. Wiley-Blackwell: Hoboken, NJ, USA, 2012; pp. 36-38.
23. Azarpazhooh, A.; Khazaei, S.; Jafarzadeh, H.; Malkhassian, G.; Sgro, A.; Elbarbary, M.; Cardoso, E.; Oren, A.; Kishen, A.; Shah, P.S. A Scoping Review of Four Decades of Outcomes in Nonsurgical Root Canal Treatment, Nonsurgical Retreatment, and Apexification Studies: Part 3—A Proposed Framework for Standardized Data Collection and Reporting of Endodontic Outcome Studies. J. Endod.; 2022; 48, pp. 40-54. [DOI: https://dx.doi.org/10.1016/j.joen.2021.09.017]
24. Azarpazhooh, A.; Sgro, A.; Cardoso, E.; Elbarbary, M.; Lighvan, N.L.; Badewy, R.; Malkhassian, G.; Jafarzadeh, H.; Bakhtiar, H.; Khazaei, S.
25. Azarpazhooh, A.; Cardoso, E.; Sgro, A.; Elbarbary, M.; Lighvan, N.L.; Badewy, R.; Malkhassian, G.; Jafarzadeh, H.; Bakhtiar, H.; Khazaei, S.
26. Bennasar, C.; García, I.; Gonzalez-Cid, Y.; Pérez, F.; Jiménez, J. Second Opinion for Non-Surgical Root Canal Treatment Prognosis Using Machine Learning Models. Diagnostics; 2023; 13, 2742. [DOI: https://dx.doi.org/10.3390/diagnostics13172742]
27. Lee, J.; Seo, H.; Choi, Y.J.; Lee, C.; Kim, S.; Lee, Y.S.; Lee, S.; Kim, E. An Endodontic Forecasting Model Based on the Analysis of Preoperative Dental Radiographs: A Pilot Study on an Endodontic Predictive Deep Neural Network. J. Endod.; 2023; 49, pp. 710-719. [DOI: https://dx.doi.org/10.1016/j.joen.2023.03.015]
28. Li, S.; Liu, J.; Zhou, Z.; Zhou, Z.; Wu, X.; Li, Y.; Wang, S.; Liao, W.; Ying, S.; Zhao, Z. Artificial intelligence for caries and periapical periodontitis detection. J. Dent.; 2022; 122, 104107. [DOI: https://dx.doi.org/10.1016/j.jdent.2022.104107] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35341892]
29. Panyarak, W.; Wantanajittikul, K.; Suttapak, W.; Charuakkra, A.; Prapayasatok, S. Feasibility of deep learning for dental caries classification in bitewing radiographs based on the ICCMS™ radiographic scoring system. Oral Surg. Oral Med. Oral Pathol. Oral Radiol.; 2023; 135, pp. 272-281. [DOI: https://dx.doi.org/10.1016/j.oooo.2022.06.012]
30. Mohammad-Rahimi, H.; Dianat, O.; Abbasi, R.; Zahedrozegar, S.; Ashkan, A.; Motamedian, S.R.; Rohban, M.H.; Nosrat, A. Artificial Intelligence for Detection of External Cervical Resorption Using Label-Efficient Self-Supervised Learning Method. J. Endod.; 2024; 50, pp. 144-153.e2. [DOI: https://dx.doi.org/10.1016/j.joen.2023.11.004]
31. Kim, B.S.; Yeom, H.G.; Lee, J.H.; Shin, W.S.; Yun, J.P.; Jeong, S.H.; Kang, J.H.; Kim, S.W.; Kim, B.C. Deep Learning-Based Prediction of Paresthesia after Third Molar Extraction: A Preliminary Study. Diagnostics; 2021; 11, 1572. [DOI: https://dx.doi.org/10.3390/diagnostics11091572]
32. Vilkomir, K.; Phen, C.; Baldwin, F.; Cole, J.; Herndon, N.; Zhang, W. Classification of mandibular molar furcation involvement in periapical radiographs by deep learning. Imaging Sci. Dent.; 2024; 54, pp. 257-263. [DOI: https://dx.doi.org/10.5624/isd.20240020]
33. Kim, Y.-H.; Park, J.-B.; Chang, M.-S.; Ryu, J.-J.; Lim, W.H.; Jung, S.-K. Influence of the depth of the convolutional neural networks on an artificial intelligence model for diagnosis of orthognathic surgery. J. Pers. Med.; 2021; 11, 356. [DOI: https://dx.doi.org/10.3390/jpm11050356] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33946874]
34. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; 2nd ed. Springer: New York, NY, USA, 2021; pp. 229-232.
35. El Adoui, M.; Drisis, S.; Benjelloun, M. Multi-input deep learning architecture for predicting breast tumor response to chemotherapy using quantitative MR images. Int. J. Comput. Assist. Radiol. Surg.; 2020; 15, pp. 1491-1500. [DOI: https://dx.doi.org/10.1007/s11548-020-02209-9] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32556920]
36. Mukherjee, P.; Zhou, M.; Lee, E.; Schicht, A.; Balagurunathan, Y.; Napel, S.; Gillies, R.; Wong, S.; Thieme, A.; Leung, A.
37. Xie, H.; Zhang, T.; Song, W.; Wang, S.; Zhu, H.; Zhang, R.; Zhang, W.; Yu, Y.; Zhao, Y. Super-resolution of Pneumocystis carinii pneumonia CT via self-attention GAN. Comput. Methods Programs Biomed.; 2021; 212, 106467. [DOI: https://dx.doi.org/10.1016/j.cmpb.2021.106467]
38. Ricucci, D.; Siqueira, J.F., Jr. Biofilms and Apical Periodontitis: Study of Prevalence and Association with Clinical and Histopathologic Findings. J. Endod.; 2010; 36, pp. 1277-1288. [DOI: https://dx.doi.org/10.1016/j.joen.2010.04.007]
39. Lee, J.-H.; Kim, D.-H.; Jeong, S.-N.; Choi, S.-H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent.; 2018; 77, pp. 106-111. [DOI: https://dx.doi.org/10.1016/j.jdent.2018.07.015]
40. Shan, T.; Tay, F.R.; Gu, L. Application of Artificial Intelligence in Dentistry. J. Dent. Res.; 2021; 100, pp. 232-244. [DOI: https://dx.doi.org/10.1177/0022034520969115]
41. Kim, J.-E.; Nam, N.-E.; Shim, J.-S.; Jung, Y.-H.; Cho, B.-H.; Hwang, J.J. Transfer Learning via Deep Neural Networks for Implant Fixture System Classification Using Periapical Radiographs. J. Clin. Med.; 2020; 9, 1117. [DOI: https://dx.doi.org/10.3390/jcm9041117]
42. Krois, J.; Ekert, T.; Meinhold, L.; Golla, T.; Kharbot, B.; Wittemeier, A.; Dörfer, C.; Schwendicke, F. Deep Learning for the Radiographic Detection of Periodontal Bone Loss. Sci. Rep.; 2019; 9, 8495. [DOI: https://dx.doi.org/10.1038/s41598-019-44839-3]
43. Hung, K.; Montalvao, C.; Tanaka, R.; Kawai, T.; Bornstein, M.M. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: A systematic review. Dentomaxillofacial Radiol.; 2019; 49, 20190107. [DOI: https://dx.doi.org/10.1259/dmfr.20190107] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31386555]
44. Yang, S.; Lee, H.; Jang, B.; Kim, K.-D.; Kim, J.; Kim, H.; Park, W. Development and Validation of a Visually Explainable Deep Learning Model for Classification of C-shaped Canals of the Mandibular Second Molars in Periapical and Panoramic Dental Radiographs. J. Endod.; 2022; 48, pp. 914-921. [DOI: https://dx.doi.org/10.1016/j.joen.2022.04.007] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35427635]
45. Umer, F.; Habib, S. Critical Analysis of Artificial Intelligence in Endodontics: A Scoping Review. J. Endod.; 2021; 48, pp. 152-160. [DOI: https://dx.doi.org/10.1016/j.joen.2021.11.007] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34838523]
46. Chugal, N.M.; Clive, J.M.; Spångberg, L.S. A prognostic model for assessment of the outcome of endodontic treatment: Effect of biologic and diagnostic variables. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endodontol.; 2001; 91, pp. 342-352. [DOI: https://dx.doi.org/10.1067/moe.2001.113106]
47. Friedman, S. Prognosis of initial endodontic therapy. Endod. Top.; 2002; 2, pp. 59-88. [DOI: https://dx.doi.org/10.1034/j.1601-1546.2002.20105.x]
48. Moidu, N.P.; Sharma, S.; Chawla, A.; Kumar, V.; Logani, A. Deep learning for categorization of endodontic lesion based on radiographic periapical index scoring system. Clin. Oral Investig.; 2021; 26, pp. 651-658. [DOI: https://dx.doi.org/10.1007/s00784-021-04043-y]
49. Jiménez Pinzón, A.; Segura Egea, J.J. Valoración clínica y radiológica del estado periapical: Registros e índices periapicales. Endodoncia; 2003; 21, pp. 220-228.
50. Chugal, N.M.; Clive, J.M.; Spångberg, L.S. Endodontic infection: Some biologic and treatment factors associated with outcome. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endodontol.; 2003; 96, pp. 81-90. [DOI: https://dx.doi.org/10.1016/S1079-2104(02)91703-8]
51. Friedman, S.; Abitbol, S.; Lawrence, H. Treatment Outcome in Endodontics: The Toronto Study. Phase 1: Initial Treatment. J. Endod.; 2003; 29, pp. 787-793. [DOI: https://dx.doi.org/10.1097/00004770-200312000-00001]
52. Farzaneh, M.; Abitbol, S.; Lawrence, H.; Friedman, S. Treatment Outcome in Endodontics—The Toronto Study. Phase II: Initial Treatment. J. Endod.; 2004; 30, pp. 302-309. [DOI: https://dx.doi.org/10.1097/00004770-200405000-00002]
53. Li, Y.; Zeng, G.; Zhang, Y.; Wang, J.; Jin, Q.; Sun, L.; Zhang, Q.; Lian, Q.; Qian, G.; Xia, N.
54. Herbst, C.S.; Schwendicke, F.; Krois, J.; Herbst, S.R. Association between patient-, tooth- and treatment-level factors and root canal treatment failure: A retrospective longitudinal and machine learning study. J. Dent.; 2022; 117, 103937. [DOI: https://dx.doi.org/10.1016/j.jdent.2021.103937]
55. Campo, L.; Aliaga, I.J.; De Paz, J.F.; García, A.E.; Bajo, J.; Villarubia, G.; Corchado, J.M. Retreatment Predictions in Odontology by means of CBR Systems. Comput. Intell. Neurosci.; 2016; 2016, 7485250. [DOI: https://dx.doi.org/10.1155/2016/7485250] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26884749]
56. Ramezanzade, S.; Laurentiu, T.; Bakhshandah, A.; Ibragimov, B.; Kvist, T.; EndoReCo; Bjorndal, L. The efficiency of artificial intelligence methods for finding radiographic features in different endodontic treatments—A systematic review. Acta Odontol. Scand.; 2023; 81, pp. 422-435. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36548872]
57. Sunnetci, K.M.; Kaba, E.; Beyazal Çeliker, F.; Alkan, A. Comparative parotid gland segmentation by using ResNet-18 and MobileNetV2 based DeepLab v3+ architectures from magnetic resonance images. Concurr. Comput.; 2023; 35, e7405. [DOI: https://dx.doi.org/10.1002/cpe.7405]
58. Boreak, N. Effectiveness of Artificial Intelligence Applications Designed for Endodontic Diagnosis, Decision-making, and Prediction of Prognosis: A Systematic Review. J. Contemp. Dent. Pract.; 2020; 21, pp. 926-934. [DOI: https://dx.doi.org/10.5005/jp-journals-10024-2894]
59. Pauwels, R.; Brasil, D.M.; Yamasaki, M.C.; Jacobs, R.; Bosmans, H.; Freitas, D.Q.; Haiter-Neto, F. Artificial intelligence for detection of periapical lesions on intraoral radiographs: Comparison between convolutional neural networks and human observers. Oral Surg. Oral Med. Oral Pathol. Oral Radiol.; 2021; 131, pp. 610-616. [DOI: https://dx.doi.org/10.1016/j.oooo.2021.01.018]
60. Sadr, S.; Mohammad-Rahimi, H.; Motamedian, S.R.; Zahedrozegar, S.; Motie, P.; Vinayahalingam, S.; Dianat, O.; Nosrat, A. Deep Learning for Detection of Periapical Radiolucent Lesions: A Systematic Review and Meta-analysis of Diagnostic Test Accuracy. J. Endod.; 2023; 49, pp. 248-261.e3. Available online: https://linkinghub.elsevier.com/retrieve/pii/S0099239922008457 (accessed on 17 January 2023). [DOI: https://dx.doi.org/10.1016/j.joen.2022.12.007]
61. Sherwood, A.A.; Setzer, F.C.; K, S.D.; Shamili, J.V.; John, C.; Schwendicke, F. A Deep Learning Approach to Segment and Classify C-Shaped Canal Morphologies in Mandibular Second Molars Using Cone-beam Computed Tomography. J. Endod.; 2021; 47, pp. 1907-1916. [DOI: https://dx.doi.org/10.1016/j.joen.2021.09.009]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Details
1 Academia Dental de Mallorca (ADEMA), School of Dentistry, University of Balearic Islands, 07122 Palma de Mallorca, Spain; [email protected]
2 Soft Computing, Image Processing and Aggregation (SCOPIA) Research Group, University of the Balearic Islands (UIB), 07122 Palma de Mallorca, Spain; [email protected]
3 Department of Mathematical Sciences and Informatics, University of the Balearic Islands, 07120 Palma de Mallorca, Spain; [email protected]
4 ADEMA-Health Group, University Institute of Health Sciences of Balearic Islands (IUNICS), 02008 Palma de Mallorca, Spain; [email protected]
5 Faculty of Medicine, University of Castilla-La Mancha, 02001 Albacete, Spain; [email protected]