Deep Learning for the Detection of Periapical Radiolucent Lesions
ABSTRACT
This article is open access.
INTRODUCTION
The aim of this systematic review and meta-analysis was to investigate the overall accuracy of deep learning models in detecting periapical radiolucent lesions in dental radiographs, when compared to expert clinicians.
METHODS
The electronic databases Medline (via PubMed), Embase (via Ovid), Scopus, Google Scholar, and arXiv were searched. The quality of eligible studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Quantitative analyses were conducted using hierarchical logistic regression for meta-analyses of diagnostic accuracy. Subgroup analyses were conducted for different image modalities (periapical radiographs, panoramic radiographs, cone beam computed tomographic images) and for different deep learning tasks (classification, segmentation, object detection). Certainty of evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system.
RESULTS
A total of 932 studies were screened. Eighteen studies were included in the systematic review, of which six were selected for quantitative analyses. Six studies had a low risk of bias; the remaining 12 had some risk of bias. Pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio of the included studies (all image modalities; all tasks) were 0.925 (95% confidence interval [CI], 0.862-0.960), 0.852 (95% CI, 0.810-0.885), 6.261 (95% CI, 4.717-8.311), 0.087 (95% CI, 0.045-0.168), and 71.692 (95% CI, 29.957-171.565), respectively. No publication bias was detected (Egger's test, p = .82). GRADE showed a 'high' certainty of evidence for the studies included in the meta-analyses.
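To see how these summary measures relate to one another, the likelihood ratios and diagnostic odds ratio can be derived from sensitivity and specificity. The sketch below is illustrative only: the paper's pooled LR+, LR−, and DOR come from a hierarchical logistic regression model, so values computed directly from the pooled sensitivity and specificity will only approximate the reported figures.

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Return (LR+, LR-, DOR) for a diagnostic test.

    LR+ = how much a positive result raises the odds of disease;
    LR- = how much a negative result lowers those odds;
    DOR = ratio of the two, a single overall accuracy measure.
    """
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    dor = lr_pos / lr_neg
    return lr_pos, lr_neg, dor

# Using the pooled estimates from the meta-analysis (all modalities, all tasks):
lr_pos, lr_neg, dor = likelihood_ratios(0.925, 0.852)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.3f}, DOR = {dor:.1f}")
# → LR+ = 6.25, LR- = 0.088, DOR = 71.0
```

These directly computed values (6.25, 0.088, 71.0) are close to the model-pooled estimates reported above (6.261, 0.087, 71.692), which is expected since the hierarchical model pools each measure across studies rather than deriving them from the pooled sensitivity and specificity.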
CONCLUSION
Compared to expert clinicians, deep learning showed highly accurate results in detecting periapical radiolucent lesions in dental radiographs. Most studies had risk of bias. There was a lack of prospective studies.
Additional Info
Disclosure statements are available on the authors' profiles.
Deep Learning for Detection of Periapical Radiolucent Lesions: A Systematic Review and Meta-analysis of Diagnostic Test Accuracy
J Endod 2022 Dec 20;[EPub Ahead of Print], S Sadr, H Mohammad-Rahimi, SR Motamedian, S Zahedrozegar, P Motie, S Vinayahalingam, O Dianat, A Nosrat
From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.
More recently, deep learning–based neural network models have been explored in the medical field to assist clinicians in various clinical decision-making processes. These models are based on artificial intelligence algorithms and have shown high sensitivity and specificity in the detection and differentiation of patterns. A recent systematic review and meta-analysis by Sadr S et al in 2022 aimed to systematically appraise studies that used deep learning for detecting periapical radiolucent lesions (PARLs) in dental radiographs and to determine the overall accuracy of deep learning in detecting PARLs compared with that of expert clinicians. The authors searched several electronic databases, screening a total of 932 studies, and assessed study quality using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. The systematic review comprised 18 studies, of which 6 were selected for quantitative analyses. A hierarchical logistic regression analysis was performed for the meta-analyses of diagnostic accuracy. The image-modality subgroups consisted of periapical radiographs, panoramic radiographs, and cone-beam CT images, and the deep learning tasks examined included classification, segmentation, and object detection. The analysis considered the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio of the included studies. The results showed relatively high sensitivity and specificity for the detection of PARLs using deep learning across all image modalities.
In conclusion, despite the potential risk of bias and remaining concerns about the extent of clinical applicability, these deep learning models are promising and could substantially assist clinicians in clinical decision-making, provided additional studies verify their sensitivity and specificity in clinical practice.