Multi-modal deep learning for automated assembly of periapical radiographs.
CNNs, computer vision
Deep learning
Multi-modal learning
Periapical radiographs
Time series analysis
Journal
Journal of dentistry
ISSN: 1879-176X
Abbreviated title: J Dent
Country: England
NLM ID: 0354422
Publication information
Publication date: Aug 2023
History:
received: 11 Jan 2023
revised: 23 Mar 2023
accepted: 13 Jun 2023
medline: 17 Jul 2023
pubmed: 23 Jun 2023
entrez: 22 Jun 2023
Status: ppublish
Abstract

Periapical radiographs are often taken in series to display all teeth present in the oral cavity. Our aim was to automatically assemble such a series of periapical radiographs into an anatomically correct status using a multi-modal deep learning model.

A total of 4,707 periapical images from 387 patients (on average, 12 images per patient) were used. Radiographs were labeled according to their field of view, and the dataset was split into training, validation, and test sets, stratified by patient. In addition to the radiograph itself, the timestamp of image generation was extracted and abstracted as follows: a matrix containing the normalized timestamps of all images of a patient was constructed, representing the order in which the images were taken and providing temporal context to the deep learning model. Using the image data together with the time-sequence data, a multi-modal deep learning model consisting of two residual convolutional neural networks (ResNet-152 for image data, ResNet-50 for time data) was trained. Additionally, two uni-modal models were trained on image data and time data, respectively. A custom scoring technique was used to measure model performance.

Multi-modal deep learning outperformed both uni-modal image-based learning (p<0.001) and uni-modal time-based learning (p<0.05). The multi-modal model predicted tooth labels with an F1-score, sensitivity, and precision of 0.79 each, and an accuracy of 0.99. Of 77 patient datasets, 37 were assembled fully correctly by multi-modal learning; in the remaining ones, usually only one image was incorrectly labeled.

Multi-modal modeling allowed automated assembly of periapical radiographs and outperformed both uni-modal models. Dental machine learning models can benefit from additional data modalities. Like humans, deep learning models may profit from multiple data sources for decision-making. We demonstrate how multi-modal learning can assist in assembling periapical radiographs into an anatomically correct status. Multi-modal learning should be considered for more complex tasks, as a wealth of clinical data is usually available and could be leveraged.
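The abstract names the two backbones and the normalized-timestamp matrix but gives no implementation details. The PyTorch sketch below illustrates one plausible reading of the described architecture: per-patient timestamps are min-max normalized into a fixed-size single-channel matrix, and a ResNet-152 image branch is fused with a ResNet-50 time branch by feature concatenation before a linear classification head. The matrix layout, the padding size, the concatenation fusion, the head, and the example label count are all assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of the two-branch multi-modal model described in the abstract.
# Assumptions (not specified by the paper): timestamp-matrix layout, fusion by
# concatenation, linear classification head, and the number of labels.
import torch
import torch.nn as nn
from torchvision import models


def build_time_matrix(timestamps: list[float], max_images: int = 18) -> torch.Tensor:
    """Min-max normalize one patient's acquisition timestamps into a fixed-size
    single-channel matrix encoding the order in which images were taken
    (the tiled layout and max_images=18 are assumptions)."""
    ts = torch.tensor(sorted(timestamps), dtype=torch.float32)
    ts = (ts - ts.min()) / (ts.max() - ts.min() + 1e-9)   # normalize to [0, 1]
    padded = torch.zeros(max_images)
    padded[: len(ts)] = ts
    # Tile into a square "image" so a 2-D CNN can consume it.
    return padded.repeat(max_images, 1).unsqueeze(0)       # shape (1, 18, 18)


class MultiModalAssembler(nn.Module):
    """ResNet-152 image branch + ResNet-50 time branch, fused by concatenation."""

    def __init__(self, num_labels: int):
        super().__init__()
        self.image_net = models.resnet152(weights=None)
        self.image_net.fc = nn.Identity()                  # expose 2048-d features
        self.time_net = models.resnet50(weights=None)
        # Accept the single-channel timestamp matrix instead of RGB input.
        self.time_net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)
        self.time_net.fc = nn.Identity()                   # expose 2048-d features
        self.head = nn.Linear(2048 + 2048, num_labels)     # field-of-view classes

    def forward(self, image: torch.Tensor, time_matrix: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_net(image), self.time_net(time_matrix)], dim=1)
        return self.head(fused)


# Usage sketch: num_labels=14 is a hypothetical field-of-view label count.
model = MultiModalAssembler(num_labels=14)
times = torch.stack([build_time_matrix([0.0, 60.0, 130.0]),
                     build_time_matrix([5.0, 90.0])])
logits = model(torch.randn(2, 3, 224, 224), times)
print(logits.shape)  # torch.Size([2, 14])
```

Late fusion of branch-level features, as sketched here, keeps each modality's backbone independent; the paper does not state whether fusion happens at the feature or decision level, so this is only one of several consistent designs.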
Identifiers
pubmed: 37348642
pii: S0300-5712(23)00174-4
doi: 10.1016/j.jdent.2023.104588
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Pagination
104588
Grants
Agency: World Health Organization
ID: 001
Country: International
Copyright information
Copyright © 2023 Elsevier Ltd. All rights reserved.
Conflict of interest statement
Declaration of Competing Interest: The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Falk Schwendicke and Joachim Krois are cofounders of dentalXrai Ltd, a company focusing on AI-based radiograph analysis. The work here was completely independent of dentalXrai.