Automatic soft-tissue analysis on orthodontic frontal and lateral facial photographs based on deep learning.
Keywords
artificial intelligence
deep learning
facial landmark detection
soft-tissue analysis
Journal
Orthodontics & Craniofacial Research
ISSN: 1601-6343
Abbreviated title: Orthod Craniofac Res
Country: England
NLM ID: 101144387
Publication information
Date of publication: 05 Jul 2024
History:
accepted: 18 Jun 2024
medline: 5 Jul 2024
pubmed: 5 Jul 2024
entrez: 5 Jul 2024
Status: ahead of print
Abstract
BACKGROUND
To establish an automatic soft-tissue analysis model, based on deep learning, that performs landmark detection and measurement calculation on orthodontic facial photographs and achieves a more comprehensive quantitative evaluation of soft tissues.
METHODS
A total of 578 frontal photographs and 450 lateral photographs of orthodontic patients were collected to construct the datasets. All images were manually annotated by two orthodontists with 43 frontal-image landmarks and 17 lateral-image landmarks. Automatic landmark detection models were established, consisting of a high-resolution network, a feature fusion module based on depthwise separable convolution, and a prediction model based on pixel shuffle. Ten measurements for frontal images and eight measurements for lateral images were defined. Separate test sets were used to evaluate the performance of each model. The mean radial error of the landmarks and the measurement errors were calculated and statistically analysed to evaluate reliability.
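The pixel-shuffle operation named in the prediction model rearranges blocks of channels into a higher-resolution spatial grid, which is how many landmark-heatmap heads upsample coarse features toward image resolution. The sketch below is a minimal NumPy illustration of that rearrangement only, not the authors' implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    Mirrors the sub-pixel ("pixel shuffle") layer commonly used to
    upsample coarse feature maps into full-resolution heatmaps.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # Split the channel axis into (C, r, r), interleave the two r axes
    # with the spatial axes, then merge them into the enlarged grid.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)        # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# A 4-channel 2x2 map becomes a single-channel 4x4 map when r = 2.
coarse = np.arange(16).reshape(4, 2, 2)
fine = pixel_shuffle(coarse, 2)
print(fine.shape)  # (1, 4, 4)
```

Each output pixel at position (h*r + i, w*r + j) is taken from input channel i*r + j at (h, w), so no interpolation weights are involved; the resolution gain comes purely from reorganizing learned channels.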
RESULTS
The mean radial error was 14.44 ± 17.20 pixels for landmarks in the frontal images and 13.48 ± 17.12 pixels for landmarks in the lateral images. There was no statistically significant difference between the model predictions and the manual annotation measurements, except for the mid facial-lower facial height index. A total of 14 measurements showed high consistency.
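The mean radial error reported above is the average Euclidean distance, in pixels, between each predicted landmark and its manual annotation, with the ± term giving the standard deviation. A minimal sketch of that metric follows; the function name and array layout are assumptions, not taken from the paper.

```python
import numpy as np

def mean_radial_error(pred: np.ndarray, gt: np.ndarray) -> tuple:
    """Mean and SD of per-landmark Euclidean distances in pixels.

    pred, gt: arrays of shape (N, 2) holding (x, y) landmark coordinates.
    """
    radial = np.linalg.norm(pred - gt, axis=-1)  # distance per landmark
    return float(radial.mean()), float(radial.std())

# Two landmarks: one off by a 3-4-5 displacement (5 px), one exact.
pred = np.array([[103.0, 204.0], [50.0, 60.0]])
gt = np.array([[100.0, 200.0], [50.0, 60.0]])
mre, sd = mean_radial_error(pred, gt)
print(f"MRE = {mre:.2f} ± {sd:.2f} px")  # MRE = 2.50 ± 2.50 px
```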
CONCLUSIONS
Based on deep learning, we established automatic soft-tissue analysis models for orthodontic facial photographs that can automatically detect 43 frontal-image landmarks and 17 lateral-image landmarks while performing comprehensive soft-tissue measurements. The models can assist orthodontists in efficient and accurate quantitative soft-tissue evaluation for clinical application.
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Grants
Agency: Beijing Natural Science Foundation
ID: L222024
Agency: Beijing Natural Science Foundation
ID: L232028
Agency: Beijing Hospitals Authority Clinical Medicine Development of special funding support
ID: ZLRK202330
Agency: Beijing Stomatological Hospital, Capital Medical University Young Scientist Program
ID: YSP 21-09-01
Copyright information
© 2024 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.