Differentiate cavernous hemangioma from schwannoma with artificial intelligence (AI).

Keywords: artificial intelligence (AI); differential diagnosis; multicenter

Journal

Annals of Translational Medicine
ISSN: 2305-5839
Abbreviated title: Ann Transl Med
Country: China
NLM ID: 101617978

Publication information

Publication date:
Jun 2020
History:
entrez: 2020-07-04
pubmed: 2020-07-04
medline: 2020-07-04
Status: ppublish

Abstract

BACKGROUND
Cavernous hemangioma and schwannoma are tumors that both occur in the orbit. Because the treatment strategies for these two tumors differ, it is necessary to distinguish between them before treatment begins. Magnetic resonance imaging (MRI) is typically used to differentiate the two tumor types; however, they present similar features on MRI, which makes differential diagnosis difficult. This study aimed to devise an artificial intelligence framework that automatically distinguishes cavernous hemangioma from schwannoma, thereby improving the accuracy of clinicians' diagnoses and enabling more effective treatment decisions.
METHODS
Material: We chose MRI images representing patients from diverse areas of China who had been referred to our center from more than 45 hospitals. All images were initially acquired on film, which we scanned into digital form and recut. In total, 11,489 images of cavernous hemangioma (from 33 hospitals) and 3,478 images of schwannoma (from 16 hospitals) were collected. Labeling: All images were labeled using standard anatomical knowledge and the pathological diagnosis. Training: Three types of models were trained in sequence (96 models in total), each incorporating a specific improvement. The first two groups were eye- and tumor-positioning models designed to narrow the identification scope, while the third group consisted of classification models trained to make the final diagnosis.
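
The abstract does not name the underlying networks, so the following is only a minimal sketch of how such a three-stage cascade could be assembled, assuming PyTorch/torchvision; the Faster R-CNN positioning models, the ResNet classifier, and the diagnose() helper are illustrative assumptions, not the authors' confirmed implementation.

import torch
from torchvision.models import resnet50
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stages 1-2: positioning models that progressively narrow the scope
# (model choices are assumptions, not taken from the paper).
eye_detector = fasterrcnn_resnet50_fpn(num_classes=2)    # background + eye
tumor_detector = fasterrcnn_resnet50_fpn(num_classes=2)  # background + tumor
# Stage 3: binary classifier (cavernous hemangioma vs. schwannoma).
classifier = resnet50(num_classes=2)

def crop_to_top_box(image, detections):
    # Crop a 3xHxW tensor to the highest-scoring detected box;
    # assumes the detector returned at least one box.
    x1, y1, x2, y2 = detections["boxes"][0].round().int().tolist()
    return image[:, y1:y2, x1:x2]

@torch.no_grad()
def diagnose(mri_slice):
    # mri_slice: float tensor of shape 3xHxW, intensities scaled to [0, 1].
    for model in (eye_detector, tumor_detector, classifier):
        model.eval()
    eye_region = crop_to_top_box(mri_slice, eye_detector([mri_slice])[0])
    tumor_patch = crop_to_top_box(eye_region, tumor_detector([eye_region])[0])
    patch = torch.nn.functional.interpolate(
        tumor_patch.unsqueeze(0), size=(224, 224), mode="bilinear")
    return ("cavernous hemangioma", "schwannoma")[classifier(patch).argmax(1).item()]

Cascading the detectors this way means the classifier only ever sees a small tumor-centered patch instead of the whole scanned film, which is the stated purpose of the first two model groups.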
RESULTS
First, internal four-fold cross-validation was conducted for all models. In the first group, the 32 eye-positioning models localized the position of the eyes with an average precision of 100%. In the second group, the 28 tumor-positioning models reached an average precision above 90%. In the third group, all 32 tumor classification models reached an accuracy of nearly 90%. Next, the 32 tumor classification models underwent external validation: the model for the transverse contrast-enhanced T1-weighted sequence reached an accuracy of 91.13% against the ground truth, while the remaining models were significantly less accurate.
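
As a sketch of the internal validation step, a four-fold split could be implemented as below. Grouping folds by patient (so all images of one patient stay in the same fold) and the train_fn/eval_fn callables are assumptions for illustration; the abstract does not specify how the folds were drawn.

import numpy as np
from sklearn.model_selection import GroupKFold

def four_fold_accuracy(images, labels, patient_ids, train_fn, eval_fn):
    # Average validation accuracy over a four-fold, patient-grouped split.
    scores = []
    folds = GroupKFold(n_splits=4).split(images, labels, groups=patient_ids)
    for train_idx, val_idx in folds:
        model = train_fn(images[train_idx], labels[train_idx])           # fit on three folds
        scores.append(eval_fn(model, images[val_idx], labels[val_idx]))  # score the held-out fold
    return float(np.mean(scores))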
CONCLUSIONS
The findings of this retrospective study show that an artificial intelligence framework can achieve high accuracy, sensitivity, and specificity in the automated differential diagnosis of cavernous hemangioma versus schwannoma in a real-world setting, helping doctors determine the appropriate treatment.

Identifiers

pubmed: 32617330
doi: 10.21037/atm.2020.03.150
pii: atm-08-11-710
pmc: PMC7327353

Publication types

Journal Article

Languages

eng

Pagination

710

Copyright information

2020 Annals of Translational Medicine. All rights reserved.

Conflict of interest statement

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/atm.2020.03.150). The series “Medical Artificial Intelligent Research” was commissioned by the editorial office without any funding or sponsorship. HL served as the unpaid Guest Editor of the series. The other authors have no other conflicts of interest to declare.

Authors

Shaowei Bi (S)

State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.

Rongxin Chen (R)

State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.

Kai Zhang (K)

State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.
School of Computer Science and Technology, Xidian University, Xi'an, China.

Yifan Xiang (Y)

State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.

Ruixin Wang (R)

State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.

Haotian Lin (H)

State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.
Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China.

Huasheng Yang (H)

State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.

MeSH classifications