Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation.
MeSH terms
Aged
Contrast Media
Deep Learning
Diagnosis, Differential
Female
Humans
Male
Middle Aged
Pancreas / diagnostic imaging
Pancreatic Neoplasms / diagnostic imaging
Racial Groups
Radiographic Image Enhancement / methods
Radiographic Image Interpretation, Computer-Assisted / methods
Reproducibility of Results
Retrospective Studies
Sensitivity and Specificity
Taiwan
Tomography, X-Ray Computed / methods
Journal
The Lancet. Digital health
ISSN: 2589-7500
Abbreviated title: Lancet Digit Health
Country: England
ID NLM: 101751302
Publication information
Publication date: June 2020
History:
received: Jan 20, 2020
revised: March 24, 2020
accepted: March 25, 2020
entrez: Dec 17, 2020
pubmed: Dec 18, 2020
medline: Jan 28, 2021
Status: ppublish
Abstract
BACKGROUND
The diagnostic performance of CT for pancreatic cancer is interpreter-dependent, and approximately 40% of tumours smaller than 2 cm evade detection. Convolutional neural networks (CNNs) have shown promise in image analysis, but the networks' potential for pancreatic cancer detection and diagnosis is unclear. We aimed to investigate whether CNN could distinguish individuals with and without pancreatic cancer on CT, compared with radiologist interpretation.
METHODS
In this retrospective, diagnostic study, contrast-enhanced CT images of 370 patients with pancreatic cancer and 320 controls from a Taiwanese centre were manually labelled and randomly divided for training and validation (295 patients with pancreatic cancer and 256 controls) and testing (75 patients with pancreatic cancer and 64 controls; local test set 1). Images were preprocessed into patches, and a CNN was trained to classify patches as cancerous or non-cancerous. Individuals were classified as with or without pancreatic cancer on the basis of the proportion of patches diagnosed as cancerous by the CNN, using a cutoff determined using the training and validation set. The CNN was further tested with another local test set (101 patients with pancreatic cancers and 88 controls; local test set 2) and a US dataset (281 pancreatic cancers and 82 controls). Radiologist reports of pancreatic cancer images in the local test sets were retrieved for comparison.
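The patient-level decision rule described in METHODS (classify a patient as having pancreatic cancer when the proportion of CT patches the CNN flags as cancerous exceeds a cutoff tuned on the training and validation set) can be sketched as follows. The function name, the patch labels, and the cutoff value are illustrative assumptions, not the authors' implementation.

```python
def classify_patient(patch_predictions, cutoff):
    """Aggregate per-patch CNN labels into a patient-level diagnosis.

    patch_predictions: iterable of 0/1 labels (1 = patch flagged cancerous).
    cutoff: fraction threshold tuned on the training/validation set.
    Returns True if the patient is classified as having pancreatic cancer.
    """
    preds = list(patch_predictions)
    cancerous_fraction = sum(preds) / len(preds)
    return cancerous_fraction > cutoff

# Example: 6 of 20 patches flagged cancerous -> fraction 0.30 > 0.25
print(classify_patient([1] * 6 + [0] * 14, cutoff=0.25))  # True
```

The aggregation step is what turns a patch classifier into a per-patient screen; the actual cutoff in the study was determined empirically on the training and validation set.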
FINDINGS
Between Jan 1, 2006, and Dec 31, 2018, we obtained CT images. In local test set 1, CNN-based analysis had a sensitivity of 0·973, specificity of 1·000, and accuracy of 0·986 (area under the curve [AUC] 0·997 [95% CI 0·992-1·000]). In local test set 2, CNN-based analysis had a sensitivity of 0·990, specificity of 0·989, and accuracy of 0·989 (AUC 0·999 [0·998-1·000]). In the US test set, CNN-based analysis had a sensitivity of 0·790, specificity of 0·976, and accuracy of 0·832 (AUC 0·920 [0·891-0·948]). CNN-based analysis achieved higher sensitivity than radiologists did (0·983 vs 0·929, difference 0·054 [95% CI 0·011-0·098]; p=0·014) in the two local test sets combined. CNN missed three (1·7%) of 176 pancreatic cancers (1·1-1·2 cm). Radiologists missed 12 (7%) of 168 pancreatic cancers (1·0-3·3 cm), of which 11 (92%) were correctly classified using CNN. The sensitivity of CNN for tumours smaller than 2 cm was 92·1% in the local test sets and 63·1% in the US test set.
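The headline metrics follow directly from confusion-matrix counts. As a consistency check, the sketch below back-calculates local test set 1 (75 cancers, 64 controls per METHODS): a sensitivity of 0·973 corresponds to 73 true positives, and a specificity of 1·000 to 64 true negatives. The TP/FN/TN/FP counts are inferred from the reported rates, not taken from the paper.

```python
def metrics(tp, fn, tn, fp):
    """Standard diagnostic-accuracy metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate among cancers
    specificity = tn / (tn + fp)          # true-negative rate among controls
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Local test set 1: 75 cancers (73 detected), 64 controls (all correct)
sens, spec, acc = metrics(tp=73, fn=2, tn=64, fp=0)
print(round(sens, 3), round(spec, 3), round(acc, 3))  # 0.973 1.0 0.986
```

These reproduce the reported sensitivity 0·973, specificity 1·000, and accuracy 0·986 for local test set 1.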
INTERPRETATION
CNN could accurately distinguish pancreatic cancer on CT, with acceptable generalisability to images of patients from various races and ethnicities. CNN could supplement radiologist interpretation.
FUNDING
Taiwan Ministry of Science and Technology.
Identifiers
pubmed: 33328124
pii: S2589-7500(20)30078-9
doi: 10.1016/S2589-7500(20)30078-9
Chemical substances
Contrast Media (registry number: 0)
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Languages
eng
Citation subsets
IM
Pagination
e303-e313
Comments and corrections
Type: CommentIn
Type: CommentIn
Type: CommentIn
Copyright information
Copyright © 2020 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license. All rights reserved.