Semantic Segmentation of Smartphone Wound Images: Comparative Analysis of AHRF and CNN-Based Approaches.

Keywords

Associative Hierarchical Random Fields; Contrast Limited Adaptive Histogram Equalization; Convolutional Neural Network; DeepLabV3; FCN; U-Net; wound image analysis; chronic wounds; semantic segmentation

Journal

IEEE access : practical innovations, open solutions
ISSN: 2169-3536
Abbreviated title: IEEE Access
Country: United States
NLM ID: 101639462

Publication information

Publication date:
2020
History:
entrez: 2020-11-30
pubmed: 2020-12-01
medline: 2020-12-01
Status: ppublish

Abstract

Smartphone wound image analysis has recently emerged as a viable way to assess healing progress and provide actionable feedback to patients and caregivers between hospital appointments. Segmentation is a key image analysis step, after which attributes of the wound segment (e.g., wound area and tissue composition) can be analyzed. The Associative Hierarchical Random Field (AHRF) formulates image segmentation as a graph optimization problem: handcrafted features are extracted and then classified using machine learning classifiers. More recently, deep learning approaches have emerged and demonstrated superior performance on a wide range of image analysis tasks. FCN, U-Net, and DeepLabV3 are Convolutional Neural Networks used for semantic segmentation. While each of these methods has shown promising results in separate experiments, no prior work has comprehensively and systematically compared the approaches on the same large wound image dataset, or, more generally, compared deep learning vs. non-deep learning wound image segmentation approaches. In this paper, we compare the segmentation performance of AHRF and CNN approaches (FCN, U-Net, DeepLabV3) using various metrics, including segmentation accuracy (Dice score), inference time, amount of training data required, and performance on diverse wound sizes and tissue types. Improvements possible using various image pre- and post-processing techniques are also explored. As access to adequate medical images/data is a common constraint, we explore the sensitivity of the approaches to the size of the wound dataset. We found that for small datasets (< 300 images), AHRF is more accurate than U-Net but not as accurate as FCN and DeepLabV3; AHRF is also over 1000x slower. For larger datasets (> 300 images), AHRF saturates quickly, and all CNN approaches (FCN, U-Net, and DeepLabV3) are significantly more accurate than AHRF.
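The Dice score used above as the segmentation accuracy metric can be sketched as follows. This is a minimal NumPy illustration of the standard Dice coefficient on binary masks, not code from the paper; the function name and example masks are illustrative.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks (1 = wound pixel).

    Dice = 2 * |pred & truth| / (|pred| + |truth|); 1.0 means
    perfect overlap, 0.0 means no overlap.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

# Example: predicted mask has 3 pixels, ground truth has 3 pixels,
# and they overlap on 2 pixels, so Dice = 2*2 / (3+3) = 0.667.
pred = np.zeros((4, 4), dtype=np.uint8)
pred[1, 1] = pred[2, 1] = pred[1, 2] = 1
truth = np.zeros((4, 4), dtype=np.uint8)
truth[1, 1] = truth[2, 1] = truth[3, 3] = 1
print(round(dice_score(pred, truth), 3))  # → 0.667
```

For multi-class wound tissue segmentation, the same formula is typically applied per class and then averaged.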

Identifiers

pubmed: 33251080
doi: 10.1109/access.2020.3014175
pmc: PMC7695230
mid: NIHMS1637061

Publication types

Journal Article

Languages

eng

Pagination

181590-181604

Grants

Agency: NIBIB NIH HHS
ID: R01 EB025801
Country: United States

References

Comput Intell Neurosci. 2018 May 31;2018:4149103
pubmed: 29955227
Wound Repair Regen. 2009 Nov-Dec;17(6):763-71
pubmed: 19903300
Ostomy Wound Manage. 2000 Apr;46(4):20-6, 28-30
pubmed: 10788924
IEEE J Biomed Health Inform. 2019 Jul;23(4):1730-1741
pubmed: 30188841
IEEE Trans Biomed Eng. 2015 Feb;62(2):477-88
pubmed: 25248175
Spinal Cord. 2018 Apr;56(4):372-381
pubmed: 29497177
Diabetes Educ. 2018 Feb;44(1):35-50
pubmed: 29346744
Biomed Res Int. 2014;2014:851582
pubmed: 25114925
IEEE Trans Pattern Anal Mach Intell. 2018 Apr;40(4):834-848
pubmed: 28463186
Int Wound J. 2018 Jun;15(3):460-472
pubmed: 29334176
Diabetes Care. 2003 Jun;26(6):1879-82
pubmed: 12766127
IEEE Trans Pattern Anal Mach Intell. 2014 Jun;36(6):1056-77
pubmed: 26353271
IEEE Trans Biomed Eng. 2017 Sep;64(9):2098-2109
pubmed: 27893380

Authors

Ameya Wagh (A)

Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609.

Shubham Jain (S)

Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609.

Apratim Mukherjee (A)

Computer Science Department, Manipal Institute of Technology, Manipal, Karnataka, India, 576104.

Emmanuel Agu (E)

Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609.

Peder Pedersen (P)

Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609.

Diane Strong (D)

Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609.

Bengisu Tulu (B)

Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609.

Clifford Lindsay (C)

Radiology Department, University of Massachusetts Medical School, Worcester, MA, USA, 01655.

Ziyang Liu (Z)

Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609.

MeSH classifications