Stimulus-guided adaptive transformer network for retinal blood vessel segmentation in fundus images.
Keywords
Receptive field
Retinal blood vessel segmentation
Stimulus-guided adaptive feature fusion
Stimulus-guided adaptive pooling transformer
Visual cortex
Journal
Medical image analysis
ISSN: 1361-8423
Abbreviated title: Med Image Anal
Country: Netherlands
NLM ID: 9713490
Publication information
Publication date: 10 2023
History:
received: 2023-02-22
revised: 2023-06-15
accepted: 2023-08-07
medline: 2023-09-08
pubmed: 2023-08-21
entrez: 2023-08-20
Status: ppublish
Abstract
Automated retinal blood vessel segmentation in fundus images provides important evidence to ophthalmologists for coping with prevalent ocular diseases in an efficient and non-invasive way. However, segmenting blood vessels in fundus images is a challenging task, owing to the high variability in the scale and appearance of blood vessels and the high similarity in visual features between lesions and the retinal vasculature. Inspired by the way the visual cortex adaptively responds to the type of stimulus, we propose a Stimulus-Guided Adaptive Transformer Network (SGAT-Net) for accurate retinal blood vessel segmentation. It entails a Stimulus-Guided Adaptive Module (SGA-Module) that extracts local-global compound features based on an inductive bias and a self-attention mechanism. Alongside a lightweight residual encoder (ResEncoder) structure that captures the relevant details of appearance, a Stimulus-Guided Adaptive Pooling Transformer (SGAP-Former) is introduced to reweight max and average pooling, enriching the contextual embedding representation while suppressing redundant information. Moreover, a Stimulus-Guided Adaptive Feature Fusion (SGAFF) module is designed to adaptively emphasize local details and global context and fuse them in the latent space, adjusting the receptive field (RF) to the task. The evaluation is carried out on the largest fundus image dataset (FIVES) and three popular retinal image datasets (DRIVE, STARE, CHASEDB1). Experimental results show that the proposed method achieves competitive performance over other existing methods, with a clear advantage in avoiding errors that commonly occur in areas with highly similar visual features. The source code is publicly available at: https://github.com/Gins-07/SGAT.
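To make the two adaptive mechanisms the abstract describes more concrete, here is a minimal PyTorch-style sketch. It is not the authors' code (the official implementation is at the GitHub link above); the module names (AdaptivePooling, AdaptiveFusion) and the specific sigmoid-gate designs are illustrative assumptions. It shows (1) blending max and average pooling with a learned, input-conditioned weight, and (2) gated fusion of local-detail and global-context features.

import torch
import torch.nn as nn


class AdaptivePooling(nn.Module):
    """Illustrative sketch: blend max and average pooling with a
    per-channel gate learned from the input (hypothetical design)."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # squeeze spatial dims -> (B, C, 1, 1)
            nn.Conv2d(channels, channels, 1),  # per-channel weighting
            nn.Sigmoid(),                      # gate in [0, 1]
        )
        self.max_pool = nn.MaxPool2d(2)
        self.avg_pool = nn.AvgPool2d(2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(x)  # (B, C, 1, 1), broadcast over spatial dims
        return w * self.max_pool(x) + (1 - w) * self.avg_pool(x)


class AdaptiveFusion(nn.Module):
    """Illustrative sketch: fuse local-detail and global-context
    features with a spatial gate (hypothetical design)."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=1),  # joint gate from both branches
            nn.Sigmoid(),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([local_feat, global_feat], dim=1))  # (B, 1, H, W)
        return g * local_feat + (1 - g) * global_feat


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    pooled = AdaptivePooling(64)(x)   # -> (1, 64, 16, 16)
    fused = AdaptiveFusion(64)(x, x)  # -> (1, 64, 32, 32)
    print(pooled.shape, fused.shape)

In both sketches a sigmoid gate lets the network interpolate between the two pooled (or local/global) responses per input, which is one plausible reading of "reweighting" and "adaptive fusion" in the abstract; the paper's actual gating may differ.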
Identifiers
pubmed: 37598606
pii: S1361-8415(23)00189-5
doi: 10.1016/j.media.2023.102929
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Languages
eng
Citation subsets
IM
Pagination
102929
Copyright information
Copyright © 2023 The Authors. Published by Elsevier B.V. All rights reserved.
Conflict of interest statement
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.