LUNet: deep learning for the segmentation of arterioles and venules in high resolution fundus images.

Keywords: DFIs; eye vasculature; deep learning; microvasculature; retinal fundus images; segmentation

Journal

Physiological measurement
ISSN: 1361-6579
Abbreviated title: Physiol Meas
Country: England
NLM ID: 9306921

Publication information

Publication date:
10 Apr 2024
History:
medline: 11 Apr 2024
pubmed: 11 Apr 2024
entrez: 10 Apr 2024
Status: ahead of print

Abstract

Objective: This study aims to automate the segmentation of retinal arterioles and venules (A/V) in digital fundus images (DFIs). Changes in the spatial distribution of the retinal microvasculature are indicative of cardiovascular disease, positioning the eyes as windows to cardiovascular health.

Approach: We utilized active learning to create a new DFI dataset with 240 crowd-sourced manual A/V segmentations performed by 15 medical students and reviewed by an ophthalmologist. We then developed LUNet, a novel deep learning architecture optimized for high-resolution A/V segmentation. The LUNet model features a double dilated convolutional block to widen the receptive field and reduce parameter count, alongside a high-resolution tail to refine segmentation details. A custom loss function was designed to prioritize the continuity of blood vessel segmentation.
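The "double dilated convolutional block" mentioned above relies on the fact that dilating a convolution kernel widens its receptive field without adding parameters. The following is an illustrative sketch of that principle only, not the authors' LUNet implementation; the function name `dilated_conv2d` and all shapes are assumptions for demonstration.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation=1):
    # "Valid" 2D cross-correlation with a dilated kernel: the k x k kernel
    # samples the image on a grid with spacing `dilation`, so its effective
    # receptive field grows to (k - 1) * dilation + 1 pixels per side while
    # the parameter count (k * k weights) stays unchanged.
    k = kernel.shape[0]
    span = (k - 1) * dilation + 1
    h, w = img.shape
    out = np.empty((h - span + 1, w - span + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + span:dilation, j:j + span:dilation]
            out[i, j] = float((patch * kernel).sum())
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.ones((3, 3))  # 9 weights in both cases below
dense = dilated_conv2d(img, kernel, dilation=1)    # 3x3 field -> 6x6 output
dilated = dilated_conv2d(img, kernel, dilation=2)  # 5x5 field -> 4x4 output
print(dense.shape, dilated.shape)  # (6, 6) (4, 4)
```

With dilation 2, the same nine weights cover a 5x5 neighborhood, which is the parameter-efficiency argument the Approach paragraph makes for high-resolution vessel segmentation.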

Main Results: LUNet significantly outperformed three benchmark A/V segmentation algorithms both on a local test set and on four external test sets that simulated variations in ethnicity, comorbidities and annotators.

Significance: The release of the new datasets and the LUNet model (URL upon publication) provides a valuable resource for the advancement of retinal microvasculature analysis. The improvements in A/V segmentation accuracy highlight LUNet's potential as a robust tool for diagnosing and understanding cardiovascular diseases through retinal imaging.

Identifiers

pubmed: 38599224
doi: 10.1088/1361-6579/ad3d28

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Copyright information

Creative Commons Attribution license.

Authors

Jonathan Fhima (J)

Technion Israel Institute of Technology, Technion City, Haifa, 3200003, Israel.

Jan Van Eijgen (J)

KU Leuven, Leuven, Flanders, 3000, Belgium.

Hana Kulenovic (H)

KU Leuven, Leuven, Flanders, 3000, Belgium.

Valerie Debeuf (V)

KU Leuven, Leuven, Flanders, 3000, Belgium.

Marie Vangilbergen (M)

KU Leuven, Leuven, Flanders, 3000, Belgium.

Marie-Isaline Billen (MI)

KU Leuven, Leuven, Flanders, 3000, Belgium.

Heloise Brackenier (H)

KU Leuven, Leuven, Flanders, 3000, Belgium.

Moti Freiman (M)

Technion Israel Institute of Technology, Technion City, Haifa, 3200003, Israel.

Ingeborg Stalmans (I)

KU Leuven, Leuven, Flanders, 3000, Belgium.

Joachim A Behar (JA)

Biomedical Engineering Faculty, Technion Israel Institute of Technology, Technion City, Haifa, 32000, Israel.

MeSH classifications