Non-local degradation modeling for spatially adaptive single image super-resolution.

Keywords: Blind super-resolution; Contrastive learning; Non-local degradation modeling

Journal

Neural networks : the official journal of the International Neural Network Society
ISSN: 1879-2782
Abbreviated title: Neural Netw
Country: United States
NLM ID: 8805018

Publication information

Publication date:
10 Apr 2024
History:
received: 25 Oct 2023
revised: 10 Mar 2024
accepted: 05 Apr 2024
medline: 17 Apr 2024
pubmed: 17 Apr 2024
entrez: 16 Apr 2024
Status: ahead of print

Abstract

Existing methods for single image super-resolution (SISR) model the blur kernel as spatially invariant across the entire image, and are susceptible to the adverse effects of textureless patches. To achieve improved results, adaptive estimation of the degradation kernel is necessary. We explore the synergy of joint global and local degradation modeling for spatially adaptive blind SISR. Our model, named spatially adaptive network for blind super-resolution (SASR), employs a simple encoder to estimate global degradation representations and a decoder to extract local degradation. These two representations are fused with a cross-attention mechanism and applied using spatially adaptive filtering to enhance the local image detail. Specifically, SASR contains two novel features: (1) a non-local degradation modeling with contrastive learning to learn global and local degradation representations, and (2) a non-local spatially adaptive filtering module (SAFM) that incorporates the global degradation and spatial-detail factors to preserve and enhance local details. We demonstrate that SASR can efficiently estimate degradation representations and handle multiple types of degradation. The local representations avoid the detrimental effect of estimating the entire super-resolved image with only one kernel through locally adaptive adjustments. Extensive experiments are performed to quantitatively and qualitatively demonstrate that SASR not only performs favorably for degradation estimation but also leads to state-of-the-art blind SISR performance when compared to alternative approaches.
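The pipeline described in the abstract (a global degradation representation, a per-pixel local degradation map, cross-attention fusion, and spatially adaptive filtering) can be sketched in NumPy. This is an illustrative toy, not the paper's SASR/SAFM implementation: all dimensions, weight matrices, the use of a few global "degradation tokens", and the residual fusion form are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions (hypothetical, chosen only for this sketch).
H, W, d = 8, 8, 16   # feature-map size and channel dimension
m = 4                # number of global degradation tokens (assumption)
k = 3                # per-pixel filter size

# Stand-ins for the encoder/decoder outputs the abstract describes:
# a small set of global degradation tokens and a per-pixel local map.
g = rng.standard_normal((m, d))          # global degradation representation
local = rng.standard_normal((H * W, d))  # local degradation, flattened

# Cross-attention: each local feature queries the global tokens.
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
Q = local @ Wq                                  # (HW, d)
K = g @ Wk                                      # (m, d)
V = g @ Wv                                      # (m, d)
attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)   # (HW, m), rows sum to 1
fused = local + attn @ V                        # residual fusion (assumption)

# Spatially adaptive filtering: predict a normalized k*k filter per pixel
# from the fused representation, then apply it to a feature channel.
Wf = rng.standard_normal((d, k * k)) / np.sqrt(d)
kernels = softmax(fused @ Wf, axis=-1).reshape(H, W, k, k)

feat = rng.standard_normal((H, W))
padded = np.pad(feat, k // 2)                   # zero padding
out = np.zeros_like(feat)
for i in range(H):
    for j in range(W):
        patch = padded[i:i + k, j:j + k]
        out[i, j] = (patch * kernels[i, j]).sum()
```

The key point the sketch captures is that every output pixel is filtered with its own kernel derived from both global and local degradation cues, rather than one blur kernel shared across the whole image.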

Identifiers

pubmed: 38626619
pii: S0893-6080(24)00217-X
doi: 10.1016/j.neunet.2024.106293

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

106293

Copyright information

Copyright © 2024 Elsevier Ltd. All rights reserved.

Conflict of interest statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Authors

Qianyu Zhang (Q)

School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China. Electronic address: qyzhang@hdu.edu.cn.

Bolun Zheng (B)

School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China. Electronic address: blzheng@hdu.edu.cn.

Zongpeng Li (Z)

School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China. Electronic address: zongpeng@tsinghua.edu.cn.

Yu Liu (Y)

Department of Electronic Engineering, Tsinghua University, Beijing 100084, China. Electronic address: liuyu77360132@126.com.

Zunjie Zhu (Z)

Lishui Institute of Hangzhou Dianzi University, China; School of Communication Engineering, Hangzhou Dianzi University, Hangzhou 310018, China. Electronic address: zunjiezhu@hdu.edu.cn.

Gregory Slabaugh (G)

Digital Environment Research Institute (DERI), Queen Mary University of London, London E1 4NS, UK. Electronic address: g.slabaugh@qmul.ac.uk.

Shanxin Yuan (S)

Digital Environment Research Institute (DERI), Queen Mary University of London, London E1 4NS, UK. Electronic address: shanxinyuan@gmail.com.

MeSH classifications