A Deep Ordinal Distortion Estimation Approach for Distortion Rectification.


Journal

IEEE Transactions on Image Processing: a publication of the IEEE Signal Processing Society
ISSN: 1941-0042
Abbreviated title: IEEE Trans Image Process
Country: United States
NLM ID: 9886191

Publication information

Publication date:
2021
History:
pubmed: 2021-03-02
medline: 2021-03-02
entrez: 2021-03-01
Status: ppublish

Abstract

Radial distortion is widespread in images captured by popular wide-angle and fisheye cameras. Despite the long history of distortion rectification, accurately estimating the distortion parameters from a single distorted image remains challenging. The main reason is that these parameters are only implicitly related to image features, which hinders networks from fully learning the distortion information. In this work, we propose a novel distortion rectification approach that obtains more accurate parameters with higher efficiency. Our key insight is that distortion rectification can be cast as the problem of learning an ordinal distortion from a single distorted image. To solve this problem, we design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution. In contrast to the implicit distortion parameters, the proposed ordinal distortion has a more explicit relationship with image features and significantly boosts the distortion perception of neural networks. Given the redundancy of distortion information, our approach uses only a patch of the distorted image for ordinal distortion estimation, showing promising applications in efficient distortion rectification. In the distortion rectification field, we are the first to unify the heterogeneous distortion parameters into a learning-friendly intermediate representation through ordinal distortion, bridging the gap between image features and distortion rectification. The experimental results demonstrate that our approach outperforms state-of-the-art methods by a significant margin, with an approximately 23% improvement on quantitative evaluation while achieving the best performance on visual appearance.
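As background on the radial distortion the abstract refers to (this is the standard polynomial camera model from the computer-vision literature, not the paper's own method or code), a minimal sketch of how distortion coefficients map undistorted to distorted coordinates; the coefficient values are illustrative assumptions:

```python
def apply_radial_distortion(x, y, k1, k2):
    """Standard polynomial radial distortion model:
    x_d = x * (1 + k1*r^2 + k2*r^4), likewise for y,
    with (x, y) in normalized image coordinates
    (origin at the principal point)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# The image center is a fixed point; off-center points shift radially.
# k1 < 0 models barrel distortion typical of wide-angle lenses.
print(apply_radial_distortion(0.0, 0.0, -0.2, 0.05))  # (0.0, 0.0)
print(apply_radial_distortion(0.5, 0.5, -0.2, 0.05))  # pulled toward the center
```

Rectification methods like the one described above try to recover coefficients such as k1 and k2 (or an intermediate representation of them) from a single distorted image, then invert this mapping.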

Identifiers

pubmed: 33646951
doi: 10.1109/TIP.2021.3061283

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

3362-3375

Authors

MeSH classifications