A Benchmark for Compositional Visual Reasoning.


Journal

Advances in neural information processing systems
ISSN: 1049-5258
Abbreviated title: Adv Neural Inf Process Syst
Country: United States
NLM ID: 9607483

Publication information

Publication date:
Dec 2022
History:
medline: 3 Aug 2023
pubmed: 3 Aug 2023
entrez: 3 Aug 2023
Status: ppublish

Abstract

A fundamental component of human vision is our ability to parse complex visual scenes and judge the relations between their constituent objects. AI benchmarks for visual reasoning have driven rapid progress in recent years, with state-of-the-art systems now reaching human accuracy on some of these benchmarks. Yet, there remains a major gap between humans and AI systems in terms of the sample efficiency with which they learn new visual reasoning tasks. Humans' remarkable efficiency at learning has been at least partially attributed to their ability to harness compositionality, allowing them to efficiently take advantage of previously gained knowledge when learning new tasks. Here, we introduce a novel visual reasoning benchmark, Compositional Visual Relations (CVR), to drive progress towards the development of more data-efficient learning algorithms. We take inspiration from fluid intelligence and non-verbal reasoning tests and describe a novel method for creating compositions of abstract rules and for generating image datasets corresponding to these rules at scale. Our proposed benchmark includes measures of sample efficiency, generalization, compositionality, and transfer across task rules. We systematically evaluate modern neural architectures and find that convolutional architectures surpass transformer-based architectures across all performance measures in most data regimes. However, all computational models are much less data-efficient than humans, even after learning informative visual representations using self-supervision. Overall, we hope our challenge will spur interest in developing neural architectures that can learn to harness compositionality for more efficient learning.
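The abstract only names the generative procedure, so the following is a minimal illustrative sketch in Python of the general idea, assuming an odd-one-out problem format in which three images follow a composed rule and one violates it. All function names, scene attributes, and rules below are hypothetical placeholders rather than the benchmark's released code; symbolic scenes stand in for rendered images.

import random

# A scene is a list of objects; each object is a dictionary of attributes.
def random_object():
    return {
        "x": random.random(),
        "y": random.random(),
        "size": random.choice([1, 2, 3]),
        "shape": random.choice(["circle", "square", "triangle"]),
    }

def random_scene(n_objects=4):
    return [random_object() for _ in range(n_objects)]

# Elementary rules are predicates over scenes.
def same_shape(scene):
    return len({obj["shape"] for obj in scene}) == 1

def same_size(scene):
    return len({obj["size"] for obj in scene}) == 1

# Composition: a composite rule holds when every constituent rule holds.
def compose(*rules):
    return lambda scene: all(rule(scene) for rule in rules)

def sample_scene(rule, holds=True, max_tries=100000):
    # Rejection-sample random scenes until the rule is satisfied (or violated).
    for _ in range(max_tries):
        scene = random_scene()
        if rule(scene) == holds:
            return scene
    raise RuntimeError("could not sample a scene with the requested property")

def make_problem(rule):
    # Three scenes follow the rule; a fourth violates it (the odd one out).
    scenes = [sample_scene(rule, holds=True) for _ in range(3)]
    scenes.append(sample_scene(rule, holds=False))
    order = list(range(4))
    random.shuffle(order)
    return [scenes[i] for i in order], order.index(3)

scenes, odd_index = make_problem(compose(same_shape, same_size))
print("odd one out is image", odd_index)

Rejection sampling is the simplest way to realize such rule constraints; a generator meant to work at scale would instead construct rule-satisfying scenes directly.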

Identifiers

pubmed: 37534101
pmc: PMC10396074
mid: NIHMS1856061

Publication types

Journal Article

Languages

eng

Pagination

29776-29788

Grants

Agency: NIH HHS
ID: S10 OD025181
Country: United States


Authors

Aimen Zerroug (A)

Artificial and Natural Intelligence Toulouse Institute, Université de Toulouse, France.
Carney Institute for Brain Science, Dept. of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI 02912.
Centre de Recherche Cerveau et Cognition, CNRS, Université de Toulouse, France.

Mohit Vaishnav (M)

Artificial and Natural Intelligence Toulouse Institute, Université de Toulouse, France.
Carney Institute for Brain Science, Dept. of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI 02912.
Centre de Recherche Cerveau et Cognition, CNRS, Université de Toulouse, France.

Julien Colin (J)

Carney Institute for Brain Science, Dept. of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI 02912.

Sebastian Musslick (S)

Carney Institute for Brain Science, Dept. of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI 02912.

Thomas Serre (T)

Artificial and Natural Intelligence Toulouse Institute, Université de Toulouse, France.
Carney Institute for Brain Science, Dept. of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI 02912.

MeSH classifications