Explicit reasoning over end-to-end neural architectures for visual question answering

Somak Aditya, Yezhou Yang, Chitta Baral

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Many vision and language tasks require commonsense reasoning beyond data-driven image and natural language processing. Here we adopt Visual Question Answering (VQA) as an example task, where a system is expected to answer a question in natural language about an image. Current state-of-the-art systems attempt to solve the task using deep neural architectures and achieve promising performance. However, the resulting systems are generally opaque, and they struggle to understand questions for which extra knowledge is required. In this paper, we present an explicit reasoning layer on top of a set of penultimate neural network-based systems. The reasoning layer enables reasoning over and answering questions where additional knowledge is required, and at the same time provides an interpretable interface to end users. Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based engine to reason over a basket of inputs: visual relations, the semantic parse of the question, and background ontological knowledge from word2vec and ConceptNet. Experimental analysis of the answers and the key evidential predicates generated on the VQA dataset validates our approach.
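
To make the PSL-style scoring described above concrete, the following minimal Python sketch illustrates the general idea. It is not the authors' engine: the predicates has_relation and word_similarity, their scores, the example question, and the single rule are all illustrative assumptions. A real system would obtain these scores from a visual-relation detector, a semantic parser, and word2vec/ConceptNet. The sketch shows how Lukasiewicz soft conjunction, the conjunction used in PSL, can combine a visual-relation confidence with an ontological similarity score to rank candidate answers, with the grounded rule instance serving as interpretable evidence.

# Minimal sketch of PSL-style answer scoring; NOT the authors' actual engine.
# All predicate scores below are hypothetical placeholders.

def lukasiewicz_and(a, b):
    """Soft conjunction used in Probabilistic Soft Logic: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

# Hypothetical grounded predicates for the question "What is the dog lying on?"
# has_relation(x, rel, y): confidence from a visual-relation detector.
has_relation = {("dog", "lying_on", "sofa"): 0.8,
                ("dog", "lying_on", "floor"): 0.4}

# word_similarity(y): relatedness between the question focus and a candidate
# answer, e.g. word2vec cosine similarity or a ConceptNet relatedness score.
word_similarity = {"sofa": 0.7, "floor": 0.6, "car": 0.1}

def score_answer(subject, relation, candidate):
    """Rule: has_relation(x, rel, y) AND word_similarity(y) -> answer(y).
    The truth value of the rule body becomes the candidate's score."""
    vis = has_relation.get((subject, relation, candidate), 0.0)
    sim = word_similarity.get(candidate, 0.0)
    return lukasiewicz_and(vis, sim)

candidates = ["sofa", "floor", "car"]
ranked = sorted(candidates,
                key=lambda c: score_answer("dog", "lying_on", c),
                reverse=True)
for c in ranked:
    print(c, round(score_answer("dog", "lying_on", c), 2))
# Output: sofa 0.5, floor 0.0, car 0.0 -> "sofa" wins, and the satisfied
# grounded rule instance is the evidential predicate shown to the end user.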

Original language: English (US)
Title of host publication: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Publisher: AAAI Press
Pages: 629-637
Number of pages: 9
ISBN (Electronic): 9781577358008
State: Published - Jan 1 2018
Event: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 - New Orleans, United States
Duration: Feb 2 2018 - Feb 7 2018

Other

Other: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Country: United States
City: New Orleans
Period: 2/2/18 - 2/7/18


ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Aditya, S., Yang, Y., & Baral, C. (2018). Explicit reasoning over end-to-end neural architectures for visual question answering. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 629-637). AAAI Press.
