September 19, 2019
Visual Dialog is a multimodal task of answering a sequence of questions grounded in an image, using the conversation history as context. It entails challenges in vision, language, reasoning, and grounding. However, studying these subtasks in isolation on large, real datasets is infeasible, as it requires prohibitively expensive complete annotation of the ‘state’ of all images and dialogs. We develop CLEVR-Dialog, a large diagnostic dataset for studying multi-round reasoning in visual dialog. Specifically, we construct a dialog grammar that is grounded in the scene graphs of the images from the CLEVR dataset. This combination results in a dataset where all aspects of the visual dialog are fully annotated. In total, CLEVR-Dialog contains 5 instances of 10-round dialogs for about 85k CLEVR images, totaling 4.25M question-answer pairs. We use CLEVR-Dialog to benchmark the performance of standard visual dialog models, in particular on visual coreference resolution as a function of the coreference distance. This is the first analysis of its kind for visual dialog models, and it would not have been possible without this dataset. We hope the findings from CLEVR-Dialog will help inform the development of future models for visual dialog. Our code and dataset are publicly available.
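For concreteness, the sketch below illustrates how one CLEVR-Dialog instance might be laid out and checks the reported totals. The field names (`image_filename`, `dialogs`, `question`, `answer`) and the example rounds are assumptions for illustration only, not the dataset's documented schema.

```python
# A minimal sketch of one CLEVR-Dialog instance: one CLEVR image paired
# with 5 dialogs, each dialog being 10 question-answer rounds grounded in
# the image's scene graph. Field names here are hypothetical.
example = {
    "image_filename": "CLEVR_train_000000.png",  # hypothetical field name
    "dialogs": [
        [  # one dialog: 10 rounds of (question, answer)
            {"question": "How many objects are in the scene?", "answer": "5"},
            {"question": "What color is the large sphere?", "answer": "red"},
            # ... 8 more rounds per dialog
        ],
        # ... 4 more dialogs per image
    ],
}

# Sanity-check the reported dataset size:
# 5 dialogs/image * 10 rounds/dialog * ~85,000 images = 4.25M QA pairs.
num_dialogs, num_rounds, num_images = 5, 10, 85_000
print(num_dialogs * num_rounds * num_images)  # 4250000
```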
Publisher: NAACL