DVD: A Diagnostic Dataset for Multi-step Reasoning in Video Grounded Dialogue

October 1, 2021


A video-grounded dialogue system is required to understand both dialogue, which contains semantic dependencies from turn to turn, and video, which contains visual cues of spatial and temporal scene variations. Building such dialogue systems is a challenging problem, involving various reasoning types on both visual and language inputs. Existing benchmarks do not have enough annotations to thoroughly analyze dialogue systems and understand their capabilities and limitations in isolation. These benchmarks are also not explicitly designed to minimize biases that models can exploit without actual reasoning. To address these limitations, in this paper, we present DVD, a Diagnostic Dataset for Video-grounded Dialogues. The dataset is designed to contain minimal biases and has detailed annotations for the different types of reasoning over the spatio-temporal space of video. Dialogues are synthesized over multiple question turns, each of which is injected with a set of cross-turn semantic relationships. We use DVD to analyze existing approaches, providing interesting insights into their abilities and limitations. In total, DVD is built from 11k CATER synthetic videos and contains 10 instances of 10-round dialogues for each video, resulting in more than 100k dialogues and 1M question-answer pairs. Our code and dataset are publicly available.
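The reported scale follows directly from the construction: 10 dialogue instances per video, each with 10 question rounds. A minimal sketch of the arithmetic (the variable names here are illustrative, not from the dataset's release):

```python
# Dataset scale implied by the construction described in the abstract:
# 11k CATER videos, 10 dialogue instances per video, 10 rounds per dialogue.
num_videos = 11_000
dialogues_per_video = 10
rounds_per_dialogue = 10

num_dialogues = num_videos * dialogues_per_video      # 110,000 -> "more than 100k dialogues"
num_qa_pairs = num_dialogues * rounds_per_dialogue    # 1,100,000 -> "1M question-answer pairs"

print(num_dialogues, num_qa_pairs)  # 110000 1100000
```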



Written by

Hung Le

Chinnadhurai Sankar

Seungwhan Moon

Ahmad Beirami

Alborz Geramifard

Satwik Kottur


ACL 2021

Research Topics

Computer Vision

Conversational AI

Natural Language Processing

Related Publications

December 03, 2018


Explore-Exploit: A Framework for Interactive and Online Learning | Facebook AI Research

Interactive user interfaces need to continuously evolve based on the interactions that a user has (or does not have) with the system. This may require constant exploration of various options that the system may have for the user and obtaining…

Honglei Liu, Anuj Kumar, Wenhai Yang, Benoit Dumoulin


October 31, 2018



Extending Neural Generative Conversational Model using External Knowledge Sources

The use of connectionist approaches in conversational agents has been progressing rapidly due to the availability of large corpora. However current generative dialogue models often lack coherence and are content poor. This work proposes an…

Prasanna Parthasarathi, Joelle Pineau


November 05, 2019



Memory Grounded Conversational Reasoning

We demonstrate a conversational system which engages the user through a multi-modal, multi-turn dialog over the user’s memories. The system can perform QA over memories by responding to user queries to recall specific attributes and associated…

Shane Moon, Pararth Shah, Anuj Kumar, Rajen Subba


July 28, 2019



What makes a good conversation? How controllable attributes affect human judgments

A good conversation requires balance – between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the…

Abigail See, Stephen Roller, Douwe Kiela, Jason Weston

