RESEARCH

COMPUTER VISION

Improving Generative Visual Dialog by Answering Diverse Questions

November 3, 2019

Abstract

Prior work on training generative Visual Dialog models with reinforcement learning (Das et al., 2017b) has explored a Q-BOT-A-BOT image-guessing game and shown that this ‘self-talk’ approach can lead to improved performance at the downstream dialog-conditioned image-guessing task. However, this improvement saturates and starts degrading after a few rounds of interaction, and does not lead to a better Visual Dialog model. We find that this is due in part to repeated interactions between Q-BOT and A-BOT during self-talk, which are not informative with respect to the image. To improve this, we devise a simple auxiliary objective that incentivizes Q-BOT to ask diverse questions, thus reducing repetitions and in turn enabling A-BOT to explore a larger state space during RL, i.e., be exposed to more visual concepts to talk about and varied questions to answer. We evaluate our approach via a host of automatic metrics and human studies, and demonstrate that it leads to better dialog, i.e., dialog that is more diverse (less repetitive), consistent (fewer conflicting exchanges), fluent (more humanlike), and detailed, while remaining as image-relevant as prior work and ablations.
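
To make the idea of a diversity incentive concrete, below is a minimal, hypothetical sketch of one way such an auxiliary bonus could be computed during self-talk RL: the reward for a round is augmented with a penalty proportional to the cosine similarity between the current question's embedding and the embeddings of earlier questions. The function name, the `weight` coefficient, and the use of a max over previous rounds are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def diversity_bonus(q_emb_t, prev_q_embs, weight=0.1):
    """Illustrative diversity bonus for Q-BOT (not the paper's exact objective).

    q_emb_t:     tensor of shape (d,), embedding of the question at round t
    prev_q_embs: tensor of shape (t-1, d), embeddings of earlier questions
    Returns a scalar that is near zero when the new question is dissimilar
    from all earlier questions and negative when it repeats one.
    """
    if prev_q_embs.numel() == 0:
        return q_emb_t.new_zeros(())  # first round: nothing to compare against
    # Cosine similarity of the new question to each earlier question.
    sims = F.cosine_similarity(q_emb_t.unsqueeze(0), prev_q_embs, dim=1)
    # Penalize the closest (most repetitive) earlier question.
    return -weight * sims.max()

# During self-talk, this bonus would simply be added to the round-level
# image-guessing reward before the policy-gradient update, e.g.:
#   reward_t = guessing_reward_t + diversity_bonus(q_emb_t, prev_q_embs)
```

Taking the maximum similarity (rather than the mean) penalizes exact repetitions most aggressively; either choice fits the stated goal of discouraging Q-BOT from asking the same question twice.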

Download the Paper

Related Publications

May 17, 2019

COMPUTER VISION

SPEECH & AUDIO

GLoMo: Unsupervised Learning of Transferable Relational Graphs

Modern deep transfer learning approaches have mainly focused on learning generic feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However,…

Zhilin Yang, Jake (Junbo) Zhao, Bhuwan Dhingra, Kaiming He, William W. Cohen, Ruslan Salakhutdinov, Yann LeCun

May 06, 2019

COMPUTER VISION

NLP

No Training Required: Exploring Random Encoders for Sentence Classification

We explore various methods for computing sentence representations from pre-trained word embeddings without any training, i.e., using nothing but random parameterizations. Our aim is to put sentence embeddings on more solid footing by 1) looking…

John Wieting, Douwe Kiela

May 06, 2019

NLP

COMPUTER VISION

Efficient Lifelong Learning with A-GEM

In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. In this work, we investigate the efficiency of current lifelong…

Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, Mohamed Elhoseiny

May 06, 2019

COMPUTER VISION

Learning Exploration Policies for Navigation

Numerous past works have tackled the problem of task-driven navigation, but how to effectively explore a new environment to enable a variety of downstream tasks has received much less attention. In this work, we study how agents can…

Tao Chen, Saurabh Gupta, Abhinav Gupta
