NLP

COMPUTER VISION

Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline

July 15, 2020

Abstract

Prior work in visual dialog has focused on training deep neural models on VisDial [1] in isolation. Instead, we present an approach to leverage pretraining on related vision-language datasets before transferring to visual dialog. We adapt the recently proposed ViLBERT model [2] for multi-turn visually-grounded conversations. Our model is pretrained on the Conceptual Captions [3] and Visual Question Answering [4] datasets, and finetuned on VisDial. Our best single model outperforms prior published work by >1% absolute on NDCG and MRR. Next, we find that additional finetuning using “dense” annotations in VisDial leads to even higher NDCG – more than 10% over our base model – but hurts MRR – more than 17% below our base model! This highlights a trade-off between the two primary metrics – NDCG and MRR – which we find is due to dense annotations not correlating well with the original ground-truth answers to questions.
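To make the trade-off discussed above concrete, here is a minimal sketch (our own illustration, not code from the paper) of how the two retrieval metrics are typically computed: MRR scores only the rank of the single ground-truth answer, while NDCG rewards placing all densely-annotated relevant answers near the top — so the two can diverge when dense annotations disagree with the original ground truth.

```python
import math

def mean_reciprocal_rank(gt_ranks):
    """gt_ranks: 1-based rank assigned to the ground-truth answer per question."""
    return sum(1.0 / r for r in gt_ranks) / len(gt_ranks)

def ndcg(relevances_in_ranked_order):
    """relevances_in_ranked_order: dense relevance score of each candidate
    answer, listed in the order the model ranked them (top-ranked first)."""
    dcg = sum(rel / math.log2(i + 2)
              for i, rel in enumerate(relevances_in_ranked_order))
    ideal = sorted(relevances_in_ranked_order, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```

For example, a ranking that puts a densely-relevant but non-ground-truth answer first can raise NDCG while lowering MRR, which is exactly the tension the dense-annotation finetuning exposes.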

AUTHORS

Devi Parikh

Abhishek Das

Dhruv Batra

Vishvak Murahari

Publisher

ECCV

Related Publications

November 16, 2022

RESEARCH

NLP

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

Kushal Tirumala, Aram H. Markosyan, Armen Aghajanyan, Luke Zettlemoyer

November 10, 2022

RESEARCH

COMPUTER VISION

Learning State-Aware Visual Representations from Audible Interactions

Unnat Jain, Abhinav Gupta, Himangi Mittal, Pedro Morgado

November 06, 2022

RESEARCH

COMPUTER VISION

Neural Basis Models for Interpretability

Filip Radenovic, Abhimanyu Dubey, Dhruv Mahajan

October 31, 2022

NLP

ML APPLICATIONS

AD-Drop: Attribution Driven Dropout for Robust Language Model Finetuning

Qifan Wang, Shaoliang Nie, Jinghao Deng, Tao Yang, Xiaojun Quan
