Finding Generalizable Evidence by Learning to Convince Q&A Models

November 04, 2019

Abstract

We propose a system that finds the strongest supporting evidence for a given answer to a question, using passage-based question-answering (QA) as a testbed. We train evidence agents to select the passage sentences that most convince a pretrained QA model of a given answer, if the QA model received those sentences instead of the full passage. Rather than finding evidence that convinces one model alone, we find that agents select evidence that generalizes; agent-chosen evidence increases the plausibility of the supported answer, as judged by other QA models and humans. Given its general nature, this approach improves QA in a robust manner: using agent-selected evidence (i) humans can correctly answer questions with only ~20% of the full passage and (ii) QA models can generalize to longer passages and harder questions.
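To illustrate the evidence-agent idea described in the abstract, the sketch below shows one simple way such an agent could work: greedily selecting the passage sentences that most raise a fixed QA model's confidence in a given answer. This is a minimal illustration, not the paper's released implementation; the answer_prob callable and the fixed sentence budget are assumptions introduced here for clarity, and the paper also explores learned (rather than purely search-based) agents.

# Hedged sketch of a greedy evidence-selection agent. `answer_prob` is a
# hypothetical stand-in for any pretrained QA model that returns
# P(answer | question, evidence) for a candidate answer.
from typing import Callable, List

def select_evidence(
    question: str,
    sentences: List[str],      # the passage, pre-split into sentences
    answer: str,               # the answer the agent argues for
    answer_prob: Callable[[str, str, str], float],
    budget: int = 3,           # max number of sentences to select (assumption)
) -> List[str]:
    """Greedily add the sentence that most increases the QA model's
    confidence in `answer`, until the budget is reached."""
    selected: List[int] = []
    for _ in range(budget):
        best_idx, best_score = None, float("-inf")
        for i, _ in enumerate(sentences):
            if i in selected:
                continue
            # Score the answer given the currently selected evidence plus sentence i.
            evidence = " ".join(sentences[j] for j in sorted(selected + [i]))
            score = answer_prob(question, evidence, answer)
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return [sentences[i] for i in sorted(selected)]

Because the agent only queries the QA model as a black box, the same procedure can be pointed at different QA models, which is what makes it possible to test whether the selected evidence also convinces models (and humans) other than the one it was chosen for.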


AUTHORS


Douwe Kiela

Jason Weston

Kyunghyun Cho

Rob Fergus

Siddharth Karamcheti

Ethan Perez

Publisher

EMNLP
