
Intrinsic Motivation for Encouraging Synergistic Behavior

April 25, 2020

Abstract

We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks: tasks in which multiple agents must work together to achieve a goal that none could achieve alone. Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions that affect the world in ways that would not be achieved if the agents were acting individually. Thus, we propose to incentivize agents to take (joint) actions whose effects cannot be predicted via a composition of the predicted effect for each individual agent. We study two instantiations of this idea, one based on the true states encountered, and another based on a dynamics model trained concurrently with the policy. While the former is simpler, the latter has the benefit of being analytically differentiable with respect to the action taken. We validate our approach in robotic bimanual manipulation and multi-agent locomotion tasks with sparse rewards; we find that our approach yields more efficient learning than both 1) training with only the sparse reward and 2) using the typical surprise-based formulation of intrinsic motivation, which does not bias toward synergistic behavior. Videos are available on the project webpage: https://sites.google.com/view/iclr2020-synergistic.
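The core reward described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: `f_single`, `compose`, and the toy linear dynamics are hypothetical stand-ins for learned per-agent forward models, and summing per-agent state changes is just one simple choice of composition. The intrinsic reward is the discrepancy between the (predicted or observed) effect of the joint action and the composed individual predictions; for the true-state instantiation, the observed next state would be passed in place of `f_joint`'s prediction.

```python
import numpy as np

def f_single(state, action, weight=1.0):
    # Toy linear dynamics standing in for a learned per-agent forward model:
    # predicts the next state if this agent acted alone.
    return state + weight * action

def compose(state, pred_a, pred_b):
    # Compose the individual predictions by summing each agent's
    # predicted state change (one simple, hypothetical choice).
    return state + (pred_a - state) + (pred_b - state)

def intrinsic_reward(state, action_a, action_b, f_joint):
    # High when the joint action's effect is NOT explained by the agents
    # acting independently, i.e. when the behavior is synergistic.
    pred_a = f_single(state, action_a)
    pred_b = f_single(state, action_b)
    composed = compose(state, pred_a, pred_b)
    joint = f_joint(state, action_a, action_b)
    return float(np.linalg.norm(joint - composed))
```

Under these toy dynamics, a joint model whose prediction is exactly the sum of individual effects yields zero intrinsic reward, while any interaction term between the agents' actions produces a positive reward, which is what biases exploration toward synergistic joint actions.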


AUTHORS

Rohan Chitnis

Shubham Tulsiani

Saurabh Gupta

Abhinav Gupta

Publisher

International Conference on Learning Representations (ICLR)
