RESEARCH

Improved Sample Complexity for Incremental Autonomous Exploration in MDPs

January 09, 2021

Abstract

We study the problem of exploring an unknown environment when no reward function is provided to the agent. Building on the incremental exploration setting introduced by Lim and Auer (2012), we define the objective of learning the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps (in expectation) from a reference state $s_0$. In this paper, we introduce a novel model-based approach that interleaves discovering new states from $s_0$ and improving the accuracy of a model estimate that is used to compute goal-conditioned policies. The resulting algorithm, DisCo, achieves a sample complexity scaling as $\widetilde{O}(L^5 S_{L+\epsilon} \Gamma_{L+\epsilon} A \epsilon^{-2})$, where $A$ is the number of actions, $S_{L+\epsilon}$ is the number of states that are incrementally reachable from $s_0$ in $L+\epsilon$ steps, and $\Gamma_{L+\epsilon}$ is the branching factor of the dynamics over such states. This improves over the algorithm proposed in (Lim and Auer, 2012) in both $\epsilon$ and $L$ at the cost of an extra $\Gamma_{L+\epsilon}$ factor, which is small in most environments of interest. Furthermore, DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem defined on the $L$-reachable states with minimum cost $c_{\min}$. Finally, we report preliminary empirical results confirming our theoretical findings.
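To make the objective concrete, one way to formalize it, consistent with the abstract's description and using $v^{\pi}(s_0 \to s)$ to denote the expected number of steps policy $\pi$ needs to reach $s$ starting from $s_0$, is to ask for a goal-conditioned policy $\pi_s$ for every state $s$ that is incrementally reachable within $L$ steps such that

$$ v^{\pi_s}(s_0 \to s) \;\le\; L + \epsilon. $$

The sketch below is a minimal, simplified illustration of the model-based structure described above, not the paper's exact algorithm: it alternates between (i) collecting transition samples from already-discovered states to refine an empirical model and (ii) solving a shortest-path problem toward each candidate goal to certify new states whose estimated expected distance from $s_0$ is at most $L+\epsilon$. The names incremental_exploration and ssp_value_iteration, the environment interface sample_transition, and the fixed per-round budget n_samples are illustrative assumptions; in particular, the sketch omits the confidence intervals and the restriction of policies to already-controllable states that the actual analysis relies on.

import numpy as np

def ssp_value_iteration(P, goal, horizon=200):
    # Expected steps-to-goal V and greedy policy under the empirical model
    # P[s, a, s'], treating every transition as costing one step.
    S, A, _ = P.shape
    V = np.full(S, float(horizon))
    V[goal] = 0.0
    Q = np.zeros((S, A))
    for _ in range(horizon):
        Q = 1.0 + P @ V          # (S, A): one step plus expected remaining steps
        Q[goal, :] = 0.0
        V_new = np.minimum(Q.min(axis=1), horizon)
        if np.max(np.abs(V_new - V)) < 1e-6:
            V = V_new
            break
        V = V_new
    return V, Q.argmin(axis=1)

def incremental_exploration(sample_transition, S, A, s0, L, eps, n_samples=200):
    # Interleave model refinement on already-discovered ("known") states with
    # the certification of new goal states whose estimated shortest path from
    # s0 is at most L + eps. Returns the discovered states and one policy per goal.
    counts = np.zeros((S, A, S))
    known = {s0}
    while True:
        # (i) collect fresh transition samples from every known state-action pair
        for s in known:
            for a in range(A):
                for _ in range(n_samples):
                    counts[s, a, sample_transition(s, a)] += 1
        totals = counts.sum(axis=2, keepdims=True)
        # empirical transition model; unvisited pairs default to a uniform guess
        P_hat = np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / S)
        # (ii) recompute goal-conditioned policies and certify new goals
        policies, new_states = {}, set()
        for g in range(S):
            V, pi = ssp_value_iteration(P_hat, g)
            if V[s0] <= L + eps:
                policies[g] = pi
                if g not in known:
                    new_states.add(g)
        if not new_states:       # no newly certified states: stop exploring
            return known, policies
        known |= new_states

Here sample_transition can be any simulator mapping a state-action pair to a sampled next state (e.g., a small tabular gridworld); the returned dictionary of policies plays the role of the set of goal-conditioned policies the paper aims to learn.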


AUTHORS

Jean Tarbouriech

Alessandro Lazaric

Matteo Pirotta

Michal Valko

Publisher

NeurIPS

Related Publications

December 15, 2021

RESEARCH

Sample-and-threshold differential privacy: Histograms and applications

Akash Bharadwaj, Graham Cormode

October 19, 2020

RESEARCH

SPEECH & AUDIO

Unsupervised Translation of Programming Languages

Baptiste Rozière, Marie-Anne Lachaux, Lowik Chanussot, Guillaume Lample

July 12, 2019

RESEARCH

NLP

Word-order biases in deep-agent emergent communication

Rahma Chaabouni, Alessandro Lazaric, Emmanuel Dupoux, Evgeny Kharitonov, Marco Baroni