January 09, 2021
We study the problem of exploring an unknown environment when no reward function is provided to the agent. Building on the incremental exploration setting introduced by Lim and Auer (2012), we define the objective of learning the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps (in expectation) from a reference state $s_0$. In this paper, we introduce a novel model-based approach that interleaves discovering new states from $s_0$ and improving the accuracy of a model estimate that is used to compute goal-conditioned policies. The resulting algorithm, DisCo, achieves a sample complexity scaling as $\widetilde{O}(L^5 S_{L+\epsilon} \Gamma_{L+\epsilon} A \epsilon^{-2})$, where $A$ is the number of actions, $S_{L+\epsilon}$ is the number of states that are incrementally reachable from $s_0$ in $L+\epsilon$ steps, and $\Gamma_{L+\epsilon}$ is the branching factor of the dynamics over such states. This improves over the algorithm proposed in (Lim and Auer, 2012) in both $\epsilon$ and $L$ at the cost of an extra $\Gamma_{L+\epsilon}$ factor, which is small in most environments of interest. Furthermore, DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem defined on the $L$-reachable states with minimum cost $c_{\min}$. Finally, we report preliminary empirical results confirming our theoretical findings.
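To make the interleaving structure described in the abstract concrete, the following is a minimal Python sketch of a generic incremental-exploration loop: it alternates between refining an empirical transition model and expanding the set of states certified as reachable from $s_0$. This is not the DisCo algorithm itself; the environment interface (`env.reset`, `env.step` over discrete, hashable states), the fringe-based goal selection, the `plan_goal_policy` helper, and the simple success-rate certification test are all simplifying assumptions made for illustration, whereas the paper relies on confidence intervals over the estimated model and exact goal-conditioned planning.

```python
import random
from collections import defaultdict


def plan_goal_policy(counts, goal, horizon, actions):
    """Crude stand-in planner: backward BFS over each (s, a)'s most frequent
    empirical successor, falling back to a random action off the known path."""
    succ = {(s, a): max(nexts, key=nexts.get) for (s, a), nexts in counts.items()}
    toward = {}            # state -> action moving one step closer to `goal`
    frontier = {goal}
    for _ in range(horizon):
        new_frontier = set()
        for (s, a), sp in succ.items():
            if sp in frontier and s != goal and s not in toward:
                toward[s] = a
                new_frontier.add(s)
        frontier = new_frontier
    return lambda s: toward.get(s, random.choice(actions))


def incremental_exploration(env, s0, actions, L, epsilon,
                            num_rounds=200, rollouts_per_goal=20):
    """Interleave (i) refining an empirical transition model and (ii) growing the
    set K of states believed to be reachable from s0 within roughly L steps."""
    K = {s0}                                        # states treated as incrementally reachable
    counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': visit count}
    horizon = int(L + epsilon) + 1

    for _ in range(num_rounds):
        # 1. Pick a candidate goal on the fringe of K (or inside K to refine the model).
        observed = {sp for nexts in counts.values() for sp in nexts}
        fringe = observed - K
        goal = random.choice(sorted(fringe)) if fringe else random.choice(sorted(K))

        # 2. Compute a goal-conditioned policy from the empirical model.
        policy = plan_goal_policy(counts, goal, horizon, actions)

        # 3. Roll out the policy from s0, updating the transition counts.
        reached = 0
        for _ in range(rollouts_per_goal):
            s = env.reset(s0)
            for _ in range(horizon):
                a = policy(s)
                s_next = env.step(s, a)
                counts[(s, a)][s_next] += 1
                s = s_next
                if s == goal:
                    reached += 1
                    break

        # 4. Certify the goal as reachable if it is reached often enough
        #    (a crude stand-in for the paper's confidence-based test).
        if goal not in K and reached / rollouts_per_goal > 0.5:
            K.add(goal)

    return K, counts
```

The sketch only illustrates the discover-then-refine structure; the sample-complexity guarantee quoted above depends on the specific confidence intervals and planning accuracy used by DisCo, which are not reproduced here.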
Written by
Jean Tarbouriech
Alessandro Lazaric
Matteo Pirotta
Michal Valko
Publisher
NeurIPS