# On the Modularity of Hypernetworks

November 30, 2020

## Abstract

In the context of learning to map an input $I$ to a function $h_I:\mathcal{X}\to \mathbb{R}$, two alternative methods are compared: (i) an embedding-based method, which learns a fixed function in which $I$ is encoded as a conditioning signal $e(I)$ and the learned function takes the form $h_I(x) = q(x,e(I))$, and (ii) hypernetworks, in which the weights $\theta_I$ of the function $h_I(x) = g(x;\theta_I)$ are given by a hypernetwork $f$ as $\theta_I=f(I)$.
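The two parameterizations above can be contrasted in a minimal NumPy sketch. This is an illustrative toy, not the paper's construction: the layer sizes and helper names (`mlp_forward`, `init_mlp`, `e_net`, `f_net`) are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights):
    """Apply a ReLU MLP given a list of (W, b) layer parameters."""
    h = x
    for i, (W, b) in enumerate(weights):
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)
    return h

def init_mlp(sizes, rng):
    """Randomly initialize a fully connected net with the given layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

d_x, d_e = 4, 8

# (i) Embedding-based method: a fixed network q(x, e(I)),
# where the input I enters only through the conditioning signal e(I).
e_net = init_mlp([16, 32, d_e], rng)       # encoder e (may be large)
q_net = init_mlp([d_x + d_e, 32, 1], rng)  # shared function q

def embedding_method(x, I):
    eI = mlp_forward(I, e_net)
    return mlp_forward(np.concatenate([x, eI]), q_net)

# (ii) Hypernetwork: f maps I to ALL the weights theta_I of a small g.
g_sizes = [d_x, 8, 1]
shapes = [(m, n) for m, n in zip(g_sizes[:-1], g_sizes[1:])]
n_theta = sum(m * n + n for m, n in shapes)
f_net = init_mlp([16, 64, n_theta], rng)   # hypernetwork f (may be large)

def hypernetwork_method(x, I):
    theta = mlp_forward(I, f_net)
    weights, k = [], 0
    for m, n in shapes:                    # unpack theta_I into g's layers
        W = theta[k:k + m * n].reshape(m, n); k += m * n
        b = theta[k:k + n]; k += n
        weights.append((W, b))
    return mlp_forward(x, weights)         # g(x; theta_I)

x = rng.standard_normal(d_x)
I = rng.standard_normal(16)
y_embed = embedding_method(x, I)
y_hyper = hypernetwork_method(x, I)
```

Note that in (i) every instance $I$ is processed by the same fixed weights of `q_net`, whereas in (ii) each $I$ induces a distinct weight vector $\theta_I$, so `g` realizes a genuinely different function per instance.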
In this paper, we define modularity as the ability to effectively learn a different function for each input instance $I$. For this purpose, we adopt an expressivity perspective on this property and extend the theory of DeVore et al., providing a lower bound on the complexity (number of trainable parameters) of neural networks as function approximators, obtained by removing the requirement that the approximation method be robust. Our results are then used to compare the complexities of $q$ and $g$: under certain conditions, and when the functions $e$ and $f$ are allowed to be as large as we wish, $g$ can be smaller than $q$ by orders of magnitude. This sheds light on the modularity of hypernetworks in comparison with the embedding-based method. In addition, we show that for a structured target function, the overall number of trainable parameters in a hypernetwork is smaller by orders of magnitude than that of a standard neural network or an embedding-based method.
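The "orders of magnitude" gap in the complexity of $g$ versus $q$ can be made concrete with a simple parameter count. The layer widths below are purely illustrative choices of ours, not the sizes analyzed in the paper; the point is only that $q$ must jointly process $x$ and the embedding $e(I)$, while $g$ processes $x$ alone (the sizes of $e$ and $f$ are not counted, since both are allowed to be as large as we wish).

```python
def mlp_param_count(sizes):
    """Trainable parameters (weights + biases) of a fully connected net."""
    return sum(m * n + n for m, n in zip(sizes[:-1], sizes[1:]))

d_x, d_e = 4, 64

# q takes the concatenation (x, e(I)) and must be wide enough to
# disentangle the conditioning signal for every instance.
q_params = mlp_param_count([d_x + d_e, 256, 256, 1])

# g takes only x; its per-instance weights theta_I are produced by f.
g_params = mlp_param_count([d_x, 16, 1])

ratio = q_params / g_params  # q is larger by orders of magnitude
```

Under these (hypothetical) sizes, `q_params` is 83,713 while `g_params` is 97, a ratio of roughly 860.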

Written by

Tomer Galanti

Lior Wolf

### Related Publications

May 06, 2019

#### Hierarchical RL Using an Ensemble of Proprioceptive Periodic Policies | Facebook AI Research

In this work we introduce a simple, robust approach to hierarchically training an agent in the setting of sparse reward tasks. The agent is split into a low-level and a high-level policy. The low-level policy only accesses internal,…

Kenneth Marino, Abhinav Gupta, Rob Fergus, Arthur Szlam


April 24, 2017

#### Episodic Exploration for Deep Deterministic Policies for StarCraft Micro-Management | Facebook AI Research

We consider scenarios from the real-time strategy game StarCraft as benchmarks for reinforcement learning algorithms. We focus on micromanagement, that is, the short-term, low-level control of team members during a battle. We propose several…

Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, Soumith Chintala


December 03, 2018

#### Forward Modeling for Partial Observation Strategy Games | Facebook AI Research

We formulate the problem of defogging as state estimation and future state prediction from previous, partial observations in the context of real-time strategy games. We propose to employ encoder-decoder neural networks for this task, and…

Gabriel Synnaeve, Zeming Lin, Jonas Gehring, Dan Gant, Vegard Mella, Vasil Khalidov, Nicolas Carion, Nicolas Usunier


July 09, 2018

#### Continuous Reasoning: Scaling the Impact of Formal Methods | Facebook AI Research

This paper describes work in continuous reasoning, where formal reasoning about a (changing) codebase is done in a fashion which mirrors the iterative, continuous model of software development that is increasingly practiced in industry. We…

Peter O'Hearn

