Machine Learning in Compilers: Past, Present, and Future

September 14, 2020

Abstract

Writing optimising compilers is difficult. The range of programs that may be presented to the compiler is huge and the systems on which they run are complex, heterogeneous, non-deterministic, and constantly changing. The space of possible optimisations is also vast, making it very hard for compiler writers to design heuristics that take all of these considerations into account. As a result, many compiler optimisations are out of date or poorly tuned.
Near the turn of the century it was first shown how compilers could be made to automatically search the optimisation space, producing programs far better optimised than previously possible, and without the need for compiler writers to worry about architecture or program specifics. The searches, though, were slow, so in the years that followed, machine learning was developed to learn heuristics from the results of previous searches so that thereafter the search could be avoided and much of the benefit could be gained in a single shot.
In this paper we will give a retrospective of machine learning in compiler optimisation from its earliest inception, through some of the works that set themselves apart, to today’s deep learning, finishing with our vision of the field’s future.

Index Terms: machine learning, compilers
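The two-phase recipe the abstract describes, searching the optimisation space offline and then learning a heuristic that predicts a good configuration in a single shot, can be illustrated with a small sketch. Everything here is an assumption made for the example rather than the method of any particular system: the gcc flag list, the toy feature extractor, and the scikit-learn decision tree are all stand-ins.

    # Phase 1: iterative compilation -- randomly search flag subsets per program.
    # Phase 2: learn program-features -> best-flags so new programs skip the search.
    import os
    import random
    import subprocess
    import tempfile
    import time

    from sklearn.tree import DecisionTreeClassifier  # any supervised learner would do

    OPT_FLAGS = ["-funroll-loops", "-ftree-vectorize", "-finline-functions"]

    def compile_and_time(src, extra_flags):
        """Compile src with gcc and return the wall-clock time of one run."""
        with tempfile.TemporaryDirectory() as tmp:
            exe = os.path.join(tmp, "a.out")
            subprocess.run(["gcc", "-O2", *extra_flags, src, "-o", exe], check=True)
            start = time.perf_counter()
            subprocess.run([exe], check=True)
            return time.perf_counter() - start

    def search_best_flags(src, budget=32):
        """Phase 1: random search over the flag space (iterative compilation)."""
        best, best_time = [], compile_and_time(src, [])
        for _ in range(budget):
            candidate = [f for f in OPT_FLAGS if random.random() < 0.5]
            t = compile_and_time(src, candidate)
            if t < best_time:
                best, best_time = candidate, t
        return best

    def features(src):
        """Toy static features; real systems use far richer program representations."""
        text = open(src).read()
        return [text.count("for"), text.count("while"), text.count("if")]

    def train_heuristic(training_srcs):
        """Phase 2: fit a model on search results gathered from a corpus of programs."""
        X = [features(s) for s in training_srcs]
        y = [" ".join(search_best_flags(s)) for s in training_srcs]
        return DecisionTreeClassifier().fit(X, y)

    # One-shot use on an unseen program, with no further search:
    #   model = train_heuristic(corpus)
    #   flags = model.predict([features(new_src)])[0].split()

Real systems swap out each piece: the random search for genetic or other directed search, the token counts for static or dynamic program features, and the decision tree for whatever model fits the data, but the structure of the pipeline is the same.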


AUTHORS

Written by

Hugh Leather

Chris Cummins

Publisher

Forum on Specification and Design Languages (FDL)

