
Machine Learning in Compilers: Past, Present, and Future

September 14, 2020

Abstract

Writing optimising compilers is difficult. The range of programs that may be presented to the compiler is huge and the systems on which they run are complex, heterogeneous, non-deterministic, and constantly changing. The space of possible optimisations is also vast, making it very hard for compiler writers to design heuristics that take all of these considerations into account. As a result, many compiler optimisations are out of date or poorly tuned.
Near the turn of the century it was first shown how compilers could be made to automatically search the optimisation space, producing programs far better optimised than previously possible, and without the need for compiler writers to worry about architecture or program specifics. These searches, however, were slow, so in the years that followed, machine learning techniques were developed to learn heuristics from the results of previous searches, so that thereafter the search could be avoided and much of the benefit gained in a single shot.
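The search described above is often called iterative compilation or autotuning: try many flag configurations, measure each resulting binary, and keep the best. As a minimal sketch, the following pure-Python example enumerates a tiny hypothetical space of four on/off optimisation flags against a synthetic cost model (the flag names, the `runtime` function, and its numbers are illustrative stand-ins, not any real compiler's behaviour; a real autotuner would invoke the compiler and time the program instead):

```python
import itertools

# Hypothetical on/off optimisation flags (illustrative only).
FLAGS = ["unroll", "inline", "vectorize", "fuse"]

def runtime(cfg):
    """Synthetic stand-in for compiling with `cfg` and timing the binary.

    The numbers below model the kind of non-obvious interactions that
    make hand-written heuristics hard: individually helpful flags can
    conflict when combined.
    """
    t = 10.0
    if cfg["inline"]:
        t -= 2.0
    if cfg["vectorize"]:
        t -= 3.0
    if cfg["unroll"] and cfg["vectorize"]:
        t += 1.5  # unrolling interferes with vectorisation here
    if cfg["fuse"]:
        t -= 0.5
    return t

def search():
    """Exhaustively evaluate every flag combination; return the best."""
    best_cfg, best_time = None, float("inf")
    for bits in itertools.product([False, True], repeat=len(FLAGS)):
        cfg = dict(zip(FLAGS, bits))
        t = runtime(cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

best_cfg, best_time = search()
print(best_cfg, best_time)
```

With four flags the space is only 2^4 = 16 configurations, so exhaustive search is trivial; real optimisation spaces are vastly larger, which is why the slow search was replaced by a learned heuristic that predicts a good configuration from program features in a single shot.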
In this paper we will give a retrospective of machine learning in compiler optimisation from its earliest inception, through some of the works that set themselves apart, to today's deep learning, finishing with our vision of the field's future.

Index Terms—machine learning, compilers.


AUTHORS

Written by

Hugh Leather

Chris Cummins

Publisher

Forum on Specification and Design Languages (FDL)

