Turing Award presented to Yann LeCun, Geoffrey Hinton, and Yoshua Bengio

March 27, 2019

Facebook Chief AI Scientist Yann LeCun is one of three recipients of the Association for Computing Machinery’s 2018 A.M. Turing Award, along with Yoshua Bengio (MILA and University of Montreal) and Geoffrey Hinton (Google, Vector Institute, and University of Toronto). The award, which was announced today, recognizes their conceptual and engineering breakthroughs that made deep neural networks a critical component of computing. Their pioneering work has led to major advances in speech recognition, computer vision, natural language understanding, robotics, translation systems, medical image diagnostic tools, and computational protein folding strategies.

Upon receiving the award, LeCun shared his reflections on meeting Bengio and Hinton more than three decades ago, when they were first developing the machine learning approaches that led to today's deep learning revolution.

“All three of us got into this field not just because we want to build intelligent machines, but also because we just wanted to understand intelligence — and that includes human intelligence,” LeCun says. “We’re looking for underlying principles to intelligence and learning, and through the construction of intelligent machines, to understand ourselves.”

In addition to his work at Facebook AI Research (FAIR), LeCun is also a Silver Professor at New York University, where he founded and directed the Center for Data Science. He is co-director with Yoshua Bengio of the Learning in Machines and Brains Program of the Canadian Institute for Advanced Research (CIFAR). He is a member of the National Academy of Engineering and received the 2014 IEEE Neural Network Pioneer Award, among other distinctions.

The early days of deep learning

LeCun recalls meeting one of Hinton’s collaborators, Terry Sejnowski, at a workshop in France in 1985. “I told Terry I was working on this sort of back-propagation algorithm,” LeCun explains. “He didn’t tell me what he was working on at the time, but he went back to the U.S. and he told Geoff, ‘There’s this kid in France who’s working on the same things we’re doing.’” A few months later, Hinton attended a conference in France where LeCun presented the research he had previously shared with Sejnowski. After reading LeCun’s paper in the proceedings, Hinton sought out LeCun and soon discovered their mutual research interests.

A couple of years later, while LeCun was a postdoc in Hinton’s lab at the University of Toronto, he was invited to give a talk at McGill University in Montreal by professor Renato De Mori. “In the audience, there was this really smart master’s student who was asking extremely smart questions that showed that he had really thought about the problem of neural nets,” LeCun says. The student was Yoshua Bengio. “I kept an eye on him because it was obvious that he was very smart and on a trajectory to do something really interesting. I ended up hiring him at Bell Labs after he completed his PhD at McGill and his postdoc at MIT.”

LeCun joined Bell Laboratories in Holmdel, New Jersey, where he developed the convolutional neural network model (ConvNet). With Leon Bottou (now at FAIR), Yoshua Bengio, and Patrick Haffner, he developed character recognition systems based on ConvNets that were widely deployed to read checks and other documents automatically. Today, ConvNets have become the dominant method for recognizing images, videos, speech, and other signals.
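A ConvNet slides small trainable filters across an image and pools the results, so the same feature can be detected wherever it appears. As a rough illustration of the kind of model involved, here is a minimal LeNet-style sketch in PyTorch (a framework mentioned later in this article); the layer sizes and framework choice are illustrative assumptions, not the original Bell Labs system.

```python
import torch
import torch.nn as nn

# A minimal LeNet-style ConvNet for 28x28 grayscale character images.
# Layer sizes are illustrative, not the original Bell Labs architecture.
class SmallConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 1x28x28 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 16x5x5
        )
        self.classifier = nn.Linear(16 * 5 * 5, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a dummy batch of four images.
logits = SmallConvNet()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```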

LeCun, Hinton, and Bengio stayed in close contact in the years that followed, maintaining a shared interest in neural nets. Despite considerable skepticism surrounding artificial neural networks in the mid-1990s and early 2000s, the three remained dedicated to these techniques. With support from CIFAR, they were able to continue their research and build a community in which students and collaborators could exchange ideas. This led to the CIFAR Learning in Machines and Brains program, which Hinton initially directed and which LeCun and Bengio now co-direct.

“This is really where the deep learning revolution started,” LeCun says. The group’s ideas started to gain traction within the machine learning community around 2007.

According to LeCun, the key event of the deep learning revolution occurred in late 2012. “Geoff and his students Alex Krizhevsky and Ilya Sutskever produced a very efficient implementation of ConvNets on GPUs. With this, they were able to train a large-scale ConvNet on the ImageNet dataset, which yielded much better accuracy than existing methods. The computer vision community, initially skeptical of ConvNets, was sold,” LeCun says.

In the natural language processing community, Bengio's pioneering work has had a lasting impact, notably on researchers Jason Weston and Ronan Collobert, who were working with Leon Bottou at NEC Labs at the time. All three have since joined LeCun at FAIR.

“Back when Yoshua was still collaborating with us at Bell Labs in the mid to late ’90s, he started working on using neural nets to model language. He pioneered those models,” LeCun says. Collobert and Weston were influenced by Bengio’s work and went on to publish “A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning” in 2008. The paper was initially met with skepticism by the NLP community, but it became very influential and eventually won the Test of Time award at ICML 2018.

Building the building blocks of machine learning

In reflecting on his proudest accomplishments, LeCun mentions his back-propagation algorithm and the work he did on it with FAIR’s Leon Bottou. Their collaboration produced a building-block principle on which every deep learning software platform, including PyTorch and TensorFlow, is based.

“One of the things that Leon and I worked on is — and he’s really the one that deserves credit for this — the idea that you can design a learning machine by assembling trainable blocks, if you will, parameterized functional blocks, and connecting them with each other in a computational graph,” LeCun says.

“You can build a learning machine by assembling those graphs of predefined blocks, a bit like Lego bricks, and then automatically the software will figure out how to adjust the parameters of all the blocks using the back-propagation algorithm, so that the output it produces is the one you want.”
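Modern frameworks implement exactly this pattern: parameterized blocks composed into a computational graph, with back-propagation computing the gradient of a loss with respect to every block's parameters. Below is a minimal PyTorch sketch of the idea; the two-layer model, toy data, and training settings are illustrative assumptions, not a specific system from the article.

```python
import torch
import torch.nn as nn

# Assemble a learning machine from parameterized blocks ("Lego bricks");
# composing them defines a computational graph automatically.
model = nn.Sequential(
    nn.Linear(3, 8),   # trainable block 1
    nn.Tanh(),         # fixed nonlinearity
    nn.Linear(8, 1),   # trainable block 2
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Toy data: illustrative only.
x = torch.randn(32, 3)
y = torch.randn(32, 1)

for step in range(100):
    loss = loss_fn(model(x), y)   # forward pass through the graph
    optimizer.zero_grad()
    loss.backward()               # back-propagation: gradients for every block's parameters
    optimizer.step()              # adjust parameters so the output moves toward the target
```

The key design choice the quote describes is that the author of the model only specifies the blocks and their connections; the gradient computation is derived mechanically from the graph rather than written by hand.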

These advances can be traced directly to the intuitions LeCun, Bengio, and Hinton had early in their careers.

“I had a feeling that the proper approach to AI was to get a machine to learn — that learning was essentially inseparable from intelligence,” LeCun says.

For more information about the A.M. Turing Award and its recipients, visit the Association for Computing Machinery’s website.