The progress of AI research will be closely tied to innovations in hardware.
In his keynote address at the 2019 International Solid-State Circuits Conference and his accompanying paper, “Deep Learning Hardware: Past, Present, and Future,” Facebook's Chief AI Scientist, Yann LeCun, describes how advances in deep learning (DL) research will influence the hardware architecture of the future.
LeCun expects demand for DL-specific hardware to keep growing. New architectural concepts such as dynamic networks, associative-memory structures, and sparse activations will shape the kind of hardware required to run them.
“This might require us to reinvent the way we do arithmetic in circuits,” LeCun says. Most of today's computer chips are not optimized for deep learning, which can remain effective even with lower-precision arithmetic. “So, people are trying to design new ways of representing numbers that will be more efficient.”
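To make the low-precision idea concrete, here is a minimal illustrative sketch (not taken from LeCun's paper): symmetric linear quantization, which maps 32-bit floats to 8-bit integer codes with a single scale factor, one common way DL hardware trades precision for efficiency. The function names and example values are assumptions for illustration.

```python
# Illustrative sketch: deep learning often tolerates low-precision
# arithmetic. We quantize floats to 8-bit integer codes with one
# shared scale factor, then reconstruct approximate floats from them.

def quantize_int8(values):
    """Map floats to int8 codes in [-127, 127] via a shared scale."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct approximate floats from the integer codes."""
    return [c * scale for c in codes]

# Hypothetical weight values for demonstration.
weights = [0.25, -1.0, 0.5, 0.099]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
# Each reconstructed value differs from the original by at most scale/2,
# while each weight is now stored in 8 bits instead of 32.
```

Storing weights this way cuts memory traffic fourfold, which is one reason hardware designers revisit how numbers are represented in circuits.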
Watch the video below to see LeCun discuss the hardware challenges the industry must address to build dramatically more capable and efficient AI systems.