ML Applications

Integrity

How Facebook uses super-efficient AI models to detect hate speech

November 19, 2020

Building AI that can analyze complicated text isn’t enough to protect people from harmful content. We need systems that spot a slang-filled or intentionally misspelled piece of hate speech — and do it in a fraction of a second and at billion-person scale.

This has been a challenge when deploying systems to detect hate speech because the most powerful, cutting-edge language-understanding systems today use large-scale Transformer models with hundreds of millions or billions of parameters. New models, including Facebook AI’s RoBERTa and XLM-R, have repeatedly advanced the state of the art, but these gains have come from creating ever-larger models that require massive amounts of computation.

To unlock the capabilities of these powerful AI models, Facebook AI recently developed a new Transformer architecture called Linformer, which makes it possible to use them efficiently at scale. Linformer is the first theoretically proven linear-time Transformer architecture. With standard Transformers, the amount of required processing power grows quadratically as the input length increases. With Linformer, the number of computations grows only linearly. This makes it possible to train models on larger pieces of text and thereby achieve better performance.

We are now using Linformer to analyze billions of pieces of content on Facebook and Instagram in different regions around the world.

This chart compares the level of complexity of different Transformer architectures.

Along with other AI advances, Linformer has helped us make steady progress in catching hate speech and content that incites violence. A couple of years ago, only a small fraction of the hate speech we removed from our platforms was detected before anyone reported it. As detailed in Facebook’s quarterly Community Standards Enforcement Report released today, AI proactively detected 94.7 percent of the hate speech we removed.


Earlier this year, we published our research on Linformer and released our code so other researchers and engineers could improve their models. Since our Facebook AI Research (FAIR) lab was founded in 2013, we’ve committed to an open science–based approach. Our research model revolves around publishing code and methodologies, collaborating with other researchers across industry and academia, and creating open benchmarks and challenges. We’re now sharing details here on how Linformer works and how we are using it to keep people safe on our platforms.

These are difficult problems, and our systems are still far from perfect. And even if we had perfect AI tools, there would still be difficult questions about what policies will serve people best. But progress in AI has made our platforms better and safer, and we are working hard to advance our technology further.

A simpler way to build cutting-edge AI models

Transformer models have become ubiquitous in language modeling, machine translation, speech recognition, symbolic mathematics, computer vision, and reinforcement learning. Transformers rely on a simple-yet-powerful mechanism called self-attention, which enables AI models to selectively focus on certain parts of their input and thus better understand the content.
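
As a rough sketch of what that mechanism computes, standard scaled dot-product self-attention can be written in a few lines of PyTorch. The shapes, variable names, and single-head simplification here are ours for illustration, not the production implementation:

import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # x: (n, d) sequence of n token embeddings, each of dimension d
    q = x @ w_q                                  # queries, (n, d)
    k = x @ w_k                                  # keys,    (n, d)
    v = x @ w_v                                  # values,  (n, d)
    scores = q @ k.T / (q.shape[-1] ** 0.5)      # (n, n) attention scores
    weights = F.softmax(scores, dim=-1)          # each token weighs every token in the sequence
    return weights @ v                           # (n, d) context vectors

n, d = 512, 64
x = torch.randn(n, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)           # the (n, n) scores matrix is the quadratic cost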

They are notoriously resource-intensive, however, because their self-attention mechanism requires more and more memory and computation as the length of the input sequence grows. When you increase the input size from, say, 4,000 tokens to 8,000, the number of computations doesn’t just double; it roughly quadruples, from about 16,000,000 to 64,000,000.
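
To make the scaling concrete, a few lines of arithmetic show how the two regimes diverge. The fixed projected dimension of 256 used for the low-rank case is an illustrative value, not a production setting:

# Number of attention scores a layer must compute for a sequence of length n
for n in (1_000, 2_000, 4_000, 8_000):
    quadratic = n * n        # standard self-attention: an n x n score matrix
    linear = n * 256         # low-rank (Linformer-style): n x k, with a fixed k (256 here)
    print(f"n={n:>5}: standard ~{quadratic:>10,} vs. low-rank ~{linear:>9,}")

# Doubling n from 4,000 to 8,000 quadruples the standard cost
# (16,000,000 -> 64,000,000) but only doubles the low-rank cost.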

Linformer overcomes this challenge by approximating the information in the attention matrix — without degrading the model’s performance. As shown in the chart above, Linformer models can make predictions efficiently even as the size of the input grows.
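
A minimal sketch of the idea, following the formulation in the Linformer paper: learned projection matrices E and F map the n keys and values down to a fixed size k before attention is computed. This is illustrative code, not Facebook’s production implementation:

import torch
import torch.nn.functional as F

def linformer_attention(x, w_q, w_k, w_v, e, f):
    # x: (n, d) token embeddings; e, f: (k, n) learned projections with k << n
    q = x @ w_q                                    # (n, d)
    k_proj = e @ (x @ w_k)                         # keys projected down to (k, d)
    v_proj = f @ (x @ w_v)                         # values projected down to (k, d)
    scores = q @ k_proj.T / (q.shape[-1] ** 0.5)   # (n, k) instead of (n, n)
    weights = F.softmax(scores, dim=-1)
    return weights @ v_proj                        # (n, d), computed in time linear in n

n, d, k = 4096, 64, 256
x = torch.randn(n, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
e, f = torch.randn(k, n), torch.randn(k, n)
out = linformer_attention(x, w_q, w_k, w_v, e, f)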


This graphic shows how Linformer produces lower-rank matrices.

To build Linformer, we first demonstrated that low-rank approximation is possible for self-attention. This indicated it was possible to greatly simplify the standard Transformer architecture without degrading performance.


This flowchart shows how Linformer can be used to create models to detect hate speech.

As the sequence length increases, Linformer’s efficiency gains grow.

In a typical Transformer, every token at each layer must look at (or attend to) every token from the previous layer. This results in quadratic complexity, since the algorithm must iterate across the n tokens of the previous layer for each of the n tokens in the current layer.
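
Written out naively in pure Python, just to make that nested iteration explicit (real implementations are vectorized and batched), the cost is easy to see:

import math

def naive_attention_row(queries, keys, values, i):
    # For token i, score it against every one of the n tokens in the previous layer...
    d = len(queries[i])
    n = len(keys)
    scores = [sum(queries[i][c] * keys[j][c] for c in range(d)) / math.sqrt(d)
              for j in range(n)]
    # ...turn the scores into attention weights with a softmax...
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # ...and take the weighted sum of all n value vectors.
    return [sum(weights[j] * values[j][c] for j in range(n)) for c in range(d)]

# Repeating this for all n tokens in the current layer means n * n score computations per layer.
def naive_attention(queries, keys, values):
    return [naive_attention_row(queries, keys, values, i) for i in range(len(queries))]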

This chart compares the performance of Linformer models with other Transformer architectures.

To measure this, we ran a RoBERTa Transformer model over two large datasets, WIKI103 and IMDB, and calculated the eigenvalues of the resulting self-attention matrices, a standard measurement of a matrix’s approximate rank.

We were able to demonstrate that the information from the n tokens of the previous layer can be compressed into a smaller, fixed-size set of k distinct units. With this compression, the system needs to iterate only across this smaller set of k units for each token. In other words, self-attention is what is known as low-rank and can be expressed using a smaller matrix of numbers.
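
One way to check this empirically, in the spirit of the spectrum analysis shown below, is to look at the singular values of an attention matrix: if most of the spectrum is concentrated in the first k values, the matrix is approximately rank k. A minimal sketch, with random data standing in for a real attention matrix:

import torch

# weights: an (n, n) self-attention matrix taken from a trained model;
# random data is used here only as a stand-in.
n = 512
weights = torch.softmax(torch.randn(n, n), dim=-1)

# Singular values, returned in descending order
singular_values = torch.linalg.svdvals(weights)

# Fraction of the matrix's "energy" captured by the top k singular values
k = 128
energy = ((singular_values[:k] ** 2).sum() / (singular_values ** 2).sum()).item()
print(f"Top {k} of {n} singular values capture {energy:.1%} of the spectrum")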

Spectrum analysis of the self-attention matrix of a Transformer model.

What’s next

These efficiency gains matter because we want to deal with hate speech before it has a chance to spread. Milliseconds count when determining whether a post violates our policies. Linformer is helping us do this today, but it could also lead to new AI-powered integrity systems that simply aren’t currently possible. Could we one day deploy a state-of-the-art model that learns from text, images, and speech and effectively detects not just hate speech but human trafficking, bullying, and other forms of harmful content?

There is much work to do before we reach this goal, but Linformer brings us one step closer. What’s more, by making large-scale Transformer models more efficient, Linformer also enables smaller research labs and engineering teams to train and test state-of-the-art AI even if they don’t have access to massive computing resources.

We’re committed to keeping people safe on our platforms. We believe that Linformer and other AI advances will enable us to continue making progress, and we look forward to seeing how others build on and advance our work.