
Harmful content can evolve quickly. Our new AI system adapts to tackle it.

12/8/21


Harmful content can evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it. But AI needs to learn what to look for, and it typically takes several months to collect and label the thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content.

To address this bottleneck, we’ve built and recently deployed a new AI technology called Few-Shot Learner (FSL) that can adapt to take action on new or evolving types of harmful content within weeks instead of months. It not only works in more than 100 languages, but it also learns from different kinds of data, such as images and text, and it can strengthen existing AI models that are already deployed to detect other types of harmful content.

This new AI system uses a relatively new method called “few-shot learning,” in which models start with a large, general understanding of many different topics and then use far fewer, and in some cases zero, labeled examples to learn new tasks. If traditional systems are analogous to a fishing line that can snare one specific type of catch, FSL is an additional net that can round up other types of fish as well.

Recent scientific breakthroughs, like our self-supervised learning techniques and our new super-efficient infrastructure, have made it possible for the field to start shifting away from traditional, bespoke AI systems toward larger, more consolidated, and generalized systems with less reliance on labeled data. FSL is first trained on billions of generic and open-source language examples. Then, the AI system is trained with policy-violating and borderline content we’ve labeled over the years. Finally, it’s trained on condensed text explaining a new policy. Unlike previous systems that relied on pattern-matching with labeled data, FSL is pretrained on general language, as well as the language of policy-violating and borderline content, so it can learn the policy text implicitly.
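As a rough illustration of this three-stage recipe, here is a toy sketch in PyTorch. The model, datasets, and objectives are invented stand-ins, not Meta's code: the production system uses a large multilingual transformer with self-supervised pretraining, not a tiny classifier with random data at every stage.

```python
# Toy sketch of the three-stage training recipe described above.
# Everything here (model, data, loss) is a hypothetical stand-in.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, vocab_size=1000, dim=32, num_classes=2):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token embeddings
        self.head = nn.Linear(dim, num_classes)        # violating vs. benign

    def forward(self, token_ids):
        return self.head(self.embed(token_ids))

model = TinyEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_stage(batches, name):
    for tokens, labels in batches:
        opt.zero_grad()
        loss_fn(model(tokens), labels).backward()
        opt.step()
    print("finished:", name)

rand_batch = lambda n: (torch.randint(0, 1000, (n, 8)), torch.randint(0, 2, (n,)))

# Stage 1: pretraining on billions of generic, open-source language examples
# (self-supervised in reality; collapsed to a dummy objective here).
train_stage([rand_batch(8)], "generic pretraining")

# Stage 2: training on policy-violating and borderline content
# labeled over the years.
train_stage([rand_batch(8)], "policy-labeled data")

# Stage 3: adapting on condensed text that explains a new policy.
train_stage([rand_batch(2)], "condensed policy text")
```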

We’ve tested FSL on a few relatively new events. For example, one recent task was to identify content that shares misleading or sensationalized information in a way that likely discourages COVID-19 vaccinations (for example, “Vaccine or DNA changer?”). In another, separate task, the new AI system improved an existing classifier that flags content that comes close to inciting violence (for example, “Does that guy need all of his teeth?”). The traditional approach may have missed these types of inflammatory posts, since there aren’t many labeled examples that use DNA language to create vaccine hesitancy or reference teeth to imply violence.

We carried out standardized offline and online A/B testing protocols to measure the performance of the model. In these tests, we looked at the delta between the prevalence of harmful content — i.e., the percentage of views of violating content people see — before and after FSL was rolled out on Facebook and Instagram. Few-Shot Learner was able to correctly detect posts that traditional systems may miss and helped reduce the prevalence of these types of harmful content. It does this by proactively detecting potentially harmful content and preventing it from spreading on our platforms. We’ve also seen that, in combination with existing classifiers, FSL has helped reduce the prevalence of other harmful content like hate speech.
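For concreteness, here is a toy illustration of the prevalence metric itself: the share of content views that land on violating posts, measured before and after a rollout. The numbers and data layout are invented; this is not our production measurement code.

```python
# Toy prevalence computation: percentage of views that hit violating content.

def prevalence(view_log):
    """view_log: iterable of (post_id, is_violating) view events."""
    flags = [is_violating for _, is_violating in view_log]
    return 100.0 * sum(flags) / len(flags) if flags else 0.0

# Invented view logs from before and after a hypothetical rollout.
before_rollout = [("p1", True), ("p2", False), ("p3", False), ("p4", True)]
after_rollout = [("p1", False), ("p2", False), ("p3", False), ("p4", True)]

delta = prevalence(after_rollout) - prevalence(before_rollout)
print(f"before: {prevalence(before_rollout):.1f}%  "
      f"after: {prevalence(after_rollout):.1f}%  delta: {delta:+.1f} pp")
```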

We’re working on additional tests to improve classifiers that could benefit from more labeled training data, such as those for languages that lack large volumes of labeled examples, and we’ll continue to test FSL on newly emerging patterns of violating content. Of course, these are early days for intelligent, generalized AI models. There’s a long road ahead before AI can comprehend dozens of pages of policy text and immediately know exactly how to enforce it. We’re continuously advancing AI techniques and deploying them as fast as possible to better serve our community, and we believe FSL is a promising step forward.

Under the hood: Few-Shot Learner

Few-Shot Learner is a large-scale, multimodal, multilingual, zero- or few-shot model that enables joint policy and content understanding, generalizes across integrity problems, and doesn’t require model fine-tuning. We are actively conducting research to train models that leverage simple policy sentences instead of hundreds or thousands of labeled examples.

Our new system works across three different scenarios, each of which requires a different number of labeled examples (see the sketch after this list):

  • Zero-shot: Policy descriptions with no examples.

  • Few-shot with demonstration: Policy descriptions with a small set of examples (fewer than 50).

  • Low-shot with fine-tuning: ML developers can fine-tune on the FSL base model with a low number of training examples.
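To make the three scenarios concrete, the sketch below shows how they differ in what is handed to the model. The `FSLRequest` interface and its fields are hypothetical, invented purely for illustration; they are not Meta's actual API.

```python
# Hypothetical interface illustrating the three scenarios above.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FSLRequest:
    policy_description: str  # present in all three scenarios
    demonstrations: List[Tuple[str, str]] = field(default_factory=list)
    fine_tune: bool = False  # low-shot only: update the base model's weights

POLICY = "Misleading or sensationalized claims that discourage vaccination."

# Zero-shot: the policy description alone, no examples.
zero_shot = FSLRequest(policy_description=POLICY)

# Few-shot with demonstration: the policy description plus a small set
# (fewer than 50) of labeled examples passed along with the input.
few_shot = FSLRequest(
    policy_description=POLICY,
    demonstrations=[("Vaccine or DNA changer?", "violating"),
                    ("Booked my booster appointment today.", "benign")],
)

# Low-shot with fine-tuning: the same small labeled set, but used to
# update the FSL base model's weights rather than passed in-context.
low_shot = FSLRequest(
    policy_description=POLICY,
    demonstrations=few_shot.demonstrations,
    fine_tune=True,
)
```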

The overall input to FSL consists of three parts. First, building on our previous work with Whole Post Integrity Embeddings (WPIE), it learns multimodal information from the whole post, including text, image, URL, etc. Second, it analyzes policy-related information, such as the definition of the policy, or labeled examples that indicate whether a particular post does or doesn’t violate that policy definition. Third, it takes additional labeled examples as demonstrations, if available.
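The sketch below illustrates how these three parts could be encoded and fused into a single model input. The encoders, dimensions, and function names are invented stand-ins under those assumptions, not the actual WPIE or FSL architecture.

```python
# Hypothetical fusion of FSL's three input parts into one vector.
import torch
import torch.nn as nn

TEXT_DIM, IMAGE_DIM, FUSED_DIM = 16, 16, 32

text_encoder = nn.Linear(TEXT_DIM, FUSED_DIM)    # stand-in for a text tower
image_encoder = nn.Linear(IMAGE_DIM, FUSED_DIM)  # stand-in for an image tower

def encode_fsl_input(post_text, post_image, policy_text, demo_text=None):
    # Part 1: whole-post multimodal signal (text, image, URL, ...),
    # in the spirit of Whole Post Integrity Embeddings (WPIE).
    post = text_encoder(post_text) + image_encoder(post_image)
    # Part 2: policy-related information (the policy definition, or
    # labeled examples tied to that definition).
    policy = text_encoder(policy_text)
    # Part 3: optional labeled demonstrations; zeros when unavailable.
    demos = text_encoder(demo_text) if demo_text is not None else torch.zeros_like(policy)
    return torch.cat([post, policy, demos], dim=-1)

x = encode_fsl_input(torch.randn(1, TEXT_DIM), torch.randn(1, IMAGE_DIM),
                     torch.randn(1, TEXT_DIM))
print(x.shape)  # torch.Size([1, 96])
```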

As part of our novel approach, called Entailment Few-Shot Learning, the key idea is to convert the class label into a natural language sentence that describes the label, and then determine whether the example entails that label description. For example, we can reformulate an apparent sentiment classification input and label pair (see the sketch after this example):

  • [x: "I love your ethnic group. JK. You should all be six feet underground." y: positive] becomes the following textual entailment sample:

  • [x: "I love your ethnic group. JK. You should all be six feet underground. This is hate speech." y: entailment]
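The same reformulation can be sketched with an off-the-shelf natural language inference model. Here we use the public facebook/bart-large-mnli checkpoint via the Hugging Face zero-shot classification pipeline as a stand-in for FSL's own pretrained entailment model; the label names are ours, for illustration only.

```python
# Entailment-style classification with an off-the-shelf MNLI model,
# standing in for FSL's pretrained entailment model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "I love your ethnic group. JK. You should all be six feet underground."

# The pipeline turns each candidate label into the hypothesis
# "This is {label}." and scores whether the post entails it.
result = classifier(
    post,
    candidate_labels=["hate speech", "benign"],
    hypothesis_template="This is {}.",
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```

Because the label is expressed as a hypothesis sentence rather than a fixed class index, covering a new policy is a matter of writing a new hypothesis, which is what makes the zero-shot scenario possible.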

We compared our proposed method with several existing state-of-the-art few-shot learning methods. Through a series of systematic evaluations, we show that it outperforms them by up to 55 percent (and by 12 percent on average). Read the full details in our research paper.

Bridging the gap between policy creation and automatic, ML-driven enforcement

We believe that FSL can, over time, enhance the performance of all of our integrity AI systems by letting them leverage a single, shared knowledge base and backbone to deal with many different types of violations. But it could also help policy, labeling, and investigation workflows to bridge the gap between human insight and classifier advancement.

FSL can be used to test out a new set of likely policy violations and gauge the sensibility and validity of the proposed definitions. It casts a wider net, surfacing more types of “almost” violating content that policy teams should know about when formulating at-scale guidance for the annotators who train new classifiers and for the human reviewers helping to keep our platforms safe. And because it scales quickly, it could shorten the time from policy framing to enforcement by orders of magnitude.

Toward humanlike AI that can learn more effectively

The ability to quickly begin enforcing against content types that don’t have lots of labeled training data is a major step forward and will help make our systems more agile and responsive to emerging challenges. Few-shot and zero-shot learning are among the many cutting-edge AI domains where we’ve been making major research investments, and we see no sign of the research-to-production pipeline slowing down. We’re actively working on key open research problems that go beyond understanding content in isolation and try to infer the cultural, behavioral, and conversational context around it.

There’s a lot more work to be done, but these early production results are an important milestone that signals a shift toward more intelligent, generalized AI systems that can quickly learn multiple tasks at once.

Our long-term vision is to achieve humanlike learning flexibility and efficiency, making our integrity systems even faster and easier to train and better able to work with new information. A teachable AI system like Few-Shot Learner can substantially improve our agility in detecting and adapting to emerging situations. By identifying evolving harmful content much faster and more accurately, FSL promises to be a critical piece of technology that will help us continue to evolve and address harmful content on our platforms.