A Facebook-scale simulator to detect harmful behaviors

July 23, 2020

For large-scale social networks, testing a proposed code update or new feature is a complex and challenging task. Both in person and online, people act and interact with one another in ways that are sometimes difficult for traditional algorithms to model or replicate. People’s behavior evolves and adapts over time and differs from one geography to the next, which makes it difficult to anticipate all the ways an individual or an entire community might respond to even a small change in their environment.

To improve software testing for these complex environments — particularly in product areas related to safety, security, and privacy — Facebook researchers have developed Web-Enabled Simulation (WES). WES is a new method for building the first highly realistic, large-scale simulations of complex social networks. It has three important aspects:

  • It uses machine learning to train bots to realistically simulate the behaviors of real people on a social media platform.
    Bots are trained to interact with each other using the same infrastructure as real users, so they can send messages to other bots, comment on other bots’ posts, publish their own posts, or send friend requests. Bots cannot engage with real users, and their behavior cannot have any impact on real people or their experiences on the platform.

  • WES is able to automate interactions between thousands or even millions of bots. We are using a combination of online and offline simulation, training bots with anything from simple rules and supervised machine learning to more sophisticated reinforcement learning. This blend gives us a spectrum of simulation characteristics that trades off engineering concerns such as speed, scale, and realism; different use cases require different engineering trade-offs along this spectrum for maximum efficiency and effectiveness (an illustrative sketch of this spectrum follows this list).

  • WES deploys these bots on the platform’s actual production code base. The bots can interact with one another but are isolated from real users. This real-infrastructure simulation ensures that the bots’ actions are faithful to the effects that would be witnessed by real people using the platform.
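As a rough illustration of the spectrum referenced above, the sketch below shows how a simple rule-based bot and a learned bot could share one action interface so that a simulation can mix both kinds. The class names, action list, and model API here are hypothetical assumptions for the example, not the actual WES implementation.

```python
# Illustrative sketch only: rule-based and learned bot policies behind one
# shared interface. None of these names are real WES/WW APIs.
import random
from abc import ABC, abstractmethod

ACTIONS = ["send_message", "publish_post", "comment", "friend_request", "idle"]

class BotPolicy(ABC):
    @abstractmethod
    def choose_action(self, observation: dict) -> str:
        """Pick the bot's next platform action given what it can observe."""

class RuleBasedBot(BotPolicy):
    """Hand-written rules: fast and cheap to run, but less realistic."""
    def choose_action(self, observation: dict) -> str:
        if observation.get("unread_messages", 0) > 0:
            return "send_message"        # always reply to pending messages
        if observation.get("friend_count", 0) < 5:
            return "friend_request"      # grow the bot's network first
        return random.choice(["publish_post", "comment", "idle"])

class LearnedBot(BotPolicy):
    """Stand-in for a supervised or reinforcement-learned policy."""
    def __init__(self, model):
        self.model = model               # e.g. a trained action-scoring model

    def choose_action(self, observation: dict) -> str:
        scores = self.model.predict(observation)   # hypothetical model API
        return max(ACTIONS, key=lambda a: scores.get(a, 0.0))
```

A simulation run could then hold a mixed population of such policies, trading speed (rule-based bots) against realism (learned bots) as the use case demands.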

The WES approach can automatically explore complicated scenarios in a simulated environment. While the project is in a research-only stage at the moment, the hope is that one day it will help us improve our services and spot potential reliability or integrity issues before they affect real people using the platform. With WES, we are also developing the ability to answer counterfactual and what-if questions with scalability, realism, and experimental control.

An open approach to a collective challenge

Building sophisticated simulation environments poses a range of interesting scientific challenges. For example, to build bots that behave realistically and intelligently, we leveraged a combination of technologies and published research on a wide variety of topics, including search-based software engineering, machine learning, programming languages, multiagent systems, graph theory, game AI, and AI-assisted gameplay.

Furthermore, the search space of potential mechanisms and the ways in which behaviors interact with them is both enormous and complicated. Small changes in mechanism and/or behavior can interact in subtle and unexpected ways, imbuing the simulation with all the complexity of emergent behavior.

In order to speed progress, we’re seeking input from others in the research community. We’ve shared details on the WES approach and our initial prototype, WW, and launched a request for proposals (RFP) inviting academic researchers and scientists to contribute new ideas to WES and WW. These awards are made to world-leading scholars as unrestricted gifts to support their exploration of the science that underpins WES.

We received 85 submissions from 17 countries around the world. The interest, breadth, and depth of the research proposals are a testament to the way in which the WES research agenda touches on so many important and exciting research topics. We can only hope to answer a fraction of these open scientific questions ourselves. This is why we are so keen to partner with the scientific research community to help advance our collective understanding of these topics.

Building WW, a truly realistic test environment

We’ve used WES to build WW, a simulated Facebook environment using the platform’s actual production code base. With WW (the name is meant to show that this is a smaller version of the World Wide Web, or WWW), we can, for example, create realistic AI bots that seek to buy items that aren’t allowed on our platform, like guns or drugs. Because the bot is acting in the actual production version of Facebook, it can conduct searches, visit pages, send messages, and take other actions just as a real person might. Bots cannot interact with actual Facebook users, however, and their behavior cannot impact the experience of real users on the platform.
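To make the kind of scenario described above concrete, here is a hedged sketch of a scripted probing bot that searches for a prohibited item and records where, if anywhere, safeguards stop it. The `platform` object, its methods, and the scenario structure are assumptions for illustration, not WW’s real interface.

```python
# Illustrative sketch (not actual WW code) of a violation-probing scenario.
# `platform` stands in for the isolated, bots-only view of the production
# code base; `bot` is the simulated account driving the actions.
def run_prohibited_purchase_scenario(platform, bot, query="buy firearms"):
    results = platform.search(bot, query)      # search just as a real user would
    outcome = {"query": query, "results_shown": len(results), "blocked_at": None}

    if not results:
        outcome["blocked_at"] = "search"       # safeguard filtered the results
        return outcome

    seller_page = platform.visit_page(bot, results[0])
    reply = platform.send_message(bot, seller_page.owner, "Is this still available?")
    if reply is None:
        outcome["blocked_at"] = "messaging"    # safeguard stopped the contact
    return outcome
```

Running many such scenarios across a large bot population is what makes the statistical analysis described below possible.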

This graphic shows how WES uses the actual production version of a platform to test interactions between AI bots. It shows how we aim to design user-protection mechanisms in WW and simulate their effects on the WW bots before deploying them on the real platform. In this case, the central red bot simulates a user who is sending harmful content to others. In WW, we search for mechanisms to protect others from this harmful bot behavior, testing different options, configurations, and parameters in the simulation to find the best one before deploying it in production.

We can then run simulations to see whether the bot is able to thwart our safeguards and violate our Community Standards. By doing this at scale in WW, we can identify statistical patterns in the results and test ways to address problems. We can also leverage WW’s mechanism design component, which uses computational search over the space of product variants to learn product mechanisms that will tend to make it harder to behave in ways that violate our Community Standards.
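That mechanism-design step can be pictured as a search loop like the sketch below: evaluate candidate configurations of a protection mechanism in simulation and keep the one with the lowest observed violation rate. The parameter names, the grid, and the `simulate` callback are illustrative assumptions, not WW’s actual search component.

```python
# Hypothetical sketch of computational search over product-mechanism variants.
import itertools

def violation_rate(simulate, config, n_episodes=100):
    """Fraction of simulated episodes in which a bot completes a violation."""
    violations = sum(simulate(config) for _ in range(n_episodes))  # simulate() -> 0 or 1
    return violations / n_episodes

def search_best_mechanism(simulate):
    # Small grid over two illustrative mechanism parameters.
    search_space = itertools.product(
        [0.5, 0.7, 0.9],   # message-filter threshold
        [1, 3, 5],         # friend-request rate limit per hour
    )
    candidates = [{"filter_threshold": t, "request_limit": r} for t, r in search_space]
    return min(candidates, key=lambda cfg: violation_rate(simulate, cfg))
```

A grid search is used here only for brevity; in practice the space of product variants is far too large for exhaustive enumeration, which is why smarter search strategies matter.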

Improving tools to build safer, better social platforms

In any large-scale system, not every person’s behavior will be benign, and everyone’s behavior will evolve and adapt depending on how the platform responds and what other people do. Some complex interactions will produce emergent properties that are difficult or even impossible to predict. With the WES method and our WW simulator, we’ve built a better way to address this important but difficult challenge. This will help us make our platforms safer, more stable, more robust, and more performant. And by sharing our work and providing grants to academic researchers, we also hope WES will help others develop new ways to model these sorts of complex situations. Many, perhaps most, software platforms today enable communities of people to come together and interact in multifaceted, difficult-to-predict ways. Building better testing tools for these systems can therefore have a wide-ranging impact.

Written By

Mark Harman

Research Scientist