What AI fairness in practice looks like at Facebook

March 11, 2021

Many technical approaches have been proposed for ensuring that decisions made by AI systems are fair, but few of these methods have been applied in real-world settings. That’s because fairness in AI is not simply about checking a box, solving a mathematical equation, or reaching a single, specific goal. There is no one-size-fits-all definition of fairness — in fact, experts across academic disciplines have identified dozens of definitions of fairness, many of which are mathematically incompatible with each other. Addressing this problem involves thinking deeply about ways to continually make AI-powered products and experiences that are inclusive and treat people fairly.

At Facebook, we see fairness as a process, not simply a property of a system. As part of Facebook’s long-term focus on responsible AI, we’ve published a new research paper about a holistic approach to addressing fairness issues in a real-world setting with large-scale, complex production systems. The approach considers fairness across multiple related dimensions: at the product level, the policy level, and the implementation level.

Fairness at the product level involves surfacing questions like, “Are the goals of this product consistent with providing people with fair value and treating them fairly?” Meanwhile, fairness at the policy level involves asking questions like, “Does a policy prohibiting certain types of harmful behavior within the product adequately address the unique harms experienced by some subpopulations?” At the implementation level, fairness questions relate to more technical details about a system’s performance. For instance, “Are people tasked with labeling data sets executing the labeling instructions correctly?” and “Are predictive models working equally well for all subpopulations, or are they generating errors that affect some groups more than others?”

In other words, a holistic approach to fairness considers not only whether algorithmic rules are being applied appropriately to all, but also whether the rules themselves — or the structure in which they are situated — are fair, just, and reasonable.

In the paper, we describe an approach to measuring fairness at the implementation level, which considers the distribution of errors a system produces and the potential impact those errors could have on different communities. We describe a method to measure model fairness in binary classifiers (the type of AI model most commonly studied in the field of fairness in machine learning), assessing whether a model’s predictions are resulting in errors that affect one group more than others (e.g., people of different ages or genders, or people who speak different languages). We also share an approach to label fairness, introducing a method derived from signal detection theory that can be used to determine whether models may be inadvertently embedding the human biases of their trainers.
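To make the model-fairness idea concrete, here is a minimal, hypothetical sketch (in Python, and not the exact methodology from the paper) of comparing false positive and false negative rates across groups for a binary classifier. The group labels and data below are invented for illustration.

```python
# Illustrative sketch only: one common way to check whether a binary
# classifier's errors fall disproportionately on some groups is to
# compare false positive and false negative rates per group.
from collections import defaultdict

def group_error_rates(y_true, y_pred, groups):
    """Compute false positive rate and false negative rate per group.

    y_true, y_pred: sequences of 0/1 labels and predictions.
    groups: sequence of group identifiers (e.g., a language or age bucket).
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for truth, pred, group in zip(y_true, y_pred, groups):
        stats = counts[group]
        if truth == 1:
            stats["pos"] += 1
            if pred == 0:
                stats["fn"] += 1  # missed positive
        else:
            stats["neg"] += 1
            if pred == 1:
                stats["fp"] += 1  # false alarm

    rates = {}
    for group, stats in counts.items():
        rates[group] = {
            "fpr": stats["fp"] / stats["neg"] if stats["neg"] else float("nan"),
            "fnr": stats["fn"] / stats["pos"] if stats["pos"] else float("nan"),
        }
    return rates

# Hypothetical labels, model predictions, and group attribute.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
groups = ["en", "en", "es", "es", "es", "en", "en", "es"]
for group, rate in group_error_rates(y_true, y_pred, groups).items():
    print(group, rate)
```

A gap in these rates between groups is an indicator that warrants a closer look, not a verdict of unfairness on its own.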

Unfairness in an AI model can have many possible causes, including insufficient training data, a lack of informative features, a misspecified prediction target, or measurement error in the input features. Even for the most sophisticated AI researchers and engineers, these problems are not straightforward to fix, but looking at indicators that unfairness may exist at the implementation level can prompt and inform a deep dive into a particular model to diagnose and help remedy the root cause of an issue.
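One of the possible causes above is measurement error, which can enter a system through the labels people produce when executing labeling instructions. As a hedged illustration of the signal-detection-theory idea mentioned earlier, and not the paper’s exact formulation, the sketch below computes a labeler’s sensitivity (d') and decision criterion from their agreement with ground-truth labels; comparing these quantities across content from different groups can hint at whether labelers are applying instructions consistently. The counts are hypothetical.

```python
# Illustrative sketch only: signal detection theory summarizes a labeler's
# decisions with sensitivity (d') and criterion (c). Comparing these values
# for content drawn from different subpopulations can surface whether
# labeling instructions are being applied differently across groups.
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) from a labeler's confusion counts against ground truth."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    # Small adjustment keeps rates strictly between 0 and 1 so z() stays finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical labeling counts for content from two groups:
# (hits, misses, false alarms, correct rejections)
for group, counts in {"group_a": (80, 20, 10, 90), "group_b": (70, 30, 25, 75)}.items():
    d, c = dprime_and_criterion(*counts)
    print(f"{group}: d'={d:.2f}, criterion={c:.2f}")
```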

Addressing fairness concerns often requires weighing difficult trade-offs, learning from relevant subject matter experts, and hearing from people with lived experiences. It also means disentangling concerns that stem from implementation errors from those that derive from policy and product choices. That way, the right experts can suggest the best ways to mitigate fairness concerns without inadvertently introducing more or greater concerns. Finally, it’s important to continue engaging with diverse communities to understand the experiences people have with our products.

Sharing our initial approach to thinking about fairness in AI is part of our broader effort to build technology responsibly and to share our ideas and insights for doing so with the community. Our work on fairness is ongoing; we welcome feedback from experts and other stakeholders as we develop and refine our approach. Facebook’s Responsible AI team has relied on these and other approaches to help advance the fairness of an initial set of AI-powered systems at Facebook. We’re continuing to develop and adapt other methods to identify and tackle fairness concerns in more complex systems for which best practices are still emerging. And we’ll continue to contribute to these important discussions surrounding fairness and other pillars of responsible AI so that researchers, policy experts, and the broader community can collaborate, build on one another’s work, and make AI fairer for everyone.

Read the full paper here.

Written By

Miranda Bogen

AI Policy Manager

Sam Corbett-Davies

Research Science Manager