Facebook’s five pillars of Responsible AI

June 22, 2021

AI today is a core component of a vast range of technology used by billions of people around the world. Here at Facebook, it is part of systems that do everything from ranking posts in your News Feed to tackling hate speech and misinformation to responding to the COVID-19 pandemic. But, as with other emerging technologies, AI also raises many hard questions around issues such as privacy, fairness, accountability, and transparency.

The challenges raised by AI are new territory for everyone, so standards are still emerging and we admittedly don’t yet have all the answers to these important questions. We at Facebook — and we as a society — are at the beginning rather than the end of our Responsible AI journey, and finding the right approaches and then scaling them across Facebook’s large family of products and features will take time. But we recognize how important these issues are, and we’re committed to addressing them in an open and collaborative way.

That’s why Facebook created a dedicated, cross-disciplinary Responsible AI (RAI) team within its AI organization, much like we have previously invested in a dedicated Privacy team to lead our data privacy efforts and Integrity teams to enforce the policies that keep people safe on our platforms. Along with many others working across the company’s different product organizations, the RAI team is building and testing approaches to help ensure that our machine learning (ML) systems are designed and used responsibly. We are doing so with the benefit of regular consultation and collaboration with outside experts and regulators, and in line with RAI’s overall goal of ensuring that AI at Facebook benefits people and society.

It’s encouraging to see the growing momentum among governments, industry experts, and others to answer these challenges collaboratively. Most notably, the European Commission in April debuted its proposal for a risk-based approach to regulating AI. We look forward to engaging with EU lawmakers, and we welcome this proposal as a first step toward AI regulation that we hope will protect people’s rights while ensuring continued innovation and economic growth. As with other emerging technologies, we believe proactive regulation of AI is necessary. And it’s particularly important to ensure that AI governance is based on foundational values of respect for human rights, democracy, and the rule of law, particularly when countries such as China are pursuing global AI superiority unrestrained by those rights and values.

Those foundational values are at the root of the wide range of principles statements that have been released around responsible AI development, most prominently the European Commission’s High-Level Expert Group’s Ethics Guidelines for Trustworthy AI and the Organisation for Economic Co-operation and Development’s (OECD) Principles on Artificial Intelligence, which Facebook helped develop. Facebook, in turn, has organized its Responsible AI efforts around five key pillars that were heavily influenced by those principles: Privacy & Security, Fairness & Inclusion, Robustness & Safety, Transparency & Control, and Accountability & Governance.

We’re sharing new details here on our efforts inside Facebook in this crucial area, including work from our RAI team and from across the company. While much of our work is still in early stages and we have much to do, these five pillars will guide our efforts to help ensure that Facebook uses AI responsibly.

Privacy & Security

At Facebook, we believe protecting the privacy and security of people’s data is the responsibility of everyone at the company.

That’s why we have built our cross-product Privacy Review process, through which we assess privacy risks involving the collection, use, or sharing of people’s information. The process is also designed to help us mitigate the risks we identify, including in features and products driven by AI. We recently published an in-depth progress update on our company-wide privacy efforts, going into greater detail about our review process and the eight core Privacy Expectations that serve as its foundation.

Over the next several years, we’ll leverage the centralized tools we’ve developed to manage the Privacy Review process, supporting continued investment in infrastructure improvements that systematize how we enforce our privacy decisions. These changes will shift more human-driven processes to automated ones, making it easier to consistently enforce our privacy commitments across our products and services, including those driven by AI.

Additionally, in order to standardize data flows and the development of AI for production purposes, we are unifying AI systems into one platform and ecosystem that will support all teams across the company. With a single platform, we can quickly and responsibly evolve models that perform countless inference operations every day for the billions of people who use our technologies.

Of course, AI can raise novel privacy and security concerns that go beyond questions of data infrastructure. In particular, face and speech recognition and other AI-driven technologies that make use of sensitive information have raised significant privacy concerns among policymakers and the public. That’s why we ask people eligible for Face Recognition to affirmatively turn it on before we recognize them, and we provide clear controls to turn it off. Similarly, we allow people to turn off storage of voice interactions on their Portal devices. When stored, voice interactions improve our speech recognition algorithms, and we take strict steps to protect people’s privacy when our review team transcribes those interactions.

AI can also create new opportunities for protecting privacy, which is why we are heavily investing in research around privacy-preserving machine learning technologies like differential privacy, federated learning, and encrypted computation — and teaching the AI community how to deploy them. Our goal is to leverage this research to make our products work better for people while collecting less data and better protecting the data that we do collect. For example, the computer vision processing that allows Portal’s Smart Camera to accurately focus on people in the camera frame happens on the device. We are also publicly sharing the fruits of our research in the form of AI privacy resources like CrypTen, a tool to help researchers who aren’t cryptography experts easily experiment with ML models using secure computing techniques, and Opacus, an open source library for training ML models with differential privacy, to help advance the state of the art and improve AI privacy across the industry.
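
As a rough illustration of the differential privacy piece, here is a minimal sketch of training a toy PyTorch model with Opacus. The model, data, and hyperparameters are illustrative placeholders rather than any production setup, and the sketch assumes Opacus’s PrivacyEngine interface.

```python
# Sketch: wrapping a standard PyTorch training setup with Opacus so that
# gradients are clipped per sample and noised for differential privacy.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy model and data, stand-ins for a real training pipeline.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
data = TensorDataset(torch.randn(1024, 20), torch.randint(0, 2, (1024,)))
loader = DataLoader(data, batch_size=64)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# PrivacyEngine rewires the model, optimizer, and loader so that each update
# uses per-sample gradient clipping plus Gaussian noise.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,   # more noise -> stronger privacy, lower utility
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

for epoch in range(3):
    for features, labels in loader:
        if len(labels) == 0:
            continue  # Poisson sampling can occasionally yield an empty batch
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()

# Track the privacy budget spent so far.
epsilon = privacy_engine.get_epsilon(delta=1e-5)
print(f"(epsilon = {epsilon:.2f}, delta = 1e-5)")
```

The noise_multiplier and max_grad_norm values control the privacy/utility trade-off: more noise and tighter clipping strengthen the privacy guarantee at some cost to model accuracy.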

Fairness & Inclusion

At Facebook, we believe that our products should treat everyone fairly and work equally well for all people, which is why Fairness is one of the core Privacy Expectations that help guide the Privacy Review process described above.

In the context of AI, our Responsible AI team has developed, and is continually improving, our Fairness Flow tools and processes, which help our ML engineers detect certain forms of potential statistical bias in certain types of AI models and labels commonly used at Facebook, as described in our recent academic paper. Our goal is to eventually scale similar measurement to all our AI products.
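
Fairness Flow itself is an internal toolkit, but a minimal sketch of the kind of per-group measurement it enables might look like the following; the groups, metric, and data here are purely illustrative and are not drawn from Fairness Flow.

```python
# Sketch: comparing a simple error metric across demographic groups to surface
# potential statistical bias. Not Fairness Flow itself, just an illustration
# of per-group measurement for a binary classifier.
from collections import defaultdict

def per_group_false_positive_rate(records):
    """records: iterable of (group, true_label, predicted_label) with binary labels."""
    negatives = defaultdict(int)
    false_positives = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 0:
            negatives[group] += 1
            if y_pred == 1:
                false_positives[group] += 1
    return {g: false_positives[g] / n for g, n in negatives.items() if n}

# Hypothetical predictions, grouped by a hypothetical attribute.
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(per_group_false_positive_rate(records))  # large gaps between groups warrant investigation
```

In practice, the hard part is less the arithmetic than choosing which groups, labels, and metrics are appropriate for a given product, which is why measurement is paired with the maturity framework and consultations described below.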

The RAI team has also developed a framework for evaluating the fairness maturity of our products. That framework is now beginning to be incorporated into the goals of all the product teams in our Facebook AI organization, and we aim to eventually require similar goals for all product teams across the company. To help product teams formulate and meet those goals, we’ve established a multidisciplinary team of experts to offer targeted Fairness Consultations about specific products.

One fairness effort that we’re particularly proud of is our work to help ensure that the AI-driven Portal Smart Camera accurately focuses on people on-camera regardless of apparent skin tone or gender presentation. This foundational fairness work has helped inform computer vision efforts across a range of our products, and we recently released our Casual Conversations data set, composed of over 45,000 videos designed to similarly help researchers evaluate their computer vision and audio models for accuracy across a diverse set of ages, genders, apparent skin tones, and ambient lighting conditions.

Measurement is important, but fairness in AI can’t simply be reduced to a number, a checklist, or a mathematical definition. What “fairness” means will often be unclear and contested, and can differ based on the particular product or context at issue. To help us consider these issues from a broad range of perspectives, Facebook’s Responsible Innovation team and Diversity, Equity & Inclusion team both facilitate input from a wide range of external experts and voices from underrepresented communities. They also solicit input from employees with diverse lived experiences, through both a Diversity Advisory Council and an Inclusive Product Council, which advise teams on how particular communities may be affected by new products and how existing products might be improved.

We are also prioritizing AI Diversity & Inclusion education efforts for our AI team when hiring and training employees, and setting clear D&I expectations for our AI managers. We aim to better ensure that the people making our AI products are from as diverse a range of backgrounds and perspectives as the people using them, and that we are inclusive of a broad range of voices in our decision-making.

Robustness & Safety

At Facebook, we believe that AI systems should meet high performance standards, and should be tested to ensure they behave safely and as intended even when they are subjected to attack.

That’s why we’ve established an AI Red Team, which partners with our product teams to test how robust our AI-powered integrity systems are against adversarial threats. We are also developing new software tools for testing and improving robustness, and sharing them with the AI research and engineering community. For example, our open source Captum library provides state-of-the-art algorithms for understanding which features of an AI model built with our open source PyTorch ML framework contribute to that model’s outputs. Captum helps AI developers interpret their models, benchmark their work, and troubleshoot unexpected model outputs. Captum will soon include tools to help simulate adversarial attacks, much as our Red Team adversarially tests our own models. And, just last week, we released another robustness tool called AugLy, which helps make models more robust to perturbations of unimportant attributes of the data so they focus on the attributes that matter.
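
For a sense of what Captum looks like in use, here is a minimal sketch that attributes a toy PyTorch model’s prediction back to its input features with Integrated Gradients; the model and inputs are placeholders, not a real Facebook model.

```python
# Sketch: using Captum's Integrated Gradients to see which input features
# drive a PyTorch model's prediction. Model and data are toy placeholders.
import torch
from torch import nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

inputs = torch.randn(4, 8, requires_grad=True)  # a small batch of examples

ig = IntegratedGradients(model)
# Attribute the score of class 1 back to the 8 input features.
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)

print(attributions.shape)  # (4, 8): one attribution per feature per example
print(delta)               # convergence check for the approximation
```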

In addition to contributing to outside research on robustness, we are also learning from it. For example, leveraging methodologies proposed by external experts, we’ve developed a new framework and tools to better detect and mitigate “model drift” — that is, the degradation of a model’s predictive power due to changes in the environment.
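
The specifics of that framework are internal, but one common way to quantify this kind of drift, shown here purely as an illustration, is to compare the distribution of a model’s recent scores against a reference window using a measure such as the population stability index.

```python
# Sketch: a population-stability-index style check comparing a model's recent
# score distribution to a baseline window. Purely illustrative of drift
# monitoring, not the internal framework described above.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Higher PSI means the score distribution has shifted more."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) on empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

baseline_scores = np.random.beta(2, 5, size=10_000)  # scores at training time
recent_scores = np.random.beta(3, 4, size=10_000)    # scores observed in production
print(f"PSI = {population_stability_index(baseline_scores, recent_scores):.3f}")
```

A monitoring job can compute a statistic like this on a schedule and alert engineers when it crosses a threshold, prompting investigation or retraining.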

Ensuring robustness and safety is an important challenge for all companies offering AI-driven services, not just Facebook. We believe industry needs to tackle that challenge collectively, which is why we’ve released the research and tools that we have, and why we’ve invested so much in collaborative efforts around adversarial AI testing. For example, we came together with the Partnership on AI, Microsoft, leading academics, and others to create the Deepfake Detection Challenge. That competition drew more than 2,000 participants who trained their deepfake detection models using a unique new data set created by Facebook. Likewise, we created and shared a data set of more than 10,000 new multimodal examples for the Hateful Memes Challenge, a first-of-its-kind competition to help build AI models that better detect multimodal hate speech. And most recently, to further speed up innovation across the field of AI robustness testing, we hosted a virtual cross-industry event in May — the “AI Red Table” — to facilitate the sharing of best practices and learnings between AI industry leaders.

Transparency & Control

At Facebook, we believe that the people who use our products should have more transparency and control around how data about them is collected and used, which is why it is one of our eight core Privacy Expectations.

Beyond privacy, in the context of Responsible AI, we are striving to be more transparent about when and how AI systems are making decisions that impact the people who use our products, to make those decisions more explainable, and to inform people about the controls they have over how those decisions are made.

That’s why we’ve introduced a number of tools over the years to increase transparency around why people see particular News Feed content and ads (our Why Am I Seeing This, or WAIST, tools), and to provide additional transparency and control over the data and off-Facebook activity that may influence how we select the News Feed items and ads we think will be most relevant to them. We are also giving people more control over how our AI systems rank content in their News Feed, including more control over which favorite friends or Pages should influence that ranking, and even the ability to turn off AI-driven News Feed personalization altogether.

Transparency isn’t just important for the people using our products, though. We also look forward to discussing with outside experts and regulators more about how our models work, in a way that preserves privacy and doesn’t reveal trade secrets. That is why, for example, we launched a new online Transparency Center in May, where we’ll publish information about the News Feed ranking process, including more information about some of the important signals in that process, as well as updates on significant changes to those ranking algorithms.

The Responsible AI team is also collaborating with other product teams and building on previous academic research and industry efforts to develop Facebook’s own method for creating simple, standardized documentation of our models, in a form commonly known as model cards. The Instagram Equity team has made the most progress in this effort so far, already using model cards across Instagram’s integrity systems and aiming to apply model cards to all Instagram models before the end of next year.
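
As a rough sketch of the kind of structured documentation a model card captures, consider the following; the fields and values are illustrative only and do not reflect Facebook’s internal template.

```python
# Sketch: a minimal, structured model card. Fields follow the spirit of the
# model cards literature; the template and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    evaluation_data: str
    metrics: dict = field(default_factory=dict)      # headline metrics, incl. subgroup breakdowns
    limitations: list = field(default_factory=list)  # known failure modes and caveats
    ethical_considerations: list = field(default_factory=list)

card = ModelCard(
    name="example-integrity-classifier-v1",  # hypothetical model
    intended_use="Flag potentially violating posts for human review.",
    training_data="Hypothetical labeled posts, 2020-2021.",
    evaluation_data="Held-out labeled posts, stratified by language.",
    metrics={"precision": 0.91, "recall": 0.84, "recall_by_language": {"en": 0.87, "es": 0.82}},
    limitations=["Lower recall on low-resource languages."],
    ethical_considerations=["Decisions are appealable and reviewed by humans."],
)
print(card.name, card.metrics["precision"])
```

Keeping this information in a consistent, machine-readable form is what makes it possible to later surface it automatically in transparency features, as discussed below.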

We are developing these approaches to model documentation in dialogue with other related efforts across the industry. For example, we support the About ML initiative at the Partnership on AI, which aims to develop industry standards in this area. We also held two workshops this spring through TTC Labs to bring together AI designers and AI policy experts to help chart the future of AI model documentation.

We understand that experts, regulators, and everyday people all are eager to more easily understand why AI systems make the decisions they make. There are many challenges in explaining the predictions of complex AI systems, but we are working to push the limits, including with interpretability software such as Captum. Although work in this area is still in its infancy, our hope is that ultimately we will be able to build an integrated transparency solution that can automatically feed information from internal documentation efforts like model cards into new transparency features and controls for the people using our products.

Accountability & Governance

At Facebook, we believe in building reliable processes to ensure accountability for our AI systems and the decisions they make.

This requires building governance systems that ensure our AI systems perform to high standards, satisfy external expectations and internal best practices, and identify and mitigate any potential negative impacts those systems might have. It also means making sure that, wherever necessary or appropriate, humans are able to monitor these systems and intervene.

That’s why we’re doing the work described in this blog post, from investing in our Privacy Review efforts, to developing approaches and tools to improve our understanding and ability to address concerns about our AI systems, to increasing transparency and control around our AI products and features. And that’s why, except for removals of some content posing extreme safety concerns, we give people a way to appeal and seek additional human review of a broad range of content-takedown decisions, which are sometimes first made with the assistance of AI systems. We hold ourselves accountable through quarterly Community Standards Enforcement Reports and through an independent Oversight Board that considers further appeals of both removal and nonremoval of content and makes binding rulings around our most difficult and important content decisions.

We also recognize that quickly evolving technologies like AI can raise new and unanticipated issues, so we must continually improve our processes for identifying and mitigating negative impacts. To help address potential harms early in the product cycle, Facebook’s Responsible Innovation team provides foresight workshops, office hours, and extended collaborations to help product teams identify and brainstorm solutions to a wide range of individual and societal risks. Meanwhile, the Responsible AI team has begun developing its own AI-specific impact assessment framework that we hope will complement our existing launch review processes. We understand the importance of thinking holistically about how our AI products affect society, and we’re continually experimenting with new ways to educate and inspire our engineers to consider the big picture and weigh the long-term impact of their work.

Collaborating on the future of Responsible AI

Because this is still a relatively new field, there are not yet clearly defined standards and processes for AI governance and for assessing potential negative AI-related impacts. We — not just Facebook but also the tech industry, the AI research community, policymakers, advocacy groups, and others — need to collaborate on figuring out how to make AI impact assessment work at scale, based on clear and reasonable standards, so that we can identify and address potential negative AI-related impacts while still creating new AI-powered products that will benefit us all. We must similarly collaborate on developing basic practical standards around AI fairness, privacy, robustness, and transparency before they are codified into law.

That’s why we are prioritizing a broad range of collaborative research around AI best practices. For example, we’re actively participating in efforts to establish clear AI principles and best practices, including collaborating with the OECD’s new AI Observatory project to study and disseminate emerging best practices that are in line with its 2019 AI principles. Through our recently launched Open Loop partnership, we’re building innovative “policy prototyping” projects for testing new potential AI policy requirements with regulators and startups before they become law, to ensure that they are both practical and impactful; we have already launched projects in Europe, Asia, and Latin America, with more to come. We’re funding a global effort to solicit diverse academic research on AI ethics issues, supporting projects in Asia, Africa, and Latin America and providing foundational support for an independent Institute for Ethics in Artificial Intelligence at the Technical University of Munich. And we’re a founding partner in the Partnership on AI, the premier cross-industry, cross-civil-society multistakeholder forum for collaboratively developing AI best practices.

Of course, the people who use our products and services are another key stakeholder in how we chart our course around Responsible AI, and their lived experiences are another critical form of feedback to factor into our thinking. That’s why we have an integrated user research practice that allows us to better understand the core needs of the people using our products and help ensure that we are building technology and experiences that benefit people, society, and the world.

Maintaining a focus on the needs of the people using our products while working together across governments, industries, and broader AI expert communities in academia and civil society, Facebook aims to help chart a course for the future of Responsible AI development that leads to a safer, fairer, and more prosperous society for all. We look forward to that journey and to sharing more about our own Responsible AI practices as they evolve and grow.
