Responsible AI

Meta’s progress and learnings in AI fairness and transparency

January 11, 2023

While AI has brought huge advancements to humanity and our planet, it also has the potential to cause unintended consequences, and technology companies must proactively work to mitigate these issues. So, as we did in 2021, we’re providing an update on our Responsible AI efforts over the past year.

The work described in this blog post includes building more diverse datasets, balancing privacy and fairness, preventing bias in ad delivery systems, avoiding harmful or disrespectful associations, giving people more control over what they see, offering more transparency into AI models, and collaborating on standards and governance.

Our Responsible AI efforts are propelled by a cross-disciplinary team whose mission is to help ensure that AI at Meta benefits people and society. Our Civil Rights Team, for example, has been integral to our work, bringing subject-matter expertise with technical, policy, and legal assessments and collaboratively designing technical solutions.

Meta’s work on Responsible AI is driven by our belief that everyone should have equitable access to information, services, and opportunities. We believe that the responsible foundation we are building will ultimately shape future technologies, including the metaverse. As we reflect on the progress we made in 2022, we hope to foster more collaborative and transparent dialogue across disciplines and audiences about the path ahead for these critical issues.

Building diverse datasets and powerful tools for more inclusive AI products

One way we are addressing AI fairness through research is the creation and distribution of more diverse datasets. Datasets that are used to train AI models can reflect biases, which are then passed on to the system. But biases can also stem from what isn't in the training data. A lack of diverse data, meaning data that represents a wide range of people and experiences, can lead to AI-powered outcomes that reflect problematic stereotypes or fail to work equally well for everyone.

In 2022, we worked to prepare the Casual Conversations v2 (CCv2) dataset, which is unique in the field in terms of its proposed categories and the countries where the data collection will take place. This work, which we will release in 2023, is a continuation of the Casual Conversations dataset we released in 2021, which is composed of more than 45,000 videos designed to help researchers evaluate their computer vision and audio models for accuracy across a diverse set of ages, genders, skin tones, and ambient lighting conditions.

In 2022, we also introduced and open-sourced two new datasets to help measure fairness and mitigate potential bias in natural language processing (NLP) models. In particular, these datasets include a more comprehensive representation of different demographic dimensions for measuring fairness in these models, including terms related to gender identity, age, race, and disability.

Developing reliable, large-scale ways of measuring fairness and mitigating bias gives AI researchers and practitioners helpful benchmarks that can be used to test NLP systems, driving progress toward the goal of ensuring that AI systems treat everyone fairly. We shared these datasets with the research community so that people can better assess the fairness of their text-based AI systems and expand their terminology.
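As an illustration of how demographic-term datasets like these can be used, the sketch below runs a simple perturbation check: the same template sentence is filled with different demographic descriptors and scored by the model under test, and a large spread in scores flags a potential fairness issue. The template, descriptor list, and stand-in scorer are illustrative and are not drawn from the released datasets.

```python
def model_score(text: str) -> float:
    """Stand-in for the NLP model under test (e.g., a toxicity or sentiment
    classifier). A real check would call the actual model; this dummy value
    just lets the sketch run end to end."""
    return 0.0

TEMPLATE = "I am a {descriptor} person."
DESCRIPTORS = ["young", "elderly", "deaf", "nonbinary"]  # illustrative descriptor terms

# Score the same sentence with each descriptor swapped in.
scores = {d: model_score(TEMPLATE.format(descriptor=d)) for d in DESCRIPTORS}
spread = max(scores.values()) - min(scores.values())
print(f"Score spread across descriptors: {spread:.3f}")
# A large spread suggests the model treats otherwise-identical text differently
# depending on the demographic term, which warrants further investigation.
```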

In addition to our work on datasets, we’ve continued to explore new ways of understanding fairness in complex AI systems. For example, we proposed a method for considering preference-based fairness in personalized recommender systems that would allow practitioners to get a more holistic view of fairness across all groups. We detailed this work in a research paper that was named Outstanding Paper at the 2022 AAAI Conference.

By fostering the development of more inclusive datasets, we can create AI systems with the potential to bring the world closer together, helping people communicate across languages and cultures and creating experiences that reflect the diversity of the more than 3 billion people who use Meta’s platforms.

Protecting privacy while addressing fairness concerns

Improving fairness often requires measuring the impact of AI systems on different demographic populations and mitigating unfair differences. Yet the data necessary to do so is not always available, and even when it is, collecting and storing it can raise privacy concerns. After engaging with civil rights advocates and human rights groups, who further confirmed these fairness challenges, we identified new approaches for accessing data that can meaningfully measure the fairness of the AI models on our platforms across race.

In 2022, we made advances in our ability to measure whether people’s experiences with our technology differ across race. We launched a research project on Instagram to make progress in assessing and improving our technologies to advance fairness. We worked with Oasis Labs, a privacy-focused technology company, to let people in the United States who choose to self-identify their race for research purposes do so while protecting their privacy. We worked with research partners to safeguard survey responses by adapting a well-established privacy-enhancing method called secure multiparty computation (SMPC), in which data is securely distributed among multiple facilitators who together can perform computations over the combined encrypted information.

This encryption-based approach, in which data is split between multiple parties, has been used for years in other fields, such as secure auctions, distributed voting, and statistical analysis. SMPC provides a strong guarantee that individuals’ responses to the survey cannot be accessed by any party, including Meta.
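As a rough illustration of the idea behind SMPC, the sketch below uses additive secret sharing, one common building block: each survey response is split into random-looking shares held by different facilitators, and only the combination of all facilitators’ partial sums reveals the aggregate count, never any individual answer. The share-splitting scheme, parameter names, and example values are illustrative and are not Meta’s implementation.

```python
import random

PRIME = 2**61 - 1  # a large prime; all arithmetic is done modulo this value

def split_into_shares(value: int, num_facilitators: int) -> list[int]:
    """Split one survey response into additive shares. Each share looks like
    random noise on its own; only the sum of all shares (mod PRIME) recovers
    the original value."""
    shares = [random.randrange(PRIME) for _ in range(num_facilitators - 1)]
    last_share = (value - sum(shares)) % PRIME
    return shares + [last_share]

def aggregate(shares_per_person: list[list[int]]) -> int:
    """Each facilitator sums the shares it holds; combining those partial sums
    yields the aggregate count without exposing any individual response."""
    num_facilitators = len(shares_per_person[0])
    partial_sums = [
        sum(person_shares[i] for person_shares in shares_per_person) % PRIME
        for i in range(num_facilitators)
    ]
    return sum(partial_sums) % PRIME

# Example: three people answer a yes/no survey question (1 = yes).
responses = [1, 0, 1]
shares = [split_into_shares(r, num_facilitators=3) for r in responses]
print(aggregate(shares))  # -> 2: the aggregate count, without revealing who said yes
```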

In the area of ad fairness, our privacy-enhanced version of the Bayesian Improved Surname Geocoding (BISG) method, which we announced in 2021, is enabling progress through iterative, aggregate measurement (discussed below).
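For context, BISG combines surname-based and geography-based probabilities via Bayes’ rule to produce an aggregate estimate of race or ethnicity. The sketch below shows the core calculation under the standard conditional-independence assumption; the probability tables are made up for illustration, and the privacy enhancements Meta applies are not shown.

```python
def bisg_posterior(p_race_given_surname: dict[str, float],
                   p_geo_given_race: dict[str, float]) -> dict[str, float]:
    """Combine surname-based and geography-based probabilities via Bayes' rule:
    P(race | surname, geo) is proportional to P(race | surname) * P(geo | race),
    assuming surname and geography are conditionally independent given race."""
    unnormalized = {
        race: p_race_given_surname[race] * p_geo_given_race.get(race, 0.0)
        for race in p_race_given_surname
    }
    total = sum(unnormalized.values())
    return {race: p / total for race, p in unnormalized.items()}

# Illustrative numbers only (not real census or survey figures)
p_race_given_surname = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}
p_geo_given_race = {"group_a": 0.01, "group_b": 0.03, "group_c": 0.02}
print(bisg_posterior(p_race_given_surname, p_geo_given_race))
```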

Through approaches like these, we can better examine our systems to identify areas where we can make improvements that get us even closer to our goal of building and operating AI systems that treat everyone fairly.

Innovating to improve fairness in ad delivery

A critical aspect of fairness is ensuring that people of all backgrounds have equitable access to information about important life opportunities, like jobs, credit, and housing. Our policies already prohibit advertisers from using our ad products to discriminate against individuals or groups of people. Specifically, to better protect against discrimination, we give advertisers running housing ads on our platforms a limited number of targeting options while setting up their campaigns, including restrictions on targeting by age, gender, or zip code. However, even with neutral targeting options and model features, factors such as people’s interests, their activity on the platform, or competition across ad auctions for different audiences could affect how ads are distributed to different demographic groups. Therefore, as part of our settlement with the Department of Justice and our ongoing work with the Department of Housing and Urban Development, we designed and started to roll out a new technology called the Variance Reduction System (VRS), which aims to better ensure an equitable distribution of housing ads and, eventually, employment and credit ads.

The VRS uses reinforcement learning, a type of machine learning (ML) that learns from trial and error to optimize toward a predefined outcome, so that the audience that ends up seeing an ad for housing, employment, or credit more closely reflects the population of people who are eligible to see that ad.

We’ll do this by regularly measuring the actual audience for a particular ad to see how it compares with the demographic distribution (age, gender, and estimated race or ethnicity) of the audience the advertiser has selected. Importantly, the Variance Reduction System will not be provided with individual-level age, gender, or estimated race/ethnicity to make these determinations; instead, it will rely on aggregate demographic measurements, using the privacy-enhanced BISG method (described above) to estimate race. If our measurements show a wide variance between the demographics of the selected audience and who is actually seeing the ad, the VRS will automatically act to reduce that variance. In the process, we’ll help ensure that ads are delivered more equitably and can be seen by audiences that otherwise might not have seen them.
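As a rough sketch of the measurement step, the code below compares the aggregate demographic mix of the eligible audience with the mix of people who actually saw the ad, using total variation distance, and flags the ad when the gap exceeds a threshold. The metric choice, threshold, and example numbers are assumptions for illustration; they are not the VRS’s actual internals.

```python
def demographic_variance(eligible_share: dict[str, float],
                         delivered_share: dict[str, float]) -> float:
    """Total variation distance between the aggregate demographic mix of the
    eligible audience and the audience that actually saw the ad. Both inputs
    are aggregate shares summing to 1; no individual-level attributes are used."""
    groups = set(eligible_share) | set(delivered_share)
    return 0.5 * sum(abs(eligible_share.get(g, 0.0) - delivered_share.get(g, 0.0))
                     for g in groups)

# Illustrative aggregates only
eligible = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}
delivered = {"18-34": 0.55, "35-54": 0.30, "55+": 0.15}

VARIANCE_THRESHOLD = 0.10  # hypothetical threshold
if demographic_variance(eligible, delivered) > VARIANCE_THRESHOLD:
    # Signal the delivery system to adjust so that future delivery narrows the gap.
    print("Variance too wide: adjust delivery to reduce the gap")
```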

Generating responsible associations

Fairness doesn’t just mean improving equitable access to positive opportunities or ensuring that our products work equally well regardless of someone’s demographic characteristics or what language they are using. It also means working to ensure that our AI systems don’t generate content that is harmful or disrespectful to historically marginalized communities.

Meta already has numerous systems and policies in place to detect and remove harmful content such as hate speech. But a different type of harmful content can be generated inadvertently when AI-driven recommendation systems produce a harmful association. This can arise when pieces of content that are harmless as standalone topics are paired in an offensive way. Even benign topics, when associated with groups of people, can form associations that are problematic or degrading and that exacerbate existing stereotypes.

The risk of problematic associations is a shared challenge across platforms that use AI to make recommendations or generate content, including social media platforms. The harmful conceptual association between groups of people and negative semantic terms can arise through a variety of routes, which can reflect and reinforce biases and bigotries embedded in social and semantic data.

In 2022, we assembled a cross-disciplinary team, including people from our Civil Rights, Engineering, AI Research, Policy, and Product teams, to better understand problematic content associations in several of our end-to-end systems and to implement technical mitigations to reduce the chance of them occurring on our platforms that use AI models.

As part of this collaborative effort, we carefully constructed and systematically reviewed the knowledge base of interest topics for use in advanced mitigations that more precisely target problematic associations. As more research is done in this area and shared with the greater community, we expect to build on this progress and continue to improve our systems.

Giving more control over AI-driven feeds and recommendations

AI-driven feeds and recommendations are a powerful tool for helping people find the people and content they are most interested in, but we want to make sure that people can manage their experience in ways that don’t necessarily rely on AI-based ranking. While we already allow people to adjust their Feed preferences in a variety of ways, this year we also introduced an AI-based feature called Show More/Show Less that lets people directly influence their AI-driven personal recommendations.

When people click to show more or less of a type of content when the buttons are featured on select posts, our AI systems work to understand the intent behind the click. For example, clicking Show Less on a cousin’s post about their new convertible might mean “show me fewer posts by that person” or “show me fewer posts about cars.” An effective recommendation system must be able to differentiate the person’s intent in order to deliver the type of content people want to see, and as more people use this feature, it will get better at understanding what they do and don’t want to see.
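As a rough illustration of how such an ambiguous signal might be handled, the toy sketch below spreads a mild downweight across both the author and the post’s topics when someone clicks Show Less, so repeated clicks gradually resolve the intent. The function names, weights, and damping scheme are hypothetical and are not Meta’s ranking system.

```python
from collections import defaultdict

# Hypothetical per-user preference multipliers; 1.0 means no adjustment.
author_weight: dict[str, float] = defaultdict(lambda: 1.0)
topic_weight: dict[str, float] = defaultdict(lambda: 1.0)

def record_show_less(author: str, topics: list[str], damping: float = 0.8) -> None:
    """A single 'Show Less' click is ambiguous between the author and the topics,
    so spread a mild downweight across both; repeated clicks on the same author
    or topic compound and gradually disambiguate the intent."""
    author_weight[author] *= damping
    for topic in topics:
        topic_weight[topic] *= damping

def adjusted_score(base_score: float, author: str, topics: list[str]) -> float:
    """Scale the ranker's base relevance score by the learned preference weights."""
    weight = author_weight[author]
    for topic in topics:
        weight *= topic_weight[topic]
    return base_score * weight

record_show_less("cousin", ["cars"])
print(adjusted_score(0.9, "cousin", ["cars"]))   # downweighted
print(adjusted_score(0.9, "friend", ["travel"])) # unchanged
```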

This year on Facebook, we also introduced a Feeds tab, which helps people find the most recent posts from their friends, Favorites, Pages, and groups. People can curate a Favorites list of the friends and Pages they care about most and filter their content in this new tab. We rolled out a similar feature on Instagram, where people can choose to see posts from their favorite accounts in a chronological feed. We also rolled out Following, a feature that allows people to see posts only from people they’re following.

Developing new methods for explaining our AI systems

Because AI systems are complex, it is important that we develop documentation that explains how systems work in a way that experts and nonexperts alike can understand. One way we’ve done this is by prototyping an AI System Card tool that provides insight into an AI system’s underlying architecture and helps better explain how the system operates.

System Cards are one of many ways to practice transparency about AI models. For a close look at individual AI models, we shared Model Cards for some of our most impactful research releases, including BlenderBot, an open source language generation model, and OPT-175B, the first 175-billion-parameter language model made available to the broader AI research community. We also shared the code and a detailed look into the development process. By sharing these sorts of details about OPT-175B, we aim to accelerate research around language generation systems so the broader field can work toward making these systems safer, more useful, and more robust.
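For a sense of what such documentation captures, the sketch below defines a minimal model-card record with a few typical fields. The schema and example values are illustrative, following common model-card conventions rather than Meta’s published cards.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model-card record; fields follow common model-card conventions
    and are illustrative, not a published schema."""
    name: str
    version: str
    intended_use: str
    training_data: str
    evaluation_results: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

# Example entry with illustrative descriptions (not the official card contents).
card = ModelCard(
    name="OPT-175B",
    version="1.0",
    intended_use="Research on large-scale language generation systems",
    training_data="Publicly available text corpora (see the released documentation)",
    known_limitations=["Can produce biased or factually incorrect text"],
)
print(card.name, "-", card.intended_use)
```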

We also introduced our proposal for Method Cards, intended to guide ML engineers on how to mitigate potential shortcomings in order to fix bugs or improve a system’s performance. Our research aims to increase the transparency and reproducibility of ML systems by allowing stakeholders to reproduce the models, understand the rationale behind their designs, and introduce adaptations in an informed way.

As the industry evolves and discussions about model and system documentation and transparency continue, we will continue to identify opportunities to iterate on our approach over time so that it reflects product changes, evolving industry standards, and expectations around AI transparency.

Testing new policy approaches to AI transparency, explainability, and governance

The rapid advance of emerging technologies makes it difficult to fully understand and anticipate how they might eventually impact communities around the world. To help develop forward-looking policies around the development and use of new technology, Meta supports Open Loop, a global experimental governance program. By involving governments, tech companies, academia, and civil society, Open Loop aims to connect tech and policy innovation for closer collaboration between those building emerging technologies and those regulating them.

AI transparency and explainability were the focus of two of our recent Open Loop programs. In 2022, we published the findings and recommendations of our policy prototyping program on AI transparency and explainability in Singapore. The program was rolled out in the Asia-Pacific (APAC) region in partnership with Singapore’s Infocomm Media Development Authority and Personal Data Protection Commission, with 12 APAC companies testing Singapore’s Model AI Governance Framework. The report summarizes the feedback the participating companies provided while implementing the framework and makes recommendations to policymakers on how to further improve it, based on real-world implementation experience and industry feedback. A similar exercise was deployed in Mexico, with the support of Mexico’s National Institute for Transparency, Access to Information and Personal Data Protection, where we tested a framework on transparency and explainability for AI systems in the country (report forthcoming).

Moreover, as part of our second policy prototyping program in Europe, we recently published our first report and recommendations on the European Artificial Intelligence Act, in partnership with European governments and regulatory authorities. Among the provisions tested by 53 AI startups and companies operating in the European Union were the transparency requirements of the draft AI Act, on which participants commented regarding their clarity, feasibility, and cost-effectiveness. We asked companies what level of technical skill would be necessary to meet the requirement of enabling “human oversight,” as well as the skill level required to enable interpretability. One of the report’s recommendations was to distinguish more clearly between the different audiences for explanations and other transparency requirements. We also assessed transparency obligations for people interacting with AI systems via a survey of 469 participants from European Union countries who were shown different variations of AI notification mockups (report forthcoming). The study provides preliminary insights into the effectiveness of disclosure notifications on users’ understanding, trust, and sense of agency.

These Open Loop programs are emblematic of Meta’s evidence-based, broadly collaborative approach to establishing standards around the future of AI governance, and we look forward to continuing these programs in partnership with a broad selection of participating regulators, companies, and expert stakeholders.

Working together to build AI responsibly

By listening to people with lived experiences, subject matter experts, and policymakers, we hope to proactively promote and advance the responsible design and operation of AI systems.

We look forward to sharing more updates in the future and will continue to engage with diverse stakeholders about how we can move forward together responsibly.

Written By

Esteban Arcaute

Director, Responsible AI

Roy Austin

Vice President and Deputy General Counsel

Kevin Bankston

AI Policy Director

Joelle Pineau

Managing Director, FAIR