
Collaborating on the future of AI governance in the EU and around the world

June 15, 2020

The internet is entering a new phase. We are more aware than ever of the liberating benefits of a fast, open, and accessible online world, and we are more aware than ever of the risks that such a world contains. That’s why Facebook has been strongly advocating for the creation of specific rules to address some of the big issues facing the internet now and in the years ahead. In doing so, we want to protect the core values that underpin today’s open internet: that it is accessible to all; that it is transparent and accountable; that you control your own data; and that competition and innovation should be encouraged.

Last spring, Facebook CEO Mark Zuckerberg outlined four areas where Facebook believes smart regulation can make a real difference: harmful content, election integrity, privacy, and data portability. Artificial intelligence is another important area where we hope to work constructively with policymakers to shape the rules of the road. That is why we welcomed the opportunity yesterday to submit Facebook’s comments on the European Commission's recent White Paper, “Artificial Intelligence - A European approach to excellence and trust.”

AI, like the internet itself, is a revolutionary technology that promises a powerful wave of innovation, economic growth, and new approaches to long-standing global challenges. AI is a core technology for Facebook, enabling us to do everything from ranking posts in your News Feed to tackling hate speech and misinformation. It has also helped us rapidly respond to the COVID-19 pandemic, for example, by supporting the creation of new disease maps that help chart the spread of the virus.

However, just like other emerging technologies, AI also raises unique policy and legal challenges - hard questions about how to ensure that the growing number of AI systems that help us make important decisions are fair, transparent, accountable, and privacy-protecting. That is why we are glad to see that the European Union, which has already proved itself a leader in technology regulation with its influential General Data Protection Regulation, is prioritizing those questions.

We agree that the EU - already a global leader in digital technology with a growing AI industry - can and should prioritize building an “ecosystem of excellence” in AI, leveraging this powerful technology to boost its research and industrial capacity, increase its competitiveness, and strengthen its economy. This work is particularly important now, as the EU considers how best to ensure its economic recovery after the pandemic.

We also support the Commission’s goal of creating an “ecosystem of trust” around AI in Europe based on its foundational values of respect for human rights, democracy, and the rule of law. In showing such leadership, the EU will encourage the development of a transatlantic AI governance framework - and, ultimately and hopefully, a global standard - rooted in shared respect for fundamental rights and freedoms and for democratic values. Particularly when countries such as China are pursuing global AI superiority unrestrained by those rights and values, it is all the more important that we embed those rights and values in our AI technologies and in the laws that govern them. Finding a regulatory balance that fosters these values while also enhancing, rather than undermining, the EU’s ability to innovate and compete will be challenging, but it is a challenge Facebook is eager to help meet.

Building both excellence and trust in AI has driven much of our work in recent years. For example, we have:

  • Advanced the state of the art by publishing a wide range of innovative AI research and open source AI software tools through our Facebook AI Research (FAIR) team, which operates world-class AI labs around the world, including in Paris;

  • Supported a diverse range of academic initiatives focused on AI and governance, such as our partnership with the Technical University of Munich to support the creation of an independent Institute for Ethics in AI, as well as AI-focused academic projects in India, the Asia-Pacific region, and Latin America; and

  • Created AI for Good partnerships, such as our work with the AIxSDGs project at the University of Oxford’s Digital Ethics Lab to explore how AI can help meet the United Nations’ Sustainable Development Goals.

Facebook is also working to help define best practices for developing trustworthy AI - through the internal work of our dedicated Responsible AI team, which is developing new tools and processes to further improve the fairness and transparency of our AI systems; through products like our “Why Am I Seeing This?” features, which explain to users why our automated systems are showing them particular content; and through external collaborations like those described above and our founding membership in the Partnership on AI. We participated alongside the Commission and many EU Member States in the expert group that helped develop the OECD’s AI Principles, and we are now working through the OECD’s AI Policy Observatory to help put those principles into practice. We are also piloting new approaches to regulatory sandboxes and policy prototyping, including a current collaboration with Singapore’s Infocomm Media Development Authority to test new ideas around AI explainability through Facebook Accelerator Singapore, and we would welcome similar public-private collaborations with EU policymakers.

There is clearly a fast-growing global dialogue around how best to turn broad responsible AI principles into practical steps that both companies and policymakers can implement. That is why we broadly recommend in our comments that any new AI regulation should support and build on these ongoing efforts to establish best practices, rather than risk cutting them short with inflexible rules that may not be able to adapt to a rapidly changing field of technology. More specifically, our comments make two primary recommendations:

  • Clearly defining high-risk AI. Facebook shares the Commission’s goal of limiting regulation to the highest-risk AI uses that require it - the question is how to define high-risk AI. We urge the Commission to be precise in defining AI and the sectors and subsectors it considers high-risk, and, when defining what counts as a high-risk AI application within those sectors, to avoid broad, undefined terms and exceptions like “immaterial damage” or “exceptional circumstances.”

  • Aligning with GDPR around self-assessment of AI risk. We generally urge that any new AI regulation should build upon the requirements that already exist in GDPR, to provide greater legal clarity, avoid duplicative regulation, and ensure a proportionate approach to these novel issues. In particular, we highlight how the Commission’s proposed system of enforcement - requiring prior conformity assessments of AI systems by regulators or third-party auditors before those systems are deployed in the EU - risks unnecessarily overburdening AI developers and significantly impairing innovation and economic growth that would benefit European citizens. As a more balanced alternative structure, we point to GDPR’s accountability approach based on companies’ self-assessment of risk, with regulatory enforcement when companies fail to properly conduct a risk assessment or mitigate the risks they identify.

In addition to these recommendations, we raise a range of practical questions and concerns about specific elements of the White Paper’s proposal. These include tensions between the proposal and other legal obligations, such as those around data protection and intellectual property, as well as technical concerns about some of the proposed mandatory requirements, drawn from our practical knowledge and experience as an AI company.

As our comments highlight, AI poses complex new challenges to existing legal frameworks, and deciding what an effective and technically feasible AI regulation should look like will not be easy. But we are eager to continue the conversation and collaborate with the European Commission and other policymakers on these hard questions.

In fact, the conversation will be continuing later this week: on June 18, Facebook will be celebrating the fifth anniversary of our FAIR AI lab in Paris with an online interactive conference discussing the future of AI both in the EU and around the world. We look forward to convening even more events to foster dialogue around AI regulation and the Commission’s proposal over the coming months. Together, we hope to ultimately chart a course for the future of AI governance that leads to a safer, fairer, and more prosperous society, with Europe as a global leader in AI innovation.

Facebook’s full comments to the Commission can be found here.

Written By

Nick Clegg

VP, Global Affairs & Communications

Jerome Pesenti

VP of Artificial Intelligence