AI at F8 2019: Open source tools, advances in core technologies, and more

May 01, 2019

As part of our ongoing efforts to make our platforms better, safer, and more inclusive, we’re continuing to improve both the AI technology that underpins much of Facebook and the processes that help ensure our systems are fair and inclusive. We are pushing the state of the art in the computer vision (CV) and natural language processing (NLP) technologies that power our automatic detection systems, while also building new, self-supervised AI tools that are less reliant on labeled data. And to help the wider AI community, we’re open-sourcing new development tools to further streamline the transition from research to production-ready systems.

On the second day of our annual F8 conference, leaders from Facebook's research, engineering, and product teams shared details about these recent advances in AI. Our speakers also addressed fairness and inclusivity, with updates on our efforts to counter algorithmic bias and to ensure that the design of our AI systems reflects the diversity of the people using our products. We also announced new features and educational collaborations for our deep learning framework, PyTorch, along with two open source tools, Ax and BoTorch, that make it easier and more efficient to solve challenging exploration and optimization problems, such as tuning hyperparameters for machine learning models or designing next-generation hardware.
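
Ax provides a high-level interface for this kind of adaptive experimentation, with BoTorch supplying the Bayesian optimization underneath. The sketch below is only illustrative and uses Ax's loop-style optimize API: the hyperparameter names and the synthetic objective are stand-ins, and a real search would train and evaluate a model inside evaluate().

    from ax import optimize

    # Synthetic stand-in for "train a model and return its validation loss";
    # a real evaluation function would fit a model with these hyperparameters.
    def evaluate(parameterization):
        lr = parameterization["lr"]
        momentum = parameterization["momentum"]
        return (lr - 0.01) ** 2 + (momentum - 0.9) ** 2  # pretend validation loss

    best_parameters, best_values, experiment, model = optimize(
        parameters=[
            {"name": "lr", "type": "range", "bounds": [1e-4, 1e-1]},
            {"name": "momentum", "type": "range", "bounds": [0.0, 1.0]},
        ],
        evaluation_function=evaluate,
        minimize=True,
    )
    print(best_parameters)  # e.g. {'lr': ..., 'momentum': ...}

Ax chooses which configurations to try next based on the results of earlier trials, so a search like this typically needs far fewer evaluations than an exhaustive grid.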

Watch the video of all the keynote presentations from day two of F8, and read brief summaries below.

How AI is helping Facebook find bad content

Facebook CTO Michael Schroepfer kicked off Day 2 of F8 with a keynote highlighting the AI tools that help us tackle complex content challenges across our products, such as combining CV with nearest-neighbor search to catch policy-violating content. Schroepfer explained why the pace of progress in AI, particularly the advances on the horizon in semi-supervised and self-supervised learning, makes him optimistic about the power of this technology to help address these issues.
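
The keynote does not describe the production pipeline, but the basic pattern, comparing an embedding of newly uploaded content against an index of embeddings from known policy-violating content, can be sketched with Facebook's open source FAISS similarity-search library. FAISS is not named in the talk, and the dimensions, random vectors, and threshold below are placeholders.

    import faiss
    import numpy as np

    d = 256                                                  # embedding dimension (assumed)
    known_bad = np.random.rand(10000, d).astype("float32")   # stand-in embeddings of known violating content
    faiss.normalize_L2(known_bad)                            # normalize so inner product == cosine similarity

    index = faiss.IndexFlatIP(d)
    index.add(known_bad)

    query = np.random.rand(1, d).astype("float32")           # stand-in embedding of a newly uploaded item
    faiss.normalize_L2(query)

    scores, ids = index.search(query, 5)                     # 5 nearest known-bad neighbors
    if scores[0][0] > 0.9:                                   # placeholder similarity threshold
        print("Send for review: near match to known violating item", int(ids[0][0]))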

Moving toward better content understanding and self-supervised learning

Manohar Paluri, with the Facebook AI group, provided a deep dive into our recent progress in content understanding and self-supervised learning, and how this work is helping make our platforms safer. He shared details on specific CV and NLP systems that are increasing the scale and speed at which we can automatically detect content policy violations, and discussed how self-supervision has moved from a promising research area to an approach that is demonstrating real gains. For example, some of our language understanding systems that combine supervised and self-supervised training consistently outperform models trained on supervised data alone.
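
As a loose illustration of that recipe, and not a description of Facebook's production models, the sketch below pretrains a small PyTorch text encoder with a self-supervised masked-token objective on unlabeled sequences, then fine-tunes the same encoder on a labeled classification task. All tensors are random stand-ins for real data, and the model sizes are arbitrary.

    import torch
    import torch.nn as nn

    VOCAB, DIM, MASK_ID, SEQ_LEN = 1000, 64, 0, 16

    # Shared encoder: token embeddings plus a tiny Transformer.
    encoder = nn.Sequential(
        nn.Embedding(VOCAB, DIM),
        nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
            num_layers=2,
        ),
    )

    # Stage 1: self-supervised pretraining with masked-token prediction on unlabeled text.
    mlm_head = nn.Linear(DIM, VOCAB)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(mlm_head.parameters()), lr=1e-3)
    for _ in range(100):
        tokens = torch.randint(1, VOCAB, (32, SEQ_LEN))      # random stand-in for an unlabeled corpus
        mask = torch.rand(32, SEQ_LEN) < 0.15                # hide 15% of positions
        logits = mlm_head(encoder(tokens.masked_fill(mask, MASK_ID)))
        loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: supervised fine-tuning of the same encoder on a small labeled set.
    classifier = nn.Linear(DIM, 2)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
    for _ in range(20):
        tokens = torch.randint(1, VOCAB, (32, SEQ_LEN))      # random stand-in for labeled examples
        labels = torch.randint(0, 2, (32,))
        features = encoder(tokens).mean(dim=1)               # mean-pool token features
        loss = nn.functional.cross_entropy(classifier(features), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()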

Fairness as a process

As we use AI to help moderate content at scale, we have a responsibility to surface and address fairness questions, such as how we can give everyone a voice while also protecting people and communities from harm. Joaquin Quiñonero Candela, also with the Facebook AI group, talked about best practices for fairness, including surfacing related risks and hard questions, and documenting the decisions that resolve them as part of a transparent process. We need to do this at every implementation step, from defining ground truth and generating training labels to the decisions and interventions we make based on the AI's predictions.

Making AI more inclusive

Lade Obamehinti, who leads technical strategy for Facebook's AR/VR software team, discussed the importance of validating our training datasets and design decisions to make sure they reflect the diversity of the people using our products. Teams across Facebook are using this inclusive AI process to ensure that the datasets we develop represent people across the spectrum of age, gender presentation, and appearance.