Computer Vision

Research

Deepfake Detection Challenge: AWS and new academics join, initial dataset released

October 21, 2019

Building technology to detect deepfake videos effectively is important for the entire industry and society at large, and developing these tools will likewise require collaboration from experts across the AI community. To accelerate this important work, we’re now releasing the first subset of the 100,000+ videos being created expressly for the Deepfake Detection Challenge (DFDC). We are also announcing that Amazon Web Services (AWS) and several additional leading academic researchers are joining the DFDC initiative, which we unveiled last month to spur the creation of new tools to better detect when AI has been used to alter videos or images in order to mislead the viewer.

An initial dataset release to get feedback from the research community

The DFDC dataset will consist of a wide variety of new, high-quality videos created expressly for research on deepfakes. The videos use only paid actors, who have entered into agreements to help with the creation of the dataset, so as to avoid restrictions that could hamper researchers’ work. Information on the dataset and leaderboard, as well as on grants and awards funded by Facebook, is available on the DFDC website. Our new white paper is available here.

To ensure the quality of the dataset and challenge parameters, we are now sharing the first 5,000 DFDC videos with researchers working in this field. We will collect feedback and also host a targeted technical working session at the International Conference on Computer Vision (ICCV) in Seoul beginning on Oct. 27. The full dataset release and the DFDC launch will happen at the Conference on Neural Information Processing Systems (NeurIPS) this December.

Broadening the DFDC partnership

Bringing together a broad community of industry partners, academics, media, and civil society organizations has been key to this effort. We recognize that this community must continue to work together to build effective tools to deal with deepfakes.

AWS is joining the effort as a technical partner and member of the Partnership on AI’s Steering Committee on AI and Media Integrity, which will oversee the challenge. The committee is made up of a broad cross-sector coalition of organizations including Facebook, WITNESS, Microsoft, the New York Times, and others in civil society and the technology, media, and academic communities.

AWS is also contributing up to $1 million in AWS credits to support the challenge and will make Amazon ML Solutions Lab experts available to provide technical support and guidance. Once the challenge has concluded, AWS will work with teams to host their models in the AWS model marketplace if they choose.

“As a longtime researcher in AI, I believe that artificial intelligence can and should be a force for good in the world,” said Caltech Professor of Electrical Engineering and Computation and Neural Systems Pietro Perona, who is also an AWS Fellow. Perona will serve as a technical adviser for the DFDC. “The use of deepfakes as a tool to deceive the public and threaten society and democracy must be faced head-on. That’s why I’m passionate about the opportunity to help organize the Deepfake Detection Challenge in collaboration with colleagues at Facebook, the Partnership on AI, and others to create a competition that can help mobilize the machine learning and computer vision communities. Competitions bring us together, help us clarify the question to be solved, identify actionable steps we can take, and define a yardstick for progress. This competition will unleash the talent of gifted researchers around the world to help maintain a bright line between reality and fiction.”

Professor Laura Leal-Taixé, head of the Dynamic Vision and Learning Group at the Technical University of Munich, has joined as an academic adviser and collaborator as we develop the dataset for the broader community. She and Facebook AI’s Cristian Canton Ferrer will chair the DFDC meeting at ICCV.

“Technology to generate realistic images is advancing at a tremendous pace. While this is incredibly exciting from a technological point of view, there are also obvious concerns for society. Fake media is clearly one of them — not so much for the current generation, which is still not fully digital and also relies on the printed press, but for future generations that will consume news only in digital format and will be totally vulnerable to the proliferation of fake news and targeted media,” Leal-Taixé said. “We face a challenge as a society to educate these new generations in media consumption, to raise awareness that fake news is already all around us. From our side, we have to put more effort into creating new technology to counteract digital forgeries. This will only be possible if industry and academia join forces, and I see the DFDC as an excellent effort in this direction. Hopefully, this will inspire thousands of brilliant minds to work on this problem.”

Professor Luisa Verdoliva, of the Department of Industrial Engineering at the University of Naples Federico II, is also joining as an academic adviser for the DFDC. Professors Leal-Taixé and Verdoliva join other leading academic researchers in helping build the challenge.

We are confident that this open, community-based effort will speed development of new open source tools to prevent people from using AI to manipulate videos in order to deceive others.

Visit the DFDC website for more details.

Written By

Irina Kofman

Director, AI Business Lead

Alex Yu

Director, AI Business Development