HUMAN & MACHINE INTELLIGENCE

The future of online trust (and why Deepfake is advancing it)

June 27, 2021

Abstract

Trust has become a first-order concept in AI, urging experts to call for measures ensuring AI is ‘trustworthy’. The danger of untrustworthy AI often culminates with Deepfake, perceived as an unprecedented threat to democracies and online trust through its potential to back sophisticated disinformation campaigns. Little work has, however, been dedicated to the examination of the concept of trust itself, which undermines the arguments supporting such initiatives. By investigating the concept of trust and its evolutions, this paper ultimately defends a non-intuitive position: Deepfake is not only incapable of undermining online trust, but also offers a unique opportunity to transition towards a framework of social trust better suited to the challenges entailed by the digital age. Discussing the dilemmas traditional societies had to overcome to establish social trust, and the evolution of their solutions across modernity, I come to reject rational choice theories as models of trust and to distinguish between an ‘instrumental rationality’ and a ‘social rationality’. This allows me to refute the argument holding Deepfake to be a threat to online trust. In contrast, I argue that Deepfake may even support a transition from instrumental to social rationality, which is better suited for making decisions in the digital age.

AUTHORS

Written by

Hubert Etienne

Publisher

AI & Ethics

Research Topics

Human & Machine Intelligence

Related Publications

August 15, 2019

HUMAN & MACHINE INTELLIGENCE

PHYRE: A New Benchmark for Physical Reasoning | Facebook AI Research

Understanding and reasoning about physics is an important ability of intelligent agents. We develop the PHYRE benchmark for physical reasoning that contains a set of simple classical mechanics puzzles in a 2D physical environment. The benchmark…

Anton Bakhtin, Laurens van der Maaten, Justin Johnson, Laura Gustafson, Ross Girshick

July 09, 2018

HUMAN & MACHINE INTELLIGENCE

Continuous Reasoning: Scaling the Impact of Formal Methods | Facebook AI Research

This paper describes work in continuous reasoning, where formal reasoning about a (changing) codebase is done in a fashion which mirrors the iterative, continuous model of software development that is increasingly practiced in industry. We…

Peter O'Hearn

July 12, 2018

HUMAN & MACHINE INTELLIGENCE

OASIs: oracle assessment and improvement tool

The oracle problem remains one of the key challenges in software testing, for which little automated support has been developed so far.…

Gunel Jahangirova, David Clark, Mark Harman, Paolo Tonella

April 24, 2017

HUMAN & MACHINE INTELLIGENCE

COMPUTER VISION

Episodic Exploration for Deep Deterministic Policies for StarCraft Micro-Management | Facebook AI Research

We consider scenarios from the real-time strategy game StarCraft as benchmarks for reinforcement learning algorithms. We focus on micromanagement, that is, the short-term, low-level control of team members during a battle. We propose several…

Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, Soumith Chintala
