Deepfakes, Deep Harms
Abstract
Deepfakes are algorithmically modified video and audio recordings that project one person’s appearance onto another’s, creating an apparent recording of an event that never took place. Many scholars and journalists have begun attending to the political risks of deepfake deception. Here we investigate other ways in which deepfakes have the potential to cause deeper harms than have so far been appreciated. First, we consider a form of objectification that occurs in deepfaked ‘frankenporn’, which digitally fuses the parts of different women to create pliable characters incapable of giving consent to their depiction. Next, we develop the idea of ‘illocutionary wronging’, in which an individual is forced to engage in speech acts they would prefer to avoid in order to deny or correct the misleading evidence of a publicized deepfake. Finally, we consider the risk that deepfakes may facilitate campaigns of ‘panoptic gaslighting’, in which many systematically altered recordings of a single person’s life undermine their memory, eroding their sense of self and their ability to engage with others. Taken together, these harms illustrate the roles that social epistemology and technological vulnerabilities play in human ethical life.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.