Deepfake technology continues to advance, propelled by powerful generative artificial intelligence (AI) systems such as DALL-E 2 and Midjourney. These systems, large neural networks trained on vast datasets of images, videos, and associated text, can generate novel images matching a given description. From photorealistic depictions to cartoon styles, generative AIs show remarkable versatility.
Creating deepfakes involves combining the capabilities of generative AIs with those of other AIs designed to automatically identify individuals in images and videos. This synergy gives creators a level of proficiency akin to that of special-effects artists. Faces or bodies can be seamlessly swapped, and with the assistance of other generative AIs, voices can be replicated. The result is fabricated photos and videos that convincingly portray someone in situations they were never part of, or that remove them altogether by replacing backgrounds, in a manner reminiscent of the BBC drama “The Capture.”
However, the evolving landscape of deepfake technology comes with inherent imperfections, albeit increasingly subtle ones. Early generative AI systems exhibited noticeable errors that served as indicators of manipulation. Examining details is therefore crucial: irregular objects, such as a hand or branch, may be misplaced, and inconsistencies may appear where an object intersects with a face, a common example being eyelashes peeking through hair. Evaluating overall realism is also essential, considering factors such as color, shadows, and backgrounds. Anomalies like peculiar hand placements, a foot merged with a tree, or the presence of too many arms may signal a deepfake.
While these telltale signs have historically offered a means of detection, the ongoing refinement of the technology is erasing them. As photo and video editing software improves, so does the ease of creating undetectable misinformation and deepfakes. Recognizing this challenge, companies like Adobe, which make such software, are actively working on content authentication: cryptographically attaching provenance information to media at the point of creation, so that users can distinguish authentic content from manipulated content and become a more informed, discerning audience.
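The core idea behind content authentication can be sketched in a few lines. The example below is not Adobe's actual implementation (real systems such as the C2PA standard use certificate-backed asymmetric signatures embedded in the file's metadata); it is a minimal illustration using a hypothetical shared HMAC key, showing how a signed manifest lets a viewer detect that pixels were changed after signing.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; a real provenance
# system would use asymmetric keys tied to a verifiable identity.
SIGNING_KEY = b"example-secret-key"

def attach_credentials(image_bytes: bytes, creator: str) -> dict:
    """Build a provenance record: a hash of the content plus a signature."""
    manifest = {
        "creator": creator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_credentials(image_bytes: bytes, record: dict) -> bool:
    """Check that the manifest is untampered and still matches the image."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # manifest was altered after signing
    return record["manifest"]["sha256"] == hashlib.sha256(image_bytes).hexdigest()

original = b"\x89PNG...original pixels"
record = attach_credentials(original, "news-photographer")
print(verify_credentials(original, record))        # True: untouched image
print(verify_credentials(b"edited pixels", record))  # False: content changed
```

The design point is that the signature covers a hash of the content, so any edit, however visually subtle, invalidates the credentials even when no visual tell remains.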
The hope is that advancements in content authentication will act as a countermeasure against the rising tide of sophisticated deepfake technology, preserving the integrity of visual and auditory information in the digital landscape.