New EU Report on Deepfakes: They are here to stay

Jan 22, 2025

The European Parliamentary Research Service recently published a report assessing the technical, societal and regulatory aspects of deepfakes. The report highlights that deepfakes are becoming cheaper to create and more accessible every day, and that we must keep the risks in mind while harnessing their potential. The report outlines that the risks associated with deepfakes can be psychological (e.g. undermining trust), financial (e.g. identity theft or reputational damage) and societal (e.g. damage to institutions such as democracy and the justice system). Though deepfakes often target individuals, their impacts cascade and will be felt at an organizational or even societal level.

Kinds of audio/video deepfakes

  1. The report highlights several types of video-based deepfakes, including facial expression manipulation, face swapping, and full-body puppetry, which allow for modifying emotions, replacing faces, or animating entire bodies.

  2. Audio deepfakes involve voice cloning technologies that create synthetic speech indistinguishable from a target's voice with minimal input data.

  3. Text synthesis tools using Natural Language Processing (NLP) mimic a person’s unique speaking or writing style, enhancing the authenticity of these deepfakes.

A comprehensive approach to addressing deepfake risks
  1. Technological safeguards: The availability of high-quality algorithms, 5G networks, and pre-trained models has accelerated the proliferation of deepfakes. This means we need robust detection tools that evolve alongside creation methods.

  2. Blurring boundaries between truth and falsehood: Deepfake technologies erode the distinction between reality and deception, creating widespread uncertainty and undermining public trust in media and institutions.

  3. Regulation is not the sole answer: Collaboration among regulators, technology providers, and civil society is essential to protect businesses and wider society from deepfake-related harms. Policymakers must take a more comprehensive approach by addressing five dimensions: creation, circulation, target, audience, and technology.

The way forward:

Deepfakes are here to stay – and their impact is subjective. What seems authentic to one person may not to another. Without appropriate detection tools, the human eye cannot reliably identify a deepfake every time. Mitigating this risk involves “continuous reflection and permanent learning”.

At Karna, our takeaway from this report is the need to stay vigilant and to start treating protection from deepfake-based fraud as an essential part of a cybersecurity strategy. We live in an environment that is welcoming to deepfake-based fraud, which makes it important to have sufficient, high-quality guardrails in place. To stay updated on recent developments in deepfake audio and video technology and their impacts, follow us on LinkedIn.

2024 © Project Karnā Inc.