Maat Scan

Explainer

Deepfakes Explained: What They Are and Why They Matter

By Maat Scan · April 7, 2026

In early 2024, a finance worker at a Hong Kong multinational was tricked into wiring $25 million after a video call with what appeared to be his CFO and several colleagues. All of them were deepfakes generated in real time.1 That incident became a reference point in security briefings worldwide, but it was not the last of its kind: deepfake fraud losses reached $347 million globally in the second quarter of 2025 alone.2

What a Deepfake Is

The term combines "deep learning" and "fake." A deepfake is any media (video, audio, or still image) in which a real person's appearance or voice has been fabricated or replaced using AI. That covers face swaps in video, synthetic voice clones, and images generated entirely from a text prompt by models like Midjourney or Stable Diffusion.

Not all deepfakes are made to deceive. The same technology has legitimate uses: de-aging actors in film, restoring the voice of a person who has lost the ability to speak, and powering comedy apps. The harm is specific to cases where consent is absent and the output is realistic enough to mislead.

How They Are Made

Early systems, built around 2017, required hundreds of source images and days of compute to train a face-swap model. Modern systems work differently. Diffusion models and large foundation models can generate or swap a face from a single reference photo in seconds, using consumer cloud services.

For still images, there are three common techniques. The first generates a face from a text prompt and composites it onto a body. The second uses inpainting to replace the face region of a real photograph while leaving the background intact. The third applies face-swap algorithms that map one person's geometric features onto another's. Each technique leaves different artifact patterns, which is why detection systems score multiple signals independently rather than relying on a single classifier.
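The idea of scoring multiple signals independently can be sketched in a few lines. The signal names, scores, and thresholds below are illustrative assumptions, not the API of any real detection product; the point is that an image flagged weakly by a generic classifier can still be caught by a strong score on one artifact type.

```python
from statistics import fmean

def aggregate(signal_scores: dict[str, float], flag_threshold: float = 0.7) -> dict:
    """Flag media when any single artifact signal is strong, or when the
    average across signals is elevated. Hypothetical sketch: signal names
    and the 0.7 threshold are made-up values for illustration."""
    strongest = max(signal_scores, key=signal_scores.get)
    suspicious = (
        signal_scores[strongest] >= flag_threshold
        or fmean(signal_scores.values()) >= flag_threshold * 0.8
    )
    return {"suspicious": suspicious, "strongest_signal": strongest}

# Example: hypothetical scores for an image whose face region was inpainted.
# The face-swap and text-to-image checks score low, but the seam around the
# replaced region trips the inpainting-boundary signal on its own.
result = aggregate({
    "text_to_image_artifacts": 0.12,
    "inpainting_boundary": 0.84,
    "face_swap_geometry": 0.31,
})
```

Scoring each signal separately, rather than training one monolithic classifier, also makes the verdict explainable: the system can report which artifact pattern triggered the flag.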

Where the Harm Is

The most widespread misuse is also the least discussed publicly: non-consensual intimate imagery. Research consistently places this category at 96 to 98% of all deepfake content online, with women making up roughly 99% of victims.3 A 2022 report by UNICEF, ECPAT, and Interpol estimated that 1.2 million children across Southeast Asia had images manipulated and distributed without their consent.4 The psychological harm to victims is severe and well documented.

Political disinformation gets more media coverage. A fabricated video of a political figure circulates, gets debunked, and leaves behind a layer of doubt. Researchers call this the "liar's dividend": once the public knows any footage can be faked, authentic evidence becomes easier to dismiss. Courts and news organizations are already managing this problem.

Financial fraud has grown the fastest. In December 2025, Fortune reported that voice cloning technology had crossed a threshold where synthesized audio could no longer be reliably distinguished from a real voice in controlled tests.5 By mid-2025, deepfake fraud had cost businesses over $500 million in six months.2

Legal Responses

The U.S. TAKE IT DOWN Act, signed in May 2025, requires platforms to remove non-consensual intimate deepfakes within 48 hours of a victim's report.6 Several U.S. states have passed separate laws targeting election-interference deepfakes. The EU AI Act requires disclosure whenever AI-generated content depicts a real person.

In Japan, portrait rights (肖像権) provide civil remedy avenues for deepfake victims, and recent amendments to the stalking prevention law have been interpreted to cover some forms of non-consensual deepfake distribution. A dedicated deepfake law is still under discussion as of 2026.

What You Can Do

For individuals: slow down before sharing images or video of public figures in surprising situations. Run suspicious media through a detection tool as a first pass. Run a reverse image search to see whether the image appears in other contexts, and check whether credible outlets have covered it.

For organizations: adopt content provenance standards such as C2PA, which embeds cryptographic attestation into media files. Build detection pipelines for user-uploaded content. And establish clear reporting paths for anyone targeted by non-consensual deepfakes.
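The principle behind provenance standards like C2PA can be shown in miniature: a manifest binds a hash of the media bytes to a signature, so any edit to the content invalidates the attestation. This is a deliberately simplified toy using an HMAC from Python's standard library; real C2PA manifests use X.509 certificate chains and COSE signatures embedded in the file, not a shared key.

```python
import hashlib
import hmac

def sign_manifest(media_bytes: bytes, key: bytes) -> dict:
    """Toy provenance manifest: bind a SHA-256 content hash to a signature.
    (Real C2PA uses certificate-based signatures, not a shared-key HMAC.)"""
    digest = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"content_sha256": digest, "signature": tag}

def verify_manifest(media_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """Recompute the hash from the bytes on hand; a single changed pixel
    produces a different digest and the check fails."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == manifest["content_sha256"]
            and hmac.compare_digest(expected, manifest["signature"]))

key = b"demo-signing-key"          # stand-in for a publisher's signing credential
photo = b"\x89PNG...original pixel data"
manifest = sign_manifest(photo, key)
untouched_ok = verify_manifest(photo, manifest, key)        # True
tampered_ok = verify_manifest(photo + b"edit", manifest, key)  # False
```

The asymmetry is the useful property: provenance cannot prove an unmarked image is fake, but it lets verified-authentic media carry its own proof, shrinking the space the liar's dividend can exploit.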

Sources

  1. CNN, "Hong Kong employee tricked into paying out $25M in deepfake video call," February 4, 2024.
  2. Deepstrike, "Deepfake Fraud Statistics 2025," Deepstrike.io, 2025.
  3. Keepnetlabs, "Deepfake Statistics: The Numbers Behind a Growing Threat," Keepnetlabs.com, 2025.
  4. UNICEF / ECPAT / Interpol, "Disrupting Harm in Southeast Asia," UNICEF, 2022.
  5. Fortune, "Voice cloning has crossed an unsettling new threshold," Fortune, December 2025.
  6. Bright Defense, "What the TAKE IT DOWN Act Means for Online Platforms," Brightdefense.com, May 2025.