Guide
AI-Generated Profile Photos in Romance Scams: How to Protect Yourself
By Maat Scan · April 14, 2026
Americans lost an estimated $3 billion to romance scams in 2025, up from $1.2 billion the year before and less than $300 million in 2023.[1] A tenfold increase in two years. The cause is not a surge in human scammers. It is AI: profile photos generated to pass visual inspection, voice clones that replace typed messages, and real-time deepfake video calls capable of sustaining a ten-minute conversation with natural blinking and environmental reactions.[2]
How the Scam Works Now
A traditional catfish stole real photos from an influencer or model. That approach had one weakness: a reverse image search would find the original. Modern romance scammers generate faces instead. The AI-created person has never appeared online, returns no image search results, and is built to match whatever demographic the target is most likely to respond to.
After initial contact (typically on a dating app, Facebook, Instagram, or WhatsApp), the persona builds trust over weeks or months. This is the "fattening" phase in what fraud researchers call pig-butchering: sustained emotional investment before any financial ask. By the time money enters the conversation — usually framed as a joint investment opportunity, a medical emergency, or a flight to visit — the victim has formed a genuine bond with a person who does not exist.
An MIT Technology Review investigation published in March 2025 tracked scam compound operations employing hundreds of workers and found that every operator interviewed reported using AI tools daily, primarily to generate profile content and manage dozens of simultaneous conversations at scale.[3]
What AI Changed
Several upgrades in 2025-2026 make these scams harder to catch than earlier versions.
The photos. Earlier AI generators produced faces with visible artifacts: off-geometry ears, blended hair edges, distorted hands. Current generators produce profile photos that only 46% of people correctly identify as synthetic in controlled tests, worse than guessing at random.[4]
The conversation. A study analyzing romance-baiting operations found that AI language models had replaced human-written scripts in many of them, generating personalized messages calibrated to each target's responses and automatically timing emotional milestones.[5]
The video calls. Scammers previously avoided video to protect the fake persona. Real-time deepfake tools now let an operator wear the AI-generated face and voice during a live call, holding a full conversation without obvious tells.[2] The "just ask for a video call" advice that once offered reliable protection no longer does.
Red Flags That Still Hold
Despite these upgrades, several behavioral patterns remain difficult to fake consistently.
The relationship moves unusually fast.
Declarations of love, intense emotional connection, and talk of a shared future within days or weeks of first contact are a reliable warning sign. A real person getting to know someone online does not typically rush this.
There is always a reason not to meet.
International work postings, offshore contracts, military deployments, and family emergencies in remote locations are common justifications. If every planned meeting falls through over months, the explanation is rarely coincidental.
The conversation eventually turns to money.
The ask takes many forms: a flight to visit, a medical emergency, a time-sensitive investment opportunity. Any unsolicited request for money or financial information from someone met online is a serious signal, regardless of how long the relationship has been building.
They fail on physical verification tasks.
Asking the other person to hold a specific object in front of the camera, write a word on paper, or move their hand quickly across their face during a video call will trip up a real-time deepfake more reliably than a human. Current deepfake tools still struggle with close-range hand movements and sudden changes in lighting angle.
How to Check a Profile Photo
Reverse image search remains useful against profiles built from stolen photos but will return nothing for an AI-generated face. For AI profiles, the verification has to focus on the photo itself.
- Check the hair-to-background edge. AI compositing often leaves an unnatural halo or blending artifact at this boundary, especially when the background is blurred.
- Look at the ears and neck. These regions are inconsistent in AI images more often than the central face. Asymmetry, strange texture, or an ear that does not match the other one are worth noting.
- Find any text in the image. Signs, clothing labels, or books visible in the background. AI generators still handle text poorly and will produce garbled letters that only look plausible at a glance.
- Run the photo through a detection tool. Maat Scan and similar AI image detectors score multiple signals independently. A synthetic face from a current generator will often score lower on texture and geometry dimensions even when it looks convincing at first glance.
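One additional scriptable heuristic, beyond the checks above (this is not a technique the article's cited tools are confirmed to use): many AI generators and screenshot pipelines emit JPEG files with no camera EXIF segment. Absence of EXIF proves nothing by itself — most social platforms strip metadata on upload — but it is one more independent signal. A minimal sketch using only the Python standard library to test for an Exif APP1 segment:

```python
def has_camera_exif(data: bytes) -> bool:
    """Return True if the JPEG bytes contain an Exif APP1 segment.

    JPEG files start with the SOI marker 0xFFD8; EXIF metadata lives in an
    APP1 segment (marker 0xFFE1) whose payload begins with b"Exif\\x00\\x00".
    A missing segment is common in AI-generated or re-encoded images, but
    is NOT proof of synthesis on its own.
    """
    if not data.startswith(b"\xff\xd8"):
        return False  # not a JPEG at all
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # marker stream is malformed; stop scanning
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        # Segment length (big-endian) includes the two length bytes.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # skip marker bytes plus segment body
    return False
```

In practice you would call this on the raw bytes of a downloaded profile photo (`has_camera_exif(open("photo.jpg", "rb").read())`) and treat the answer as one signal among several, never a verdict.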
Also look at the profile itself, not just the photo. How long has the account existed? Does it have a realistic posting history, tagged photos from other people, and older interactions? A profile with only professional-quality images, sparse history, and no mutual connections warrants skepticism regardless of how appealing it appears.
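To make the idea of combining independent signals concrete, here is an illustrative Python sketch that counts red flags across the profile-level checks just described. The field names and thresholds are invented for the example — this is not Maat Scan's actual scoring logic:

```python
from dataclasses import dataclass

@dataclass
class ProfileSignals:
    account_age_days: int           # how long the account has existed
    mutual_connections: int         # contacts shared with you
    tagged_photos_by_others: int    # photos other people tagged them in
    only_professional_photos: bool  # every image looks studio-quality
    reverse_search_hits: int        # results from a reverse image search

def red_flag_count(s: ProfileSignals) -> int:
    """Count independent warning signs, 0 to 5.

    Thresholds are illustrative, not calibrated. Treat any score as a
    prompt for closer inspection, never as proof either way.
    """
    flags = 0
    flags += s.account_age_days < 90         # very new account
    flags += s.mutual_connections == 0       # no shared social graph
    flags += s.tagged_photos_by_others == 0  # no interaction history
    flags += s.only_professional_photos      # curated, model-like imagery
    flags += s.reverse_search_hits == 0      # AI faces return no hits
    return flags
```

The design point mirrors the article's advice: each check is weak alone (a new account is not suspicious by itself), but several weak signals agreeing is what warrants skepticism.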
If You Think You Are Being Scammed
Stop sending money immediately. Do not send more to "recover" previous losses — that follow-up request is a standard technique. Report the account to the platform. If money was sent, file a report with your national cybercrime authority as quickly as possible: in Japan, the National Police Agency's cybercrime reporting portal; in the U.S., the FBI's Internet Crime Complaint Center (ic3.gov). Recovery is rare but more likely when reports are filed promptly.
The emotional difficulty of accepting that the relationship was fabricated is real and well documented — 95% of romance scam victims report lasting psychological effects.[6] Victims are not naive. These operations are professional, AI-assisted, and designed specifically to exploit the way trust forms between people.
Sources
1. Norton, "Norton Insights Report 2026: Artificial Intimacy," Norton.com, 2026.
2. KnowBe4, "Love in the Age of AI: Why 2026 Romance Scams Are Almost Impossible to Spot," KnowBe4.com, 2026.
3. MIT Technology Review, "Inside a romance scam compound — and how people get tricked into being there," March 27, 2025.
4. Morningstar / Business Wire, "Love, Actually? Romance Scams Are Now Part of the Online Dating Experience," February 10, 2026.
5. arXiv 2512.16280, "Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams," December 2024.
6. Barclays, "AI has raised the stakes in romance scams, and consumers want action," Barclays.com, February 2026.
7. Washington Times, "AI-generated pictures and voices drive surge in online dating scams," March 17, 2026.
