Guide
Why Fake AI Photos Are Flooding Social Media — and What You Can Do
By Maat Scan · April 28, 2026
Days before Ireland's November 2025 presidential election, a video began circulating on social media appearing to show the leading candidate withdrawing from the race. National broadcasters were shown "confirming" the news. All of it was fabricated.[1] The clip spread for hours before platforms removed it. The candidate won, but the incident illustrated exactly how AI-generated content now moves through social feeds, and why platforms are still struggling to stop it.
The Scale of the Problem
AI image generators now produce roughly 80 million images per day across all tools combined, used by more than 150 million people each month.[2] Not all of those images are deceptive — many are art, advertising, or experiments. But at that volume, even a tiny fraction of misleading content means millions of pieces of material entering feeds daily.
Deepfakes specifically — AI-manipulated media depicting real people — jumped from around 500,000 shared online in 2023 to an estimated 8 million by 2025, a roughly sixteenfold increase in two years.[3] Human accuracy at identifying AI-generated images has dropped to 38%, below the 50% expected from random guessing, and accuracy for deepfake video falls even lower, to around 24.5%.[4]
Why Fakes Spread Faster Than Corrections
Three forces push fake images further than real ones.
First, platform algorithms reward engagement, not accuracy. A shocking image gets shared before anyone checks whether it is real. Recommendation systems amplify content that triggers strong reactions, and fabricated images are designed to do exactly that.
Second, most people sharing fake content do not know it is fake. When accuracy falls below chance, the usual mental filter — "that looks off" — stops working.
Third, verification tools are slow. Fact-checkers need time. A video released three days before an election can influence millions of people before a debunking reaches the same audience. Meta's decision in January 2025 to end its relationships with independent fact-checkers removed one of the faster correction mechanisms on its platforms.
What Is Actually Spreading
Political content is the most visible category. In Ireland's 2025 election, a fabricated national broadcaster clip declared the election canceled and gathered thousands of shares before removal.[1] The Netherlands saw approximately 400 AI-generated images used in attacks on political candidates. In the U.S., a major political committee released a deepfake video in early 2026 of a Democratic Senate candidate appearing to read his own old posts aloud.[5]
Beyond politics, AI-generated images now appear routinely in romance scam profiles, fake product reviews, and spam farm accounts. And in March 2026, CNN documented dozens of AI-generated images depicting fabricated scenes from a conflict spreading widely across social feeds within hours of being posted.[6] Researchers call this category "AI slop": low-effort AI-generated content published at scale to drive advertising clicks, with no intent to inform.
What Platforms Are Doing
Platform responses have improved but remain uneven. TikTok integrated C2PA Content Credentials in January 2025, making it the first major platform to automatically detect and label AI content via embedded metadata. In Q1 2026, TikTok removed 2.3 million videos under its synthetic media policies — a 180% increase over the same period in 2025.[7]
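That labeling pipeline starts with reading the embedded metadata. As a rough illustration of what such a check involves, the sketch below walks a JPEG's application segments looking for the C2PA manifest label (C2PA manifests are carried in APP11 segments as JUMBF boxes). This is a heuristic written for this article, not a conformant C2PA parser, and the function name is illustrative. A missing marker proves nothing: metadata is easily stripped and does not survive screenshots or re-encoding, which is exactly why platform detection remains uneven.

```python
def has_c2pa_marker(jpeg_bytes: bytes) -> bool:
    """Heuristic check for an embedded C2PA (Content Credentials) manifest.

    C2PA manifests ride in JPEG APP11 (0xFFEB) segments as JUMBF boxes.
    This sketch walks the segment list and looks for the 'c2pa'
    manifest-store label. It is NOT a conformant parser: absence of the
    marker does not prove an image is authentic.
    """
    i = 2  # skip the SOI marker (FF D8)
    n = len(jpeg_bytes)
    while i + 4 <= n and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start-of-scan: no more metadata segments
            break
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment
            return True
        i += 2 + seg_len
    return False
```

In practice a platform would verify the manifest's cryptographic signatures as well, not merely detect its presence.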
YouTube has required AI disclosure labels since early 2025. Meta labels content "Made with AI" when its systems detect AI generation signals. The EU AI Act's Article 50, which mandates disclosure labeling for AI content depicting real people, begins enforcement in August 2026.
The limits are real. Labeling depends on detecting AI generation in the first place, and detection accuracy against the newest generators has fallen to as low as 18%.[8] Every enforcement gap is an open door.
What You Can Do Before You Share
Checking a suspicious image takes under a minute. These steps work in roughly ascending order of effort:
- Reverse image search. Upload the image to Google Images or TinEye and filter by date to find the oldest indexed version. If the photo appears in a stock library or a completely different story, you have your answer.
- Check the source. Did the image first appear on a credible wire service like AP or Reuters? Or did it surface on a new account with no posting history? Source credibility sets your baseline skepticism.
- Zoom into the details. Hands, teeth, text visible in the background, and hair-to-skin boundaries are the areas AI models still get wrong most often. Zoom in at 200-400% before deciding.
- Run a detection tool. AI image detection tools like Maat Scan run a multi-signal analysis and return a confidence score. No tool is perfect, but a flagged result is worth taking seriously before sharing.
- Slow down on emotional content. The most persuasive-looking fakes are almost always emotionally charged. If an image makes you want to share it immediately, that reaction is exactly what its creator was aiming for.
Sources
1. Euronews, "International Fact-Checking Day: How to spot AI-generated disinformation," April 2, 2026.
2. Imagera AI, "47 AI Image Generation Statistics for 2026," 2026.
3. Programs.com, "The Latest Deepfake Facts & Statistics (2026)," 2026.
4. Electroiq, "Deepfake Statistics By Types, Fraud, Crime, Scams and Facts (2026)," 2026.
5. The American Prospect, "American Politics Is Already Inundated With AI Deepfakes," April 17, 2026.
6. CNN, "Fake, AI-generated images and videos of the Iran war are spreading on social media," March 11, 2026.
7. Storrito, "TikTok's 2026 AI Labeling Rules and What They Signal for Platform Governance," 2026.
8. arXiv preprint 2602.07814, "Open-Source AI-Generated Image Detection Benchmark," February 2026.
