How to Tell If a Photo Is AI-Generated: 7 Signs to Look For

By Maat Scan · April 2026

In a 2025 study that collected over 287,000 evaluations from 12,500 participants worldwide, people correctly identified AI-generated images only 62% of the time [1] — barely better than a coin flip. The images fooling people most were not exotic scenes; they were ordinary portraits and landscapes that looked, at first glance, completely real.

That study was published before the newest generation of models arrived. The gap between "looks real" and "is real" has only grown. The good news: you can narrow that gap — not by memorizing a checklist, but by knowing which parts of an image AI still consistently gets wrong, and which warning signs have mostly disappeared.

What Used to Work (and Doesn't Anymore)

The most famous AI image tell — extra fingers — has lost most of its power. By 2025, major models like Midjourney v6 and DALL-E 3 had substantially addressed their hand problems. Garbled text on signs and clothing, once a reliable giveaway, is now handled well by leading generators [2]. If you're still leading with "count the fingers," you're working from outdated advice.

Sign 1: Skin That Looks Like Plastic

This is the most persistent artifact across all major generators as of 2026. AI-generated skin tends to have no pores, no fine hair, no minor blemishes — it presents as an impossibly smooth surface that catches light uniformly [3].

Real skin is uneven. Under a flash or strong light, it shows texture, redness, uneven tone. An AI portrait of a 40-year-old often looks younger than a real photo of a 25-year-old, because the model's training data skews toward idealized photographs. Zoom into the forehead, the area beside the nose, the chin. If the texture looks like polished marble rather than skin, that is a signal.
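
If you want to put a number on "polished marble," the quick sketch below (Python with NumPy and Pillow) scores local texture inside a skin crop. It is an illustrative heuristic only; the file name, crop box, tile size, and what counts as "low" are all assumptions, not Maat Scan's actual analysis.

    # Illustrative heuristic only: quantify "plastic skin" as unusually low
    # local texture. The file name, crop box, and tile size are assumptions
    # for demonstration, not a validated detector.
    import numpy as np
    from PIL import Image

    def local_texture_score(image_path, box):
        """Mean local standard deviation of luminance inside `box` (l, t, r, b)."""
        img = Image.open(image_path).convert("L")        # grayscale luminance
        patch = np.asarray(img.crop(box), dtype=np.float64)

        # Standard deviation within 8x8 tiles: real skin shows pores and
        # blemishes, so its tiles vary more than airbrushed-looking AI skin.
        h, w = patch.shape
        tiles = patch[: h - h % 8, : w - w % 8].reshape(h // 8, 8, w // 8, 8)
        return float(tiles.std(axis=(1, 3)).mean())

    score = local_texture_score("portrait.jpg", box=(420, 180, 520, 280))  # forehead crop (assumed)
    print(f"texture score: {score:.2f}")  # very low values suggest suspiciously smooth skin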

Sign 2: Reflections That Don't Match

Glass surfaces expose AI's lack of environmental awareness. A model generates each part of an image based on statistical patterns — it does not understand that a mirror must show what is behind the camera, or that sunglasses must reflect the scene behind the subject.

Check glasses, eyes, metallic objects, and any shiny surface in the image. If a person is photographed outdoors but their sunglasses reflect an indoor scene — or nothing at all — the image is almost certainly generated. The eyes follow the same logic: the small reflections of light in the eye (catchlights) should match closely in shape and position across both eyes. When they differ markedly, or appear as abstract geometric shapes, that is a tell.
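
The catchlight check can likewise be roughed out in code. This sketch compares the position of the brightest highlight in two eye crops; the crop boxes are assumed, and locating eyes automatically would require a face-landmark library.

    # Illustrative sketch: compare catchlight positions across the two eyes.
    # The eye crop boxes are hypothetical; a real tool would locate them
    # with face landmarks.
    import numpy as np
    from PIL import Image

    def catchlight_offset(gray, box):
        """Centroid of the brightest pixels in an eye crop, as (x, y) fractions."""
        patch = np.asarray(gray.crop(box), dtype=np.float64)
        mask = patch >= np.percentile(patch, 99)      # brightest 1% = specular highlight
        ys, xs = np.nonzero(mask)
        return xs.mean() / patch.shape[1], ys.mean() / patch.shape[0]

    gray = Image.open("portrait.jpg").convert("L")
    left  = catchlight_offset(gray, (300, 250, 360, 290))   # left-eye box (assumed)
    right = catchlight_offset(gray, (440, 250, 500, 290))   # right-eye box (assumed)

    # Both catchlights come from the same light source in a genuine photo,
    # so their relative positions within each eye should nearly match.
    dx, dy = abs(left[0] - right[0]), abs(left[1] - right[1])
    print(f"catchlight mismatch: dx={dx:.2f}, dy={dy:.2f}")  # large values are a warning sign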

Sign 3: The Edge Halo

Where a subject meets its background, AI images frequently show a subtle fringe — a slight blurring or color bleeding at the boundary. This appears because generative models build images statistically, not by capturing light that bounced off real surfaces.

Look at the edges of hair, shoulders, and ears against a busy background. Real photography creates either clean edges (hard light) or natural bleed (soft light). AI images often produce something in between — slightly too smooth, slightly too uniform — that resembles a compositing artifact more than a photograph.
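
As a rough way to measure that in-between softness, the sketch below counts how many pixels an intensity transition spans along a single scanline crossing the subject boundary. The row, column range, and thresholds are illustrative assumptions.

    # Illustrative sketch: measure how "soft" a subject/background boundary is.
    # A hard photographic edge transitions over a pixel or two; the halo
    # described above smears over many more. Row and column range are assumed.
    import numpy as np
    from PIL import Image

    def transition_width(image_path, row, col_range, frac=0.1):
        """Count pixels sitting between 10% and 90% of the intensity change
        along one scanline that crosses the boundary."""
        gray = np.asarray(Image.open(image_path).convert("L"), dtype=np.float64)
        line = gray[row, col_range[0]:col_range[1]]
        lo, hi = line.min(), line.max()
        inside = (line > lo + frac * (hi - lo)) & (line < hi - frac * (hi - lo))
        return int(inside.sum())

    width = transition_width("portrait.jpg", row=400, col_range=(500, 560))
    print(f"edge transition width: {width}px")  # consistently wide edges suggest a halo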

Sign 4: Functional Implausibilities

A 2025 analysis of diffusion model artifacts found that functional implausibilities — objects that do not work as they should — appeared in 58.7% of images that people struggled to authenticate [3]. This was more common than anatomical errors (51.4%), yet people mentioned them far less often when explaining their suspicions.

What counts as a functional implausibility? A guitar with strings that do not connect to the tuning pegs. A watch face with no hands. A zipper that disappears mid-jacket. A coffee cup held at an angle that would spill its contents. These are details humans parse automatically; AI skips them because they do not appear prominently enough in training data to be learned correctly. When something in an image makes you think "that wouldn't work," trust that instinct.

Sign 5: Hair at the Boundaries

Hair is one of the harder textures for AI models to render — not because individual strands look wrong in isolation, but because of how hair meets other surfaces. At the hairline, the hair-to-skin boundary often blurs incorrectly. Strands may appear to float slightly off the scalp, or dissolve into the background rather than staying distinct from it.

Wispy hair against a light background is a common giveaway: the strands look correct in the center but melt into the background at their tips in a way that looks more like a low-quality cutout than a photograph.

Sign 6: Lighting That Contradicts Itself

Real photography has a single light source, or a consistent set of sources. Every shadow, every highlight, every reflection in an image should be explainable by the same lighting setup. AI models generate lighting feature by feature, and inconsistencies creep in: a face lit from the left with a shadow falling the wrong way on the neck, a person indoors with the harsh highlights of outdoor sunlight, a cast shadow pointing in a different direction from the specular highlight on the subject's forehead.

Ask yourself: "Where is the light coming from?" If you cannot give a consistent answer, that is meaningful.
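
One crude way to make that question concrete in code is to compare the apparent shading direction of two regions that should share a light source. The sketch below uses the average brightness gradient as a proxy for light direction; this is a heavy simplification of real forensic lighting analysis, and the crop boxes are assumed.

    # Rough sketch: compare apparent light direction between two regions that
    # should share one light source (e.g. face and neck). The average
    # brightness gradient is a crude proxy; crop boxes are assumed.
    import numpy as np
    from PIL import Image

    def shading_angle(gray, box):
        patch = np.asarray(gray.crop(box), dtype=np.float64)
        gy, gx = np.gradient(patch)            # brightness change along y and x
        return np.degrees(np.arctan2(gy.mean(), gx.mean()))

    gray = Image.open("portrait.jpg").convert("L")
    face = shading_angle(gray, (300, 200, 500, 400))   # face region (assumed)
    neck = shading_angle(gray, (340, 420, 460, 500))   # neck region (assumed)

    diff = abs(face - neck) % 360
    diff = min(diff, 360 - diff)
    print(f"shading direction disagreement: {diff:.0f} degrees")  # large = suspicious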

Sign 7: The Uncanny Background

Backgrounds in AI images are built from whatever the model predicts "should" be there, not from a real place. This produces a specific quality: backgrounds that look plausible at a glance but fail on inspection. Bookshelves with books that have no legible titles. Crowds where faces in the background are smeared and similar to each other. Architectural details — railings, windows, tiles — that are inconsistent in perspective or spacing. Natural textures like stone walls or grass that repeat in a way real surfaces do not.

The background is where AI generators cut statistical corners because most training images do not focus on background detail. When you are suspicious about an image, look there first.
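
Repeating texture, at least, is measurable. The sketch below scores a background crop by autocorrelation: a real stone wall or lawn is irregular, while a statistically tiled texture produces strong secondary correlation peaks. The crop box and lag mask are illustrative assumptions.

    # Illustrative sketch: flag repeating background texture with autocorrelation.
    # Real stone, grass, or brick is irregular; a statistically tiled texture
    # shows strong secondary correlation peaks. The crop box is an assumption.
    import numpy as np
    from PIL import Image

    def repetition_score(image_path, box):
        patch = np.array(Image.open(image_path).convert("L").crop(box), dtype=np.float64)
        patch -= patch.mean()

        # Circular autocorrelation via the FFT (Wiener-Khinchin theorem).
        spectrum = np.fft.fft2(patch)
        autocorr = np.fft.fftshift(np.fft.ifft2(spectrum * np.conj(spectrum)).real)
        autocorr /= autocorr.max()             # zero-lag peak (center) == 1

        # Mask trivial near-zero lags, then look for a secondary peak:
        # a high remaining value means the texture repeats at some offset.
        cy, cx = autocorr.shape[0] // 2, autocorr.shape[1] // 2
        autocorr[cy - 8:cy + 8, cx - 8:cx + 8] = 0
        return float(autocorr.max())

    score = repetition_score("photo.jpg", box=(0, 0, 256, 256))  # background crop (assumed)
    print(f"repetition score: {score:.2f}")    # close to 1.0 = strongly repeating

Note that genuinely periodic real-world surfaces (brick, fencing, tile) also score high here, which is exactly the kind of false positive the limitations section below warns about.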

A Note on Limitations

No single sign is definitive. A real photograph taken through frosted glass will have blurry edges; a studio portrait will have flawless skin. Even detection tools that combined multiple signals with human inspection achieved only 62–94% accuracy in independent 2026 benchmarks [2], depending on the generator and how the image was post-processed.

What you are looking for is a cluster of signals. Two or three of the above in the same image shifts the probability significantly. When you need a more definitive answer, tools like Maat Scan run automated analysis across multiple technical signals that are not visible to the naked eye.
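
In code terms, the reasoning is a tally, not a single test. The sketch below is a toy combiner; the signal names, the equal weighting, and the threshold are assumptions for illustration, not how Maat Scan weighs its signals.

    # Toy combiner for the "cluster of signals" idea: individual checks only
    # nudge the verdict, several together decide it. Signal names, equal
    # weights, and the threshold are illustrative assumptions.
    def cluster_verdict(flags, threshold=3):
        """Count flagged signals and map the count to a verdict."""
        score = sum(flags.values())
        if score >= threshold:
            verdict = "likely AI-generated"
        elif score == 2:
            verdict = "worth a closer look"
        else:
            verdict = "no strong evidence"
        return score, verdict

    flags = {
        "plastic_skin": True,             # Sign 1
        "mismatched_reflections": False,  # Sign 2
        "edge_halo": True,                # Sign 3
        "inconsistent_lighting": False,   # Sign 6
        "repeating_background": True,     # Sign 7
    }
    score, verdict = cluster_verdict(flags)
    print(f"{score}/{len(flags)} signals -> {verdict}")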

Sources

  1. Köbis et al., "How good are humans at detecting AI-generated images? Learnings from an experiment," arXiv / Microsoft Research, 2025. arxiv.org/abs/2507.18640
  2. "AI Image Detector Accuracy Test: We Tested 5 Tools Against Every Generator (2026)," Imagera AI, 2026. imagera.ai/blog/ai-image-detector-comparison-2026
  3. Guillaro et al., "Characterizing Photorealism and Artifacts in Diffusion Model-Generated Images," CHI 2025. arxiv.org/html/2502.11989v1