How to spot AI-generated profile photos in 2026
Two years ago, an AI-generated profile photo was usually obvious within a second. The eyes were slightly off-axis, the earrings didn't match, the background looked like melted glass. In 2026 that is no longer true. Modern diffusion models — Midjourney v7, Stable Diffusion XL, and a long tail of fine-tuned open-source variants — produce faces that pass the average glance test. The tells are still there, but they're smaller, and you have to know where to look.
Here is the honest position before we go any further: there is no checklist that catches 100% of AI faces, and there never will be. The models improve every quarter. What we are doing is shifting the odds. If you go from "I can't tell" to "I can tell about 80% of the time", that is the entire game.
The reliable tells, in 2026
The most reliable giveaway is still inter-image inconsistency. A real person photographed across a year of their life will have the same ear shape, the same dental line, the same chin-to-jaw ratio. AI-generated profiles often produce three or four photos that look like the same archetype but not quite the same human. Look at the ears specifically: they're the hardest feature for diffusion models to render consistently, partly because side profiles are rare in training data.
The second tell is micro-asymmetry that doesn't decay. Real human faces are slightly asymmetric, but the asymmetry is consistent across photos because it's anchored in bone structure. AI faces are asymmetric in different ways from photo to photo — the cheekbone that's higher on the left in one image is higher on the right in the next.
The third is background incoherence. Look at what's behind the person, not at them. AI-generated cafés have menus that don't tile, books with non-words on the spines, light fixtures that don't cast plausible shadows on the wall behind them. A real Costa in Brixton has a Costa logo. An AI Costa has a logo that almost says Costa.
What used to work and doesn't anymore
You may have read older guides that tell you to count fingers, look for jewellery asymmetry, or check the teeth. These were great heuristics in 2023. They're now actively misleading. The current generation of diffusion models has been specifically fine-tuned on these failure modes. Hands are mostly fine. Teeth are mostly fine. Earrings often match.
The other thing that's stopped working: searching the photo on Google Images and finding nothing. AI-generated faces are, by definition, novel — they don't exist anywhere else on the web. Reverse image search will return zero matches and that proves nothing. (For real-but-stolen photos, reverse image search is still a useful tool — see our guide to reverse image search for online dating.)
A short anecdote
A reader recently sent us four photos of a man she'd been chatting with on Hinge for three weeks. Each photo was, individually, plausible. He had brown eyes in three of them and grey-green in the fourth. He had a small scar above his left eyebrow in two photos and not in the others. The hair part switched sides between photos taken "the same week". Any one of those discrepancies could be explained: lighting, makeup, a styling change. All four together is not a person; it's a model that's been asked to produce photos of "the same man, professional, mid-thirties, smiling".
She didn't notice any of it until she put the four photos side by side at full resolution. That is the most important habit you can build.
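If you want to make that habit easy, a few lines of Python will build a full-resolution contact sheet for you. This is a minimal sketch using the Pillow library, not anything TruthHound ships; the function name and the 600-pixel default height are our own illustrative choices.

```python
from PIL import Image

def side_by_side(images, height=600):
    """Paste photos into one strip at a common height for comparison.

    Accepts file paths or already-opened PIL Images. Each photo is
    resized to the same height (aspect ratio preserved) so eyes, ears
    and hairlines sit at roughly the same scale across the strip.
    """
    resized = []
    for im in images:
        if not isinstance(im, Image.Image):
            im = Image.open(im)
        w = round(im.width * height / im.height)
        resized.append(im.resize((w, height)))
    sheet = Image.new("RGB", (sum(i.width for i in resized), height), "white")
    x = 0
    for im in resized:
        sheet.paste(im, (x, 0))
        x += im.width
    return sheet

# Demo with synthetic stand-ins for four profile photos; in practice
# you would pass the four downloaded files, e.g. ["a.jpg", "b.jpg", ...]
photos = [Image.new("RGB", (400, 500), c) for c in ("red", "green", "blue", "gray")]
sheet = side_by_side(photos, height=250)
```

Save the result with `sheet.save("compare.jpg")` and zoom in on the ears, the scar line, and the hair part.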
What TruthHound does (and doesn't do)
Our photo authenticity module checks for diffusion artefacts in the frequency domain (the kind of evidence that isn't visible to the naked eye), as well as cross-image consistency. We give you a confidence score, not a yes/no, because the alternative would be lying. A confidence above 85% on multiple images is a strong signal. A confidence of 60% on a single low-resolution screenshot is not, and we'll tell you so.
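To make "frequency domain" concrete, here is a toy version of the idea: take a 2D Fourier transform of the image and measure how its energy is distributed across spatial frequencies. This is a sketch of the general technique, not TruthHound's detector; the 0.25 cutoff and the function name are illustrative assumptions, and a real classifier learns these signatures from data rather than using a fixed threshold.

```python
import numpy as np

def high_frequency_ratio(gray, cutoff=0.25):
    """Share of spectral energy above a radial frequency cutoff.

    `gray` is a 2D float array (one channel). Camera sensor noise,
    demosaicing and JPEG compression leave one kind of spectral
    signature; diffusion models tend to leave another. The cutoff
    of 0.25 here is an illustrative choice, not a tuned threshold.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(spectrum) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum centre, normalised so that
    # 1.0 is the edge of the shorter image dimension
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(power[r > cutoff].sum() / power.sum())

# A smooth gradient concentrates nearly all energy at low frequencies;
# white noise spreads it across the whole spectrum.
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noise = np.random.default_rng(0).random((64, 64))
lo, hi = high_frequency_ratio(smooth), high_frequency_ratio(noise)
```

The point of the demo is only that the statistic separates the two synthetic cases; telling a real photo from a generated one takes far subtler features than this single number.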
We will not catch every AI face. We will not catch next month's model before we've trained on it. What we do is make the gap between "obviously AI" and "obviously human" much narrower, and surface the specific signals that fired so you can make your own call.
