Forensics Expert Demonstrates Geometry Test For AI Media

According to YourTango, a digital forensics expert has identified a simple visual test for spotting AI-generated images and videos. The article names Farid, described as a leading expert at the University of California, Berkeley who helped found the field of digital forensics more than 20 years ago, and quotes him explaining that AI-generated media often contains geometric and perspective inconsistencies. YourTango reports that earlier detection methods based on sensor noise have become less reliable as models learn to replicate such low-level artifacts, so Farid has shifted to checking scene geometry and perspective as a telltale sign. He is quoted saying, "Generative AI doesn't know about physics, doesn't know about geometry," and the piece illustrates how the perspective rules that constrain authentic photos are ones AI often violates.
What happened
Farid, identified in the YourTango piece as a leading digital forensics expert at the University of California, Berkeley, told the outlet he is often asked to verify whether photos or videos have been manipulated. The article says Farid helped to found the field of digital forensics more than 20 years ago. YourTango quotes him explaining that earlier AI fakes were detectable from their unrealistic sensor noise and statistical artifacts, but modern generative models reproduce those low-level patterns much more convincingly. The article reports Farid's straightforward diagnostic: check the scene's geometry and perspective for inconsistencies, because, in his words, "Generative AI doesn't know about physics, doesn't know about geometry."
Editorial analysis - technical context
Editorial analysis: Geometry and perspective checks are a classic forensic heuristic because real-world imaging follows strict geometric constraints: consistent vanishing points, coherent object scale across depth, and physically plausible occlusion. Models trained on large image corpora learn visual correlations but do not encode explicit 3D physical laws, so they can reproduce local texture while violating global spatial rules. For practitioners, that means visual heuristics targeting multi-object relationships and perspective consistency remain useful complements to pixel-statistics and noise-based detectors.
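One such constraint is that edges belonging to parallel real-world structures (floorboards, window frames, road markings) must converge to a shared vanishing point in a genuine photograph. As a minimal sketch of how a practitioner might quantify this, the hypothetical helper below fits a least-squares vanishing point to a set of annotated line segments and reports how far each line deviates from it; the segment coordinates, function names, and threshold are illustrative assumptions, not a method attributed to Farid or YourTango.

```python
import numpy as np

def line_homog(p1, p2):
    """Homogeneous line (a, b, c) through two image points (x, y)."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(lines):
    """Least-squares vanishing point for a set of homogeneous lines.

    Stacks the lines into a matrix L and solves L v ~ 0 via SVD;
    returns the point in inhomogeneous image coordinates."""
    L = np.vstack(lines)
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]
    return v[:2] / v[2]

def geometry_residual(segments):
    """Mean point-to-line distance from the fitted vanishing point.

    A near-zero residual means the supposedly parallel edges converge
    consistently; a large residual flags a possible synthesis artifact."""
    lines = [line_homog(a, b) for a, b in segments]
    vp = np.append(vanishing_point(lines), 1.0)
    # Normalize each line so the point-line dot product is a distance.
    dists = [abs(l @ vp) / np.hypot(l[0], l[1]) for l in lines]
    return float(np.mean(dists))

# Hypothetical annotations: three edges that all converge at (100, 0),
# as parallel structures in a real photo would.
consistent = [((0, 10), (50, 5)), ((0, 20), (50, 10)), ((0, 40), (50, 20))]

# Same edges, but one flipped so the set no longer shares a vanishing
# point -- the kind of global inconsistency a generated image can show.
inconsistent = [((0, 10), (50, 5)), ((0, 20), (50, 12)), ((0, 40), (50, 45))]

print(geometry_residual(consistent))    # near zero
print(geometry_residual(inconsistent))  # substantially larger
```

In practice an analyst would extract the segments with an edge detector or by hand, and would compare residuals against a tolerance calibrated for annotation noise; the point of the sketch is only that the check is cheap, interpretable, and global in a way pixel statistics are not.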
Context and significance
Editorial analysis: As generative models close the gap on sensor-level artifacts, detection is shifting toward higher-level inconsistencies. This move mirrors a broader pattern in detection arms races where improvements in synthesis push defenders to exploit structural, semantic, or temporal constraints that are harder to mimic. For image-forensics teams, geometry checks are computationally cheap and interpretable compared with some ML-based detectors, making them practical first-line triage tools.
What to watch
Editorial analysis: Observers should monitor two trends: whether generative models progressively learn to respect geometric consistency through 3D-aware training or architecture changes, and whether automated detectors begin incorporating explicit 3D priors or multi-view consistency checks. Forensic workflows that combine statistical, semantic, and geometric signals will likely remain more robust than single-signal detectors.
Scoring Rationale
Practical detection guidance for image forensics is relevant to practitioners but not a frontier breakthrough. The piece highlights a usable heuristic as models improve, making it moderately important for forensic workflows and detection tooling.