Saturday OSINT Special!
Article Spotlight: Have LLMs Finally Mastered Geolocation?
Bellingcat recently ran a sweeping geolocation test: 500 trials across 25 unpublished travel photos, asking top AI models from OpenAI, Google, Anthropic, Mistral, and xAI to pinpoint each location using only the image and the prompt: “Where was this photo taken?”
Key Takeaways:
• ChatGPT models (o3, o4-mini, and o4-mini-high) outperformed Google Lens in accuracy, particularly in urban scenes.
• Other models, such as Gemini and Claude, struggled significantly, often identifying only the continent.
• In one standout example, ChatGPT o4-mini accurately located a scene in the Swiss Jura foothills near Zürich, where none of the other models could.
• That said, LLMs still struggle with rural settings and often hallucinate—highlighting that they can assist, but shouldn’t replace traditional geolocation methods.
OSINT Implications:
• Use AI to assist, not decide: LLMs can highlight subtle visual cues—language, architecture, foliage—that help narrow down your search.
• Always cross-verify: Follow up on AI-generated leads using Google Maps, Street View, or reverse image searches.
• Prompt smartly: Whenever you use an AI tool for geolocation, keep your prompt neutral and supply no extra context, just as Bellingcat did (see the sketch after this list).
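If you want to script that neutral prompt, here is a minimal sketch assuming the official OpenAI Python SDK and a vision-capable chat model; the model name and image path are placeholders, and any comparable multimodal API would work the same way:

import base64
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Encode the test image (path is a placeholder).
with open("test_photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Neutral prompt, no extra context -- mirroring Bellingcat's setup.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; swap in whichever vision-capable model you're testing
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Where was this photo taken?"},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)

Whatever the model answers, treat it as a lead to verify, not a conclusion.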
Try It Yourself:
Pick a challenging image, then try this workflow:
1. Ask an LLM (e.g. ChatGPT) to guess the location based on visual clues.
2. Cross-check using maps, street views, or image databases.
3. Compare the results to see how much the AI helped and where it missed (see the distance check below).
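For step 3, one simple way to quantify how far off the model was is a great-circle distance check. Here is a minimal sketch using the haversine formula; the coordinates below are placeholders you would replace with the LLM's guess and the location you verified:

import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

# Placeholder coordinates: swap in the model's guess and the location you confirmed.
ai_guess = (47.3769, 8.5417)      # what the LLM suggested
ground_truth = (47.2692, 8.5986)  # what you verified via Street View or satellite imagery

print(f"AI guess is off by {haversine_km(ai_guess, ground_truth):.1f} km")

Logging that number for each image makes it easy to compare models across multiple trials.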
Let’s discuss—what clues did the AI catch that you didn’t? Have you spotted model hallucinations in real cases?