There’s no question that artificial intelligence (AI) is upending society, with ChatGPT and its rivals already changing the way we live. But a new AI project has just emerged that can pinpoint where almost any photo was taken – and it has the potential to become a privacy nightmare.
The project, dubbed Predicting Image Geolocations (or PIGEON for short), was created by three students at Stanford University and was designed to identify where Google Street View images were taken. But when fed personal photos it had never seen before, PIGEON was able to locate those too, usually with a high degree of accuracy.
According to NPR, Jay Stanley of the American Civil Liberties Union says this has serious privacy implications, including government surveillance, corporate tracking and stalking. For instance, a government could use PIGEON to find dissidents or see whether you have visited places it disapproves of. Or a stalker could employ it to work out where a potential victim lives. In the wrong hands, this kind of tech could wreak havoc.
Motivated by those concerns, the student creators have decided against releasing the tech to the wider world. But as Stanley points out, that might not be the end of the matter: “The fact that this was done as a student project makes you wonder what could be done by, for example, Google.”
A double-edged sword
Before we start getting the pitchforks ready, it’s worth remembering that this technology might also have a range of positive uses, if deployed responsibly. For instance, it could be used to identify places in need of roadworks or other maintenance. Or it could help you plan a holiday: where in the world could you go to see landscapes like those in your photos? There are other uses, too, from education to monitoring biodiversity.
Like many recent advances in AI, it’s a double-edged sword. Generative AI can be used to help a programmer debug code to great effect, but could also be used by a hacker to refine their malware. It could help you drum up ideas for a novel, but might assist someone who wants to cheat on their college coursework.
But anything that helps identify a person’s location in this way could be a serious threat to personal privacy – and have big ramifications for social media. As Stanley argued, it’s long been possible to remove geolocation data – stored in a photo’s EXIF metadata – before you upload your pictures. Now, that precaution might not matter anymore.
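For context, stripping that metadata has always been straightforward. Here’s a minimal sketch in Python using the Pillow imaging library (the filenames are placeholders): it rebuilds the image from its pixel data alone, so the EXIF tags – GPS coordinates included – are simply left behind.

```python
# Minimal sketch: strip EXIF metadata (including GPS tags) from a photo
# using the Pillow library (pip install Pillow). Filenames are placeholders.
from PIL import Image

with Image.open("holiday_photo.jpg") as img:
    pixels = list(img.getdata())           # copy raw pixel values only
    clean = Image.new(img.mode, img.size)  # fresh image with no metadata attached
    clean.putdata(pixels)
    clean.save("holiday_photo_clean.jpg")  # saved without the original EXIF/GPS data
```

The catch, as Stanley notes, is that models like PIGEON infer location from the image content itself – so removing the metadata no longer guarantees anonymity.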
What’s clear is that some sort of regulation is desperately needed to head off wider abuses, and that the companies making AI tech must work to limit the damage their products can cause. Until that happens, it’s likely we’ll continue to see concerns raised over AI and its abilities.
You might also like
AI regulation: “We need data protection law first”
ChatGPT explained: everything you need to know about the AI chatbot
Watch out – ChatGPT is being used to create malware