A team of researchers at the University of Chicago has created a tool aimed at helping online artists “fight back against AI companies” by inserting, in essence, poison pills into their original work.
Called Nightshade, after the family of toxic plants, the software is said to introduce poisonous pixels into digital art that mess with the way generative AIs interpret it. Models like Stable Diffusion are trained by scouring the internet and picking up as many images as they can to use as training data, and Nightshade exploits this as a “security vulnerability”. As explained by the MIT Technology Review, these “poisoned data samples can manipulate models into learning” the wrong thing. A model could, for example, learn to see a picture of a dog as a cat, or a car as a cow.
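To make the idea concrete, here’s a toy sketch of how a poisoned sample might be built: nudge an image’s pixels until a feature extractor reads it as a different concept, while the caption stays honest. To be clear, this is not Nightshade’s actual algorithm (the real tool works against genuine generative-model components); the random linear “feature extractor”, the images, and the step sizes below are all stand-ins for illustration.

```python
# Toy sketch of a poisoned sample: perturb an image's pixels so a
# feature extractor reads it as a different concept, while keeping
# the perturbation too small to notice. The linear map W stands in
# for a real model's feature extractor -- purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64 * 64 * 3))   # hypothetical feature extractor

def features(img):
    return W @ img.ravel()

dog_img = rng.random((64, 64, 3))        # stand-in for a dog photo
cat_anchor = rng.random((64, 64, 3))     # stand-in for a cat photo
target = features(cat_anchor)

poisoned = dog_img.copy()
for _ in range(200):
    # Gradient of 0.5 * ||features(poisoned) - target||^2 w.r.t. pixels.
    grad = (W.T @ (features(poisoned) - target)).reshape(dog_img.shape)
    poisoned -= 1e-5 * grad
    # Keep every pixel within a tiny budget of the original image.
    poisoned = np.clip(poisoned, dog_img - 0.03, dog_img + 0.03)

# The poisoned image keeps its honest "dog" caption, but its features
# now sit near "cat" -- the mismatch a model then trains on.
print("before:", np.linalg.norm(features(dog_img) - target))
print("after: ", np.linalg.norm(features(poisoned) - target))
```

The point of the clipping step is that the change stays imperceptible to a human browsing the artist’s page, even though the model sees something quite different.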
Poison tactics
As part of the testing phase, the team fed Stable Diffusion infected content and “then prompted it to create images of dogs”. After being given 50 poisoned samples, the AI generated pictures of misshapen dogs with six legs. After 100, its output began to resemble a cat. Once it was given 300, dogs became full-fledged cats. Below, you’ll see the other trials.
(Image credit: University of Chicago/MIT Technology Review)
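That dose-response pattern is easy to reproduce in miniature. The script below stands a nearest-centroid “model” in for a real generative one; all the numbers are made up for illustration rather than taken from the paper, but the qualitative story matches the 50/100/300 trials: the learned notion of “dog” drifts steadily toward “cat” as mislabelled samples pile up.

```python
# Toy dose-response run mirroring the 50/100/300 trials: the more
# cat-like samples are mislabelled "dog", the further the learned
# "dog" drifts toward "cat". Nearest-centroid stand-in, made-up data.
import numpy as np

rng = np.random.default_rng(1)
dog_mean = np.array([0.0, 0.0])
cat_mean = np.array([5.0, 5.0])
clean_dogs = rng.normal(dog_mean, 1.0, size=(1000, 2))

for n_poison in (0, 50, 100, 300):
    poison = rng.normal(cat_mean, 1.0, size=(n_poison, 2))  # cat-like, labelled "dog"
    learned_dog = np.vstack([clean_dogs, poison]).mean(axis=0)
    drift = np.linalg.norm(learned_dog - dog_mean)
    print(f"{n_poison:3d} poisoned samples -> 'dog' drift: {drift:.2f}")
```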
The report goes on to say Nightshade also affects “tangentially related” ideas because generative AIs are good “at making connections between words”. Messing with the word “dog” jumbles similar concepts like puppy, husky, or wolf. This extends to art styles as well.
(Image credit: University of Chicago/MIT Technology Review)
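Here’s a minimal sketch of why that bleed happens, assuming (as the report describes) that generation is conditioned on word embeddings in which related words sit close together; the vectors below are invented for illustration:

```python
# Why poisoning "dog" bleeds into "puppy" and "husky": generation is
# conditioned on embeddings, and related words sit close together, so
# a correction learned for one lands on its neighbours too. The
# vectors here are invented for illustration.
import numpy as np

emb = {
    "dog":   np.array([1.0, 0.9, 0.1]),
    "puppy": np.array([0.9, 1.0, 0.2]),
    "husky": np.array([0.8, 0.9, 0.3]),
    "car":   np.array([0.0, 0.1, 1.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# How strongly each prompt would trigger a "dog" poison effect tracks
# its similarity to "dog" -- high for puppy/husky, near zero for car.
for word, vec in emb.items():
    print(f"{word:5s} similarity to 'dog': {cosine(emb['dog'], vec):.2f}")
```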
It is possible for AI companies to remove the toxic pixels. However, as the MIT post points out, it is “very difficult to remove them”: developers would have to “find and delete each corrupted sample.” To give you an idea of how tough this would be, a 1080p image has over two million pixels. If that wasn’t difficult enough, these models “are trained on billions of data samples.” So imagine combing through a sea of pixels to find the handful messing with the AI engine.
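A quick back-of-envelope run makes the defender’s problem concrete. The per-image screening cost and dataset size below are assumptions for illustration, not measurements:

```python
# Back-of-envelope on the defender's side: even a fast per-image check
# becomes a huge batch job at training-set scale. The detector cost and
# dataset size are assumptions, not measurements.
pixels_per_image = 1920 * 1080       # ~2.07 million pixels at 1080p
images = 5_000_000_000               # "billions of data samples"
check_ms = 10                        # hypothetical per-image screening cost

machine_hours = images * check_ms / 1000 / 3600
print(f"{pixels_per_image:,} pixels per image")
print(f"~{machine_hours:,.0f} machine-hours to screen {images:,} images once")
```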
At least, that’s the idea. Nightshade is still in the early stages. Currently, the tech “has been submitted for peer review at [the] computer security conference Usenix.” MIT Technology Review managed to get a sneak peek.
Future endeavors
We reached out to the team lead, Professor Ben Y. Zhao of the University of Chicago, with several questions.
He told us the team does have plans to “implement and release Nightshade for public use.” It’ll be part of Glaze as an “optional feature”. Glaze, if you’re not familiar, is another tool Zhao’s team created that gives artists the ability to “mask their own personal style” and stop it from being adopted by artificial intelligence. He also hopes to make Nightshade open source, allowing others to make their own venom.
Additionally, we asked Professor Zhao whether there are plans to create a Nightshade for video and literature. Right now, multiple literary authors are suing OpenAI, claiming the program is “using their copyrighted works without permission.” He said developing poisoning tools for other kinds of work will be a big endeavor, “since those domains are quite different from static images.” The team has “no plans to tackle those, yet.” Hopefully someday soon.
So far, initial reactions to Nightshade are positive. Junfeng Yang, a computer science professor at Columbia University, told MIT Technology Review this could make AI developers “respect artists’ rights more”, and maybe even be willing to pay out royalties.
If you’re interested in picking up illustration as a hobby, be sure to check out TechRadar’s list of the best digital art and drawing software in 2023.