
Poisoning Generative AI with Nightshade

Nightshade will have an impact on the AI ecosystem

Artificial intelligence models have already dramatically changed the way many people approach their work. Bloggers rely on AI tools like ChatGPT to summarize data for their content. Designers utilize a number of AI tools to create new images for various projects. And coders leverage AI’s capacities to identify coding errors and to speed up the coding process. While AI looks to be a very useful tool for these professions, that doesn’t mean it’s without controversy. This is especially true when it comes to artists who feel AI training models use their works without permission. As it turns out, however, these same artists now have a new data poisoning tool that could prevent such unauthorized use. And it could have a major impact on the AI ecosystem well beyond this.

Programs that foil generative AI’s learning will have a big impact on the AI ecosystem.

(Someone gave ChatGPT a voice–read all about it in this Bold story.)

This new data poisoning tool was developed by a team out of the University of Chicago. Named “Nightshade,” it lets artists contaminate their images’ pixels so that AI models misinterpret them and produce distorted results. The nice thing for artists is that the changed pixels are invisible to the human eye and only affect machine learning systems. That means their art keeps its intended appearance but cannot be used to train AI without serious repercussions. When it comes to AI training efforts, Nightshade and other programs like it could be a game-changer. And given how it publicizes AI security issues, its impact on the AI ecosystem could be much more substantial.

“It is going to make [AI companies] think twice, because they have the possibility of destroying their entire model by taking our work without our consent.” – Eva Toorenent, Illustrator and Artist

Issues with AI Training Models

The reason that programs like Nightshade have been pursued relates to the process by which these AI systems are trained. When it comes to image AI platforms, training and machine learning rely on billions of Internet images. These images are not accessed or used with artist consent, however. This is why hundreds of lawsuits now exist against various AI companies claiming copyright infringement. In most cases, artists have little hope of winning their cases, especially when going up against giants like Google and Meta. And even if such AI companies choose to work with artists, they usually retain most of the power in the negotiations. From artists’ perspective, the AI ecosystem’s impact on them is hardly fair.

The problem with AI systems training on Internet images extends beyond a simple copyright issue. The specific images used for training aren’t duplicated when AI creates a new image from a user prompt. Instead, AI-created images are completely unique and new, which has raised questions about image ownership. Regardless, artists contend that simply accessing their art without permission is a violation for which they should be compensated. With AI and machine learning systems evolving so quickly, however, there’s little time to lose. As such, researchers began working on a new data poisoning tool as an alternative plan. With this new data poisoning tool, artists now have a way to fight back.

(Dive into how deepfakes are created for audio and video–check out this Bold story.)

“[Vulnerabilities] don’t magically go away for these new models, and in fact only become more serious. This is especially true as these models become more powerful and people place more trust in them, since the stakes only rise over time.” – Gautam Kamath, Assistant Professor, University of Waterloo

Sure, generative AI is cool, but what if you don’t want AIs to put their paws all over your content?

Introducing Nightshade and Glaze

Given the current impact of the AI ecosystem on artists, a team at the University of Chicago pursued an innovative approach. They created a couple of programs that allow artists to manipulate their images before posting them online. One such platform is Glaze, which changes an image’s pixels without noticeably affecting its overall appearance. However, these changes are enough to trick AI into reading the image’s style as something completely different. Basically, this new data poisoning tool might convince AI it’s anime instead of cubism, or impressionistic instead of a cartoon. This is naturally quite effective in protecting artists’ work.

The more recent program these same researchers created is called Nightshade. At a basic level, Nightshade works the same way as Glaze. However, instead of manipulating AI’s interpretation of style, it does so at the object level. This new data poisoning tool can result in AI thinking a dog is a cat or even that a car is a cow. The researchers plan on combining Glaze and Nightshade in the same package, giving artists a choice in how best to protect their works. They also will provide these as open-source platforms, letting others enhance them to an even greater extent. Notably, these programs should have a serious impact on the AI ecosystem moving forward.
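For readers curious about the underlying idea, here is a minimal Python sketch of the general technique both tools build on: a small, tightly bounded change to an image’s pixels that people are unlikely to notice but that alters the raw data a model trains on. This is a conceptual illustration only, with a hypothetical cloak function, epsilon budget, and file names; the real Glaze and Nightshade optimize their perturbations against a model’s feature space rather than adding random noise.

```python
# Conceptual sketch only: add a small, bounded ("imperceptible") perturbation
# to an image. Glaze and Nightshade optimize their perturbations against a
# target model's feature space; the random noise here is just a stand-in.
import numpy as np
from PIL import Image

EPSILON = 4  # max change per color channel (out of 255) -- visually negligible


def cloak(input_path: str, output_path: str, seed: int = 0) -> None:
    """Save a copy of the image with a small, bounded pixel perturbation."""
    img = np.asarray(Image.open(input_path).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(seed)
    # Noise in [-EPSILON, EPSILON] per channel, then clip back to valid range.
    noise = rng.integers(-EPSILON, EPSILON + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(output_path)


# Example usage (hypothetical file names):
# cloak("artwork.png", "artwork_cloaked.png")
```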

“We don’t yet know of robust defenses against these attacks. We haven’t yet seen poisoning attacks on modern [machine learning] models in the wild, but it could be just a matter of time.” – Vitaly Shmatikov, Professor, Cornell University

From Defending to Hacking

A new data poisoning tool gives content creators the ability to keep AIs away from their work.

The primary goal for Nightshade as a new data poisoning tool is to put power back in the hands of artists and creators. But while these tools can protect individual works, altering AI training models is a different story. As part of their studies, the researchers introduced contaminated images in different volumes to see how AI training was altered. When 50 poisoned images of a single object like a dog were introduced, AI creations began to look distorted. Once 300 poisoned images were used, AI began interpreting that object as something completely different. The problem is that training datasets contain billions of images, which means volume matters. At a minimum, many thousands of poisoned images would have to enter an AI training model to undermine its overall function. But this remains a possible end result of these new tools in terms of a more global impact on the AI ecosystem.

For now, Nightshade and Glaze open the door for the protection of artists’ works. But down the road, these same tools may introduce new ways for AI systems to be “hacked.” This would greatly raise the bar for the cybersecurity protections these same AI systems require.
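To get a rough feel for the dose-response effect the researchers describe, the toy Python sketch below flips the labels on a growing number of training samples for a simple classifier and reports how its test accuracy degrades. This is not Nightshade’s actual attack, which targets image-caption pairs used to train generative models; it only illustrates how a model’s behavior shifts as the number of poisoned samples grows.

```python
# Toy dose-response illustration of data poisoning (label flipping), not the
# actual Nightshade attack: poison more training samples, get worse behavior.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n_poisoned in (0, 50, 300, 1000):
    y_poisoned = y_train.copy()
    # Relabel up to n_poisoned class-0 samples as class 1 ("dog" -> "cat").
    flip_idx = np.where(y_poisoned == 0)[0][:n_poisoned]
    y_poisoned[flip_idx] = 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"{n_poisoned:>4} poisoned samples -> test accuracy "
          f"{model.score(X_test, y_test):.2f}")
```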


Did you catch Bold Business’ series on AI deepfake technology? Dig into one of the stories here!
