Computer scientist Ben Zhao has come up with a tool called 'Nightshade' that lets artists fight back against AI image generators that access their work without permission, by 'poisoning' the data those models train on.
While AI has advanced rapidly and can produce impressive results, it also poses a real threat to many artists. Artists have been facing copyright infringement, with original works stolen or quietly scraped by AI image generators, costing creators credit for their work. To tackle this, My Modern Met shared that a scientist has developed a tool called "Nightshade" to poison such AI generators. According to the article, the tool is named after the nightshade plant, which was historically used to poison kings and emperors. It was developed by Ben Zhao, a computer science professor at the University of Chicago.
Introducing Nightshade: a new tool that helps artists poison AI models with corrupted training data! Learn more about it from @UChicago researchers led by Prof. Ben Zhao: #AI #ComputerScience #ArtificialIntelligence https://t.co/H8GA67lt2N— AITopTools (@aitoptools) October 24, 2023
What makes the tool notable is its ability to disrupt AI generators. All artists have to do is embed invisible pixel-level changes into their artwork; once that artwork is scraped into an AI training set, it corrupts the model's training. While the tool is still under review, it is a breath of relief for many artists whose works are being used by companies and others without their knowledge or permission. Simply put, the tool has the power to disrupt AI image-generation technology. The basic idea of Nightshade is to manipulate AI models into producing distorted and false results, undermining their ability to copy and reproduce artists' work.
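To make the "invisible pixel changes" idea concrete, here is a minimal toy sketch in Python. It is not Nightshade's actual algorithm (which reportedly uses carefully optimized perturbations targeted at model features); it only illustrates the general concept of nudging pixel values by an amount too small for a human to notice while still changing the numbers a model would train on. The function name and the `epsilon` bound are illustrative assumptions.

```python
import numpy as np

def add_invisible_perturbation(image, epsilon=2, seed=0):
    """Toy illustration (NOT the real Nightshade method): shift every
    pixel by at most `epsilon` intensity levels. The change is
    imperceptible to a viewer but alters the numeric training data."""
    rng = np.random.default_rng(seed)
    # Random perturbation in [-epsilon, epsilon] for each pixel.
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Keep values in the valid 8-bit range after perturbing.
    poisoned = np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)
    return poisoned

# Example: an 8x8 grayscale "artwork" of uniform mid-gray.
art = np.full((8, 8), 128, dtype=np.uint8)
poisoned = add_invisible_perturbation(art)

# Per-pixel change is bounded by epsilon, so it stays invisible.
max_change = int(np.max(np.abs(poisoned.astype(int) - art.astype(int))))
print(max_change)  # at most 2
```

A real poisoning attack would choose the perturbation adversarially rather than randomly, so that the model learns a wrong association (for example, associating a "dog" label with cat-like features), but the delivery mechanism is the same: tiny pixel-level edits hidden in the published image.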
#Nightshade: This new tool lets artists fight AI image bots by hiding corrupt data in plain sight.— InfoDroplets (@InfoDroplets) October 24, 2023
A team at the University of Chicago created Nightshade to protect ideas and content.https://t.co/y6JgB0RQxS pic.twitter.com/rzSFrRNU0f
The article also reports that once the poisoned data finds its way into a model, it is difficult to remove and is likely to degrade the system. This adds to ongoing efforts to protect artists' originality and ensure their work is not exploited by unscrupulous companies. A recent tweet from @spawning shared that artists have opted more than 78 million artworks out of AI training sets. MIT Technology Review reported that Zhao has also created another tool called "Glaze," which adds a further layer of protection for artists and their creations. It allows creators to mask their personal styles so that AI models are unable to identify and copy them. The article also mentioned that Glaze is designed in a similar way to Nightshade.
Computer Science Professor Ben Zhao at the University of Chicago has developed “Nightshade,” an online tool that can bring a watershed moment in the AI-Art landscape. #Nightshade #AI #ARThttps://t.co/cd2PzSRHUQ— Analytics Drift (@AnalyticsDrift) October 27, 2023
It works through pixel-level manipulations that heavily distort what machines and AI models perceive, causing them to see illusions rather than the actual objects or elements. While the new tool cannot undo damage already done, it gives artists hope to fearlessly create and share more of their work with the world. Zhao replied to a tweet clarifying, "Nightshade's purpose is not to break models. It's to disincentivize unauthorized data training and encourage legit licensed content for training." @AnalyticsDrift shared a tweet giving a rough idea of Nightshade's potential and why it is a champion for artists who put time and effort into their masterpieces. @anaisisreading shared a quote from Zhao that read, "We created Nightshade because right now, AI companies hold all the cards—and we need to tip the power balance back in favor of artists."