How artists can poison their pics with deadly Nightshade to deter AI scrapers

University of Chicago boffins this week released Nightshade 1.0, a tool built to punish unscrupulous makers of machine learning models who train their systems on data without getting permission first.

Nightshade is an offensive data poisoning tool, a companion to a defensive style protection tool called Glaze, which The Register covered in February last year.

Nightshade poisons image files to give indigestion to models that ingest data without permission. It’s intended to make those training image-oriented models respect content creators’ wishes about the use of their work.

“Nightshade is computed as a multi-objective optimization that minimizes visible changes to the original image,” said the team responsible for the project.

“For example, human eyes might see a shaded image of a cow in a green field largely unchanged, but an AI model might see a large leather purse lying in the grass.”
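To make the "multi-objective" bit concrete, here is a rough sketch, in PyTorch, of what that kind of optimization can look like. To be clear, this is not the project's released code: the encoder, loss weights, and function names below are stand-ins chosen purely for illustration.

```python
# Illustrative sketch only -- not Nightshade's actual implementation.
# The idea: perturb an image so a feature extractor "sees" an unrelated
# anchor concept (the leather purse), while keeping the pixel-level change
# small enough that a human still sees the original (the cow).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Any differentiable image encoder will do for the sketch; a ResNet-18
# backbone stands in for the CLIP-style encoders the paper targets.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = torch.nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def embed(x: torch.Tensor) -> torch.Tensor:
    return encoder(x).flatten(1)

def poison(source_path: str, anchor_path: str, steps: int = 200,
           eps: float = 0.05, lam: float = 10.0) -> torch.Tensor:
    """Return a shaded version of the source image whose features drift
    toward the anchor concept while staying visually close to the source."""
    x = preprocess(Image.open(source_path).convert("RGB")).unsqueeze(0).to(device)
    a = preprocess(Image.open(anchor_path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        target_feat = embed(a)

    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=1e-2)

    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        feature_loss = F.mse_loss(embed(x_adv), target_feat)  # look like the anchor to the model
        visual_loss = delta.pow(2).mean()                      # stay close to the original pixels
        loss = feature_loss + lam * visual_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # hard cap on per-pixel change

    return (x + delta).clamp(0, 1).detach()

# e.g. shaded = poison("cow.png", "leather_purse.png")
```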

Nightshade was developed by University of Chicago doctoral students Shawn Shan, Wenxin Ding, and Josephine Passananti, and professors Heather Zheng and Ben Zhao, some of whom also helped with Glaze.

Described in a research paper in October 2023, Nightshade is a prompt-specific poisoning attack. Poisoning an image involves picking a label (e.g. a cat) that describes what’s actually depicted in order to blur the boundaries of that concept when the image gets ingested for model training.

So a user of a model trained on Nightshade-poisoned images might submit a prompt for a cat and get back an image of a dog or a fish instead. Unpredictable responses of this sort make text-to-image models significantly less useful, which means model makers have an incentive to ensure they only train on data that’s been offered freely.
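At training time the damage comes from the mismatch between the caption a scraper records and what the shaded pixels actually teach the model. Schematically, and purely for illustration rather than anything from the paper's code, the poisoned samples look like ordinary caption-image pairs:

```python
# Illustrative only: the shape of a prompt-specific poisoning attack.
# Each sample carries an honest-looking caption for the targeted concept
# ("cat"), but the image has been shaded so the model's encoder reads it
# as something else entirely. File names here are made up.
poisoned_samples = [
    {"caption": "a photo of a cat", "image": "shaded_cat_001.png"},  # model "sees" a dog
    {"caption": "a photo of a cat", "image": "shaded_cat_002.png"},  # model "sees" a fish
    # ...repeated at scale, so the model's notion of "cat" gets blurred
]
```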

“Nightshade can provide a powerful tool for content owners to protect their intellectual property against model trainers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists,” the authors state in their paper.

The failure to consider the wishes of artwork creators and owners led to a lawsuit filed last year, part of a broader pushback against the permissionless harvesting of data for the benefit of AI businesses. The infringement claim, made on behalf of several artists against Stability AI, Deviant Art and Midjourney, alleges that the Stable Diffusion model used by the defendant firms incorporates the artists’ work without permission. The case, amended in November 2023 to include a new defendant, Runway AI, continues to be litigated.

The authors caution that Nightshade does have some limitations. Specifically, images processed with the software may be subtly different from the original, particularly artwork that uses flat colors and smooth backgrounds. Also, they observe that techniques for undoing Nightshade may be developed, though they believe they can adapt their software to keep pace with countermeasures.

Matthew Guzdial, assistant professor of computer science at the University of Alberta, said in a social media post, “This is cool and timely work! But I worry it’s being overhyped as the solution. It only works with CLIP-based models and per the authors, would require 8 million images ‘poisoned’ to have significant impact on generating similar images for LAION models.”

Glaze, which reached version 1.0 last June and is now on release 1.1.1, with a web version also available, alters images to prevent models trained on those images from replicating the artist’s visual style.

Style mimicry – available through closed text-to-image services like Midjourney and through open-source models like Stable Diffusion – is possible simply by prompting a text-to-image model to produce an image in the style of a specific artist.
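In practice, mimicry takes no more than a line of prompting against an off-the-shelf pipeline. The example below uses the open source diffusers library; the checkpoint and prompt are illustrative placeholders, not anything cited by the researchers.

```python
# Sketch of prompt-level style mimicry with an open source Stable Diffusion
# pipeline via Hugging Face diffusers. The checkpoint and prompt are
# placeholders for illustration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One line of prompting is all it takes when a model has been trained on
# an artist's unprotected work.
image = pipe("a castle at dusk in the style of <artist name>").images[0]
image.save("mimicry_example.png")
```

Glaze's cloaking is aimed at exactly this shortcut: perturb the artwork before it gets scraped, so a model trained on it cannot cleanly reproduce the style the prompt asks for.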

The team believes artists should have a way to prevent the capture and reproduction of their visual styles.

“Style mimicry produces a number of harmful outcomes that may not be obvious at first glance,” the boffins state. “For artists whose styles are intentionally copied, not only do they see loss in commissions and basic income, but low quality synthetic copies scattered online dilute their brand and reputation. Most importantly, artists associate their styles with their very identity.”

They liken style mimicry to identity theft and say it acts as a disincentive for aspiring artists to create new work.

The team recommends that artists use both Nightshade and Glaze. Presently the two tools each must be downloaded and installed separately, but a combined version is being developed. ®
