
Experts warn of ‘significant weaknesses’ in tools protecting artists’ work from AI


Two of the tools most favoured by artists to safeguard their work from AI have “significant weaknesses” that mean work can still be copied without consent, researchers have warned.

Glaze and NightShade, which were both developed to protect creatives against the invasive use of generative AI, are popular with digital artists who want to stop their unique styles being copied.

The two tools, which add invisible distortions (known as poisoning perturbations) to digital images to confuse AI models during training, have been downloaded almost nine million times.

But researchers at the University of Cambridge, the Technical University of Darmstadt and the University of Texas have created a method – called LightShed – that can bypass the protections.

They say LightShed can detect, reverse-engineer and remove the distortions, effectively stripping away the poisons and rendering the images usable again for generative AI model training.
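The idea can be sketched in a few lines. The toy Python below adds a small, imperceptible distortion to an image array (a stand-in for a poisoning perturbation) and then applies a simple smoothing filter to suppress it. This is only an illustration of the concept under loose assumptions: the real tools craft their perturbations adversarially against model feature extractors, and LightShed learns to detect and reverse those specific perturbations rather than blurring.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a digital artwork: an 8x8 grayscale image in [0, 1].
image = rng.random((8, 8))

# "Poisoning perturbation" in the spirit of Glaze/NightShade: a small,
# visually imperceptible distortion. (Plain random noise is only a
# placeholder; the real tools compute adversarial perturbations.)
perturbation = 0.01 * rng.standard_normal(image.shape)
poisoned = np.clip(image + perturbation, 0.0, 1.0)

def box_blur(img: np.ndarray) -> np.ndarray:
    """3x3 mean filter with edge padding."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

# Crude "purification" in the spirit of LightShed: smoothing suppresses
# the high-frequency perturbation. (LightShed instead detects and
# reverse-engineers the perturbation before stripping it out.)
cleaned = box_blur(poisoned)

# The perturbation is tiny relative to the pixel range, so the poisoned
# image looks unchanged to a human viewer.
max_change = float(np.abs(poisoned - image).max())
```

The point of the sketch is the asymmetry the researchers describe: the distortion must stay small enough to be invisible to humans, which also makes it fragile against methods built to detect and remove it.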

Serious vulnerabilities

“This shows that even when using tools like NightShade, artists are still at risk of their work being used for training AI models without their consent,” said Hanna Foerster from Cambridge’s Department of Computer Science and Technology, who conducted the work during an internship at Technical University of Darmstadt.

Although LightShed reveals serious vulnerabilities in art protection tools, the researchers stress that it was developed not as an attack on them – but rather an urgent call to action to produce better, more adaptive ones.

“We see this as a chance to co-evolve defences,” said co-author Professor Ahmad-Reza Sadeghi from the Technical University of Darmstadt.

“Our goal is to collaborate with other scientists in this field and support the artistic community in developing tools that can withstand advanced adversaries.”

AI protections for performers

Meanwhile, more than 1,400 Equity members and actors working in film and TV have signed an open letter "to express concern at the lack of progress on securing AI protections for performers", saying they will not accept any deal that does not grant them key protections.


Equity members signing a physical open letter. Photo: Mark Thomas/Equity

The letter has been organised by performers' union Equity, which is currently negotiating a new agreement with Pact, the UK screen sector trade body for independent production and distribution companies, that will set minimum pay, terms and conditions for performers working in film and TV.

The agreement will also set the minimum terms for contracts used by streamers such as Apple, Netflix, and Disney+.

Equity officials are meeting with Pact today (25 June) to discuss AI protections for performers. Equity says it submitted a claim for the new agreement more than a year ago, but so far Pact has not presented a counterproposal for AI. 

Equity member and actress Tamsin Greig said: “The current situation, with no explicit protections in our contracts, is completely untenable. Equity members have sent a strong message to Pact that we urgently need to regulate the use of AI in film and television, and protect performers’ image, voice and likeness.”

