
How can visual artists protect their work from AI crawlers? It’s complicated


The research team surveyed more than 200 visual artists about their demand for tools to block AI crawlers and about their level of technical expertise. The researchers also reviewed more than 1,100 professional artist websites to see how much control artists have over AI-blocking tools. Finally, the team evaluated which of these measures are most effective at blocking AI crawlers.

Currently, artists can fairly easily use tools that subtly alter their original artworks so that AI models trained on them see something different from what human viewers see. The study’s co-authors at the University of Chicago developed one of these tools, known as Glaze.

But ideally, artists would be able to keep AI crawlers from harvesting their data altogether. To do so, visual artists need to defend themselves against three categories of AI crawlers. One type harvests data to train the large language models that power chatbots, another to increase the knowledge of AI-backed assistants, and yet another to support AI-backed search engines.

Artist survey

Generative AI has severely disrupted the livelihoods of many artists, a development that has received extensive media coverage. Accordingly, close to 80% of the 203 visual artists the researchers surveyed said they have taken proactive steps to keep their artwork out of the training data for generative AI tools. Two-thirds reported using Glaze. In addition, 60% of the artists have cut back on the amount of work they share online, and 51% share only low-resolution images of their work.

Also, 96% of artists said they would like to have access to a tool that can deter AI crawlers from harvesting their data. But more than 60% of them were not familiar with one of the simplest tools that can do this: robots.txt.

Squarespace provides a user-friendly option for controlling whether AI-related crawlers are disallowed in a site’s robots.txt.

Tools for deterring AI crawlers

Robots.txt is a simple text file placed in the root directory of a website that spells out which pages crawlers may access on that site, and which crawlers are not allowed to access the site at all. Crawlers, however, are under no obligation to follow these restrictions.
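As an illustration (not an example from the paper), a robots.txt file that asks a few widely known AI training crawlers to stay away from the entire site might look like this; the user-agent names are the ones the respective vendors document, and site owners should confirm the current list with each vendor:

    # Ask some well-known AI training crawlers to stay out of the entire site
    User-agent: GPTBot           # OpenAI's training crawler
    Disallow: /

    User-agent: CCBot            # Common Crawl
    Disallow: /

    User-agent: Google-Extended  # Google's token for opting out of AI training
    Disallow: /

    # Crawlers not named above are unaffected by these rules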

Researchers surveyed the 100,000 most popular websites on the Internet and found that more than 10% explicitly disallow AI crawlers in their robots.txt files. But some sites, including Vox Media and The Atlantic, removed this prohibition after entering into licensing agreements with AI companies. Indeed, the number of sites that explicitly allow AI crawlers is growing, and it includes popular right-wing misinformation sites; the researchers hypothesize that these sites may be trying to get their misinformation picked up by LLMs.
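The paper’s measurement pipeline is not reproduced here, but the basic check is simple to sketch: fetch a site’s robots.txt and test whether a given crawler’s user agent may fetch the site root. A minimal Python sketch using the standard library’s robotparser (the list of user-agent strings is an assumption chosen for illustration):

    from urllib import robotparser

    # Illustrative list of AI crawler user agents to test; adjust as needed.
    AI_USER_AGENTS = ["GPTBot", "CCBot", "Google-Extended", "Bytespider"]

    def blocked_ai_crawlers(domain):
        """Return the AI user agents that may not fetch the site's root page."""
        rp = robotparser.RobotFileParser()
        rp.set_url(f"https://{domain}/robots.txt")
        rp.read()  # download and parse the robots.txt file
        root = f"https://{domain}/"
        return [ua for ua in AI_USER_AGENTS if not rp.can_fetch(ua, root)]

    print(blocked_ai_crawlers("example.com"))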

One issue for artists is that they often have no access to, or control over, the relevant robots.txt file. In a survey of 1,100 artist websites, researchers found that more than three quarters are hosted on third-party platforms, most of which do not allow modifications to robots.txt. Many of the content management systems artists use also give them little to no information about what kind of crawling is blocked. Squarespace is the only provider that offers a simple interface for blocking AI crawlers, yet researchers found that only 17% of artists who use Squarespace enable this option, likely because many are unaware that it exists.

But do crawlers respect the prohibitions listed in robots.txt, even though they are not mandatory?

The answer is mixed. Crawlers from big corporations generally do respect robots.txt, both in claim and in practice. The only crawler that researchers could clearly determine does not is Bytespider, deployed by TikTok owner ByteDance. In addition, a large number of crawlers claim to respect robots.txt restrictions, but the researchers were not able to verify that they actually do.

All in all, “the majority of AI crawlers operated by big companies do respect robots.txt, while the majority of AI assistant crawlers do not,” the researchers write.
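The researchers’ verification methodology is not detailed here, but the underlying idea can be sketched: compare the requests a crawler actually makes, as recorded in a web server’s access log, against the robots.txt rules it claims to honor. A rough sketch, assuming the common Apache/nginx “combined” log format and reusing Python’s standard-library robots.txt parser:

    import re
    from urllib import robotparser

    # Matches the request path and the trailing quoted user-agent field of a
    # "combined"-format access log line (an assumption about the log format).
    LOG_LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[\d.]+".*"(?P<agent>[^"]*)"$')

    def disallowed_requests(log_path, robots_url, crawler_name):
        """Return paths the named crawler fetched despite a robots.txt disallow."""
        rp = robotparser.RobotFileParser()
        rp.set_url(robots_url)
        rp.read()
        hits = []
        with open(log_path) as fh:
            for line in fh:
                m = LOG_LINE.search(line)
                if m and crawler_name in m["agent"] and not rp.can_fetch(crawler_name, m["path"]):
                    hits.append(m["path"])
        return hits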

More recently, network provider Cloudflare has launched a “block AI bots” feature. At this point, only 5.7% of the sites using Cloudflare have this option enabled. But researchers hope it will become more popular over time.

“While it is an ‘encouraging new option’, we hope that providers become more transparent with the operation and coverage of their tools (for example by providing the list of AI bots that are blocked),” said Elisa Luo, one of the paper’s authors and a Ph.D. student in Savage’s research group.

Number of sites that explicitly allow at least one AI crawler in their robots.txt over time, and number of sites that removed restrictions on AI crawlers. The vertical lines indicate public data deals between major publishers (who control 40+ domains) and OpenAI.

Legislative and legal uncertainties

The global landscape around AI crawlers is constantly shifting, shaped by ongoing legal challenges and a wide range of legislative proposals.

In the United States, AI companies face legal challenges over the extent to which copyright applies to models trained on data scraped from the Internet and over what obligations they might have to the creators of that content. In the European Union, the recently passed AI Act requires providers of AI models to obtain authorization from copyright holders to use their data.

“There is reason to believe that confusion around the availability of legal remedies will only further focus attention on technical access controls,” the researchers write. “To the extent that any U.S. court finds an affirmative ‘fair use’ defense for AI model builders, this weakening of remedies on use will inevitably create an even stronger demand to enforce controls on access.”

The work was partially funded by NSF grant SaTC-2241303 and the Office of Naval Research project #N00014-24-1-2669.

Somesite I Used To Crawl: Awareness, Agency and Efficacy in Protecting Content Creators From AI Crawlers

Enze Alex Liu, Elisa Luo, Geoffrey M. Voelker, and Stefan Savage, Department of Computer Science and Engineering, University of California San Diego
Shawn Shan and Ben Y. Zhao, University of Chicago


