Artists Who Want to Regulate AI Are Wrong

By now you’ve probably heard about the controversial use of artists’ work posted online to train AI image generators like Midjourney, DALL-E and others. There’s a whole international movement now among artists pushing for regulation of AI technologies to prevent their artwork from being scraped for training models. That sentiment seems to be universal in the online creative communities.

Who would have thought that one of the first things this technology would crack was the creative arts? That’s supposed to be an area uniquely human, unlike working a grocery till — but here we are. Artists all over the world are seeing their individual styles replicated by non-artists and their fees and job prospects eroded, and all of this seems to have happened with no chance for them to fight back. Hence the call for regulation of this dangerous new technology. Are they right?

I hate to say it, but they are not right. In fact, they are sorely mistaken and if they get their way it’s only going to hurt them in the long run. Why? For the same reason that regulation always kills competitive industries: It will give some artists a protected status, while creating barriers to entry for upcoming creatives — particularly those who are adopting AI and adding it to their toolkit. I’ve written before about the shameful ignorance most artistic people have about the free market, as well as the legal and political structures that curtail it. In the case of AI art, there are several powerful free market forces already at play offering a much better chance for artists to secure their intellectual property rights than any regulation can do.

The Big Offense

At the heart of this issue is copyright. The argument being made by artists is that AI tools can now replicate their individual styles thanks to the use of their original art in AI training models — something they were never given a chance to opt out of, or were never even aware was happening. Consequently, work opportunities are dwindling as potential employers use AI tools to mimic an artist rather than hiring that same artist directly. But if copying an artist’s style were sufficient to make a copyright claim, the whole issue would be clear cut. It would also leave the creative landscape much less vibrant. Imagine if you were the creator of pointillism and could then claim a legal monopoly over that style of painting. It would be a sweet deal for you, but it would kill any potential for creative work by others. That is why style and genre are two aspects of creative work that, by design, cannot be protected through copyright: the restriction exists to promote creative exploration.

So although the problem facing many artists stems from AI’s ability to replicate style, and style is undoubtedly the identifying characteristic that would make an artist popular or in-demand, the issue has taken a different turn. Rather than arguing about styles, artists are making claims based on the idea that they never consented to have their work used in the training models. I think this claim has some merit but it’s a tricky situation. The case of Instagram is illustrative.

It seems like Instagram has received the heaviest backlash from artists for allowing its content to be used in training models and for being slow to create opt-out permissions. Many see this as a betrayal: they loyally used the platform to build an audience and create a market for their work, and now that archive of public posts has been used to make them obsolete. In this case you might be tempted to think they have a case for a copyright claim against the AI companies. Instagram doesn’t actually own those images, so how can it offer them up for AI training without the consent of the artists?

The rub is that artists did consent. It’s all right there in the terms of service, which grant Instagram a non-exclusive, worldwide license to use and distribute images and videos posted to its platform. In this respect, we can see that there’s a degree of selfishness in the artists’ position: they want the benefits of Instagram without giving the platform any rights in return. As long as these artists retain their profiles on Instagram, which many do because of the size of their audience there, Instagram is within its rights to keep offering their art for training models. The case is basically the same on every social platform out there, and even website hosting platforms may have similar agreements.

The Maybe-Great Social Media Exodus

I could, and probably will, write a whole set of articles about how poisonous social media platforms have become for the creative world — but generative AI tools wouldn’t factor in there at all. Nevertheless, it’s this exact issue that has created a half-hearted exodus from platforms like Instagram by artists who believe AI art is creating an unethical artistic environment. Many are moving over to newly created anti-AI havens like Cara App, which was founded by Jingna Zhang, a designer/photographer who won a now-famous copyright infringement case in Europe against painter Jeff Dieschburg. In that case, Dieschburg had won a 1,500 Euro prize for a painting that very accurately recreated a photo taken by Zhang.

Source: https://today.rtl.lu/news/luxembourg/a/1923316.html

Zhang wrote of her victory:

Source: https://www.zhangjingna.com/blog/luxembourg-copyright-case-win-against-jeff-dieschburg

Zhang has since become a major advocate against “unethical AI art” in the online art community. She created Cara App (which I do think is a cool app, for the record), and it has grown explosively in recent months as artists create profiles as part of an anti-AI backlash. The Cara App team explicitly states that the app is anti-AI art, and that regulation is a mandatory component of resolving what they see as an unethical situation. Their official stance on AI images includes the following:

  • We do not agree with generative AI tools in their current unethical form, and we won’t host AI-generated portfolios unless the rampant ethical and data privacy issues around datasets are resolved via regulation.
  • In the event that legislation is passed to clearly protect artists, we believe that AI-generated content should always be clearly labeled, because the public should always be able to search for human-made art and media easily.

The problem I see with this entire movement is that it has tangled up multiple emotional concepts with an otherwise black-and-white legal situation. We already know that there is no realistic copyright claim here, because consent has been given via the rights granted to Instagram and other platforms. However, Zhang and many other artists keep describing the use of their images as being unethical, as being about consent, and as being about privacy. To understand why these elements do not come into play with AI, we should break down the actual infringement case that elevated Zhang’s status in the community.

Can AI Commit Plagiarism?

Let’s start with the obvious: The lawsuit between Zhang and Dieschburg had absolutely nothing to do with generative AI tools. It was an old-fashioned case of “hey, you copied me!” In fact, each side in that case had solid legal ground to stand on:

  • On Zhang’s side: it was fair to say that the visually powerful subject of her photo, more than Dieschburg’s technical skill or any innovative painting style of his own, was the basis of the painting’s success.
  • On Dieschburg’s side: there is no law against repainting an existing work, and his painting was apparently entered in a student contest, not sold or auctioned in a professional setting. In other words, he didn’t set out to commit forgery or pass off Zhang’s work as his original idea.

Ultimately, the case would never have been brought had Dieschburg not won a financial prize for his painting. Financial damage is a pivotal issue in copyright — we could all go put the Nike logo on shoes, but as long as we aren’t selling them or giving them away in quantities that financially impact Nike, it’s not going to be an issue. Likewise, Dieschburg was within his rights to repaint Zhang’s photo.

Ironically, it’s the fact that Dieschburg so accurately recreated the realism, striking pose and expression of the original model in Zhang’s photo that validates her claim. He didn’t necessarily use an innovative technique or unique painting style that differentiated the painting from the original photo and it doesn’t seem like anyone is arguing otherwise. The subject matter factored in, and was the major reason for its impact. So Zhang was correct when she wrote “Using a different medium was irrelevant. My work being ‘available online’ was irrelevant. Consent was necessary.” What she, and pretty much every artist calling AI art unethical, is missing is that she’s only right because the subject matter was also replicated.

We’ve made it to the core distinction between how AI models use the data they’ve been trained on and examples like the Zhang-Dieschburg case. Anybody who has spent even five minutes on Midjourney can tell you that it is virtually impossible to make it directly recreate an existing image. The diffusion mechanism used by generative AI tools doesn’t assess an image pixel by pixel and then produce a direct mimicry. It builds an image from the bottom up by inferring what you want in terms of subject and style from written prompts, and can even refer to an existing image for guidance toward a specific result — but it never produces the exact same image. Here is an example I produced to demonstrate the point:

Exact prompt: https://s.mj.run/oTkmmh5MbNU portrait of famous actress, Audrey Hepburn, taken with a FUJIFILM X-T5 Mirrorless Camera — ar 2:3

In this case, the original image (below) was fed into Midjourney in addition to a descriptive prompt. The outputs look similar to the original image, but they’re not exact. Furthermore, depending on who you ask, they may or may not look like Audrey Hepburn.

Original image of Audrey Hepburn used in my prompt.

In my opinion, the Midjourney renders are in the ballpark, but nobody can say this is an exact replica (if anything it’s a bad knockoff, right?). Keep in mind, too, that this is a very famous reference. Midjourney’s dataset presumably contains an enormous number of images of Audrey Hepburn, yet even with that it’s still unable to spit out a perfect copy of an existing work. It’s fair to say that this is the best-case scenario for attempted plagiarism, whereas for contemporary artists and photographers, AI tools like Midjourney can’t come remotely close to accurately recreating their subject matter.
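For the technically curious, the bottom-up denoising process described above can be sketched in a few lines. To be clear, everything in this toy is a stand-in I made up for illustration: real tools use learned neural denoisers and text encoders, not this closed-form arithmetic. It only demonstrates the structural point that generation starts from noise and is guided by the reference, never copied from it.

```python
# Toy sketch of the iterative denoising idea behind diffusion image models.
# All names and formulas here are illustrative stand-ins, not any real system.
import numpy as np

def toy_denoise(prompt_seed: int, reference: np.ndarray, steps: int = 50) -> np.ndarray:
    """Start from pure noise and iteratively nudge it toward a target implied
    by the 'prompt' (here just a seed) and the guiding reference image."""
    rng = np.random.default_rng(prompt_seed)
    # Stand-in for conditioning: blend the reference with prompt-driven noise.
    target = 0.5 * reference + 0.5 * rng.random(reference.shape)
    x = rng.standard_normal(reference.shape)      # begin as pure white noise
    for _ in range(steps):
        predicted_noise = x - target              # stand-in for a learned denoiser
        x = x - (1.0 / steps) * predicted_noise   # remove a little noise per step
    return x

reference = np.full((8, 8), 0.5)                  # stand-in for the Hepburn photo
out_a = toy_denoise(prompt_seed=1, reference=reference)
out_b = toy_denoise(prompt_seed=2, reference=reference)

# Different prompts give different images, and neither equals the reference.
assert not np.allclose(out_a, reference)
assert not np.allclose(out_a, out_b)
```

Even in this crude form, the reference only pulls the output in a general direction; the residual noise and the prompt conditioning guarantee the result is never a pixel-for-pixel copy, which is exactly what the Midjourney experiment above shows.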

If you really want to recreate another’s exact work of art, you’re better off remaking it from scratch, by hand. In that sense, AI is adding nothing new to the age-old crime of forgery. What AI can do now is the equivalent of an art student practicing a painting technique by studying the Old Masters. It’s not recreating the Mona Lisa, dimples and all. However, it can do all of this with an economy so extreme that it’s devastating the competitive advantage currently held by human artists. Of course that’s terrifying if you are an artist, but Zhang and others are creating a misleading narrative with their copyright claims. They simply can’t copyright their artistic style, regardless of the fact that their style is what an employer might care about. Meanwhile, the one thing they can protect, their specific artistic imagery, is the one thing that AI tools are not copying. AI styles might be lifted, but the images are in fact original.

As for consent? Well, I could remind people that your art is not your bodily autonomy and acting like you’ve been violated is excessive, but that’s beside the point. In this case, it does matter that the work is online because the platforms hosting it have legal rights too. Consent has been given and can’t be withdrawn after the fact.

Is Regulation Really the Only Answer?

I’ve written before that artists and designers are by and large leftwing, so it’s no surprise that much of the anti-AI art movement is calling for regulation to prevent competition from AI tools. If they are successful, it’s going to be disastrous not only for AI companies but for those very artists. It’s a classic example of uncritical thinking: assuming government will solve the problem with no downsides or drawbacks. They couldn’t be more wrong.

One of the most interesting things I’ve read recently about AI models is that they do not know how to forget. In fact, teaching AI models how to forget (a problem known as machine unlearning) is one of the most actively pursued areas of AI research right now, and it directly relates to lawsuits brought on behalf of angry artists. Despite what I’ve written above, there have been some successful suits against AI companies over the materials used in their models. Unfortunately, the only way to hold them accountable right now is to give their AI models the proverbial death penalty: the model and the entire dataset it was trained on must be deleted. That’s why companies are desperate to figure out a way for AI tools to forget selected bits of their training data. In other words, some of the most sophisticated technology we have, and the businesses invested in it, could be completely wiped out by this knee-jerk reaction from the artistic community.

How would this hurt the artists, though? The most obvious way is that many artists (including me) have happily adopted AI tools because they allow for fast prototyping. I personally like to then revert to hands-on adjustments, but a lot of artists are deep in the weeds and can combine AI tools into custom workflows that produce high-quality work. In fact, a new genre of art has been dubbed ‘controllist’ art, after the ControlNets used in generative AI tools. Those artists are very efficient thanks to AI, and it’s clearly unfair that they go overlooked in this debate. We’re simply at another epochal change in art, like the digital art revolution in the 1990s, and we should be celebrating the pioneers rather than the Luddites. However, if the Luddites get their way, the advancement of the art industry will stall, and probably that of other AI-adopting industries as well.

Here’s another facet of the equation that anti-AI artists are missing. Right now, AI is a huge fad, and non-artists all over the place are gushing that they can now easily create their own images, videos, etc., so why pay an artist to do it? For now, that’s a dent in the artist’s paycheck, but the excitement is going to wear off. Executives and producers aren’t artists, by which I mean they don’t have the urge to make creative things the way artists do. They get their energy from other aspects of their work, and they will therefore still want to delegate the art to an artist — the difference now will be what kind of artist?

I think it will come down to Fast-Prototypers, who have adopted AI tools into their repertoire, and Premium Artists, who have a trademark style and whose original work will be in high demand amid AI knockoffs. Nothing is as good as the real thing, and there’s already a bit of AI fatigue out there. The same artist who today is hired at an hourly rate for their watercolors will likely be able to charge specialty rates for that work in the future precisely because it’s not AI. But not if they manage to kill AI with regulations today.

Competition — The Artist's Real Hope

Regulation is clearly a poor way to protect artists from the threat that AI poses. The obvious alternative is a free market of competing products that give artists a bit of leverage in the new AI environment. Sadly, given that many artists are leftwing, they tend to undervalue this fact, even though several competitive options are already on the market.

The first and most obvious is Cara App, which we talked about before. If you set aside its founder’s push for regulation, Cara App is a great example of market competition. It’s an AI-free alternative to popular apps like Instagram. Great! It has an uphill battle of course, and might naturally lean toward a niche user base rather than Instagram’s broad audience, but that’s the case for any newcomer in any industry.

The more interesting examples of active competition to AI are protective tools like Glaze and Nightshade, which provide a kind of anti-AI shield for artists who want to post their work to popular platforms. They are the products of a research team at the University of Chicago, who regularly stress test and improve them to stay competitive against improved AI learning models. These tools are a more compelling option than independent platforms like Cara App because they take the fight directly to the AI battleground. In a nutshell, they make images worthless as training data by applying subtle perturbations to the image’s pixels so that AI models misread it entirely. The cool part is that they do this without noticeably changing what the image looks like to the naked eye. Glaze is the more defensive option, while Nightshade is described as an offensive mechanism: it actually turns the image into bad data that can corrupt or wildly distort an AI’s learning. It’s like art-hacking.
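To make the mechanism concrete, here is a toy sketch of the general adversarial-perturbation idea these tools are built on. This is not Glaze’s or Nightshade’s actual algorithm (the real tools optimize their perturbations against learned feature extractors); the checkerboard “feature” below is a made-up stand-in. It only illustrates the principle: per-pixel changes too small to see can still swing a model-facing quantity dramatically.

```python
# Toy sketch of the adversarial-perturbation idea behind tools like Glaze and
# Nightshade. The 'feature' here is a hypothetical stand-in, not a real model.
import numpy as np

EPSILON = 0.03  # max per-pixel change on a 0..1 scale: invisible to the eye

def checkerboard(shape):
    """A +/-1 pattern standing in for a feature direction a model reacts to."""
    return (np.indices(shape).sum(axis=0) % 2) * 2.0 - 1.0

def toy_cloak(image: np.ndarray) -> np.ndarray:
    """Nudge every pixel by at most EPSILON along the feature direction."""
    perturbed = image + EPSILON * checkerboard(image.shape)
    return np.clip(perturbed, 0.0, 1.0)

def toy_feature(image: np.ndarray) -> float:
    """A hypothetical model 'feature': correlation with the checkerboard."""
    return float(np.mean(image * checkerboard(image.shape)))

image = np.full((64, 64), 0.5)                    # stand-in artwork
cloaked = toy_cloak(image)

# Imperceptible to a human: no pixel moved by more than EPSILON...
assert np.max(np.abs(cloaked - image)) <= EPSILON + 1e-9
# ...yet the model-facing 'feature' shifted substantially.
assert abs(toy_feature(cloaked) - toy_feature(image)) > EPSILON / 2
```

The asymmetry is the whole trick: the human eye averages over neighborhoods and misses the tiny pattern, while a model that happens to be sensitive to that direction sees a very different image, which is what makes the cloaked file useless or even poisonous as training data.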

Artists should see these competitive tools as the superior alternative to regulations. Unfortunately, some see them as a stop-gap awaiting government interventions. For example, the first artwork ever “shaded” using Nightshade was a piece called Belladonna by an artist named Eva Toorenent. But when interviewed by ImagineFX about AI’s impact on artists, including these protective tools, she said “These tools allow artists to protect themselves until regulations and laws are passed to safeguard creatives against generative AI.” But doesn’t that miss the point by a million miles? Why do you need a regulation when superior, adaptive techniques are already at your fingertips? You would think that the artist who helped pioneer the technology would have a bit more perspective, but artists like Eva apparently take it as an article of faith that big government will protect them better than their own creative initiative.

Belladonna by Eva Toorenent

These are just a couple of the competitive tools I’ve been reading about, and I’m sure there are hundreds more, because AI has opened up an incredible new market. Unlike regulations, these competitive products are flexible, adapting to serve artists with the things they really need — both the purists and the AI adopters. The only complaint I’ve seen against tools like Glaze and Nightshade is that they require a bit of processing power to run, which costs money. How seriously can we really take this argument? Most tools used by commercial artists are expensive anyway. Moreover, the team behind Glaze has even responded to artists and made the tool free to use, funding it with grant money instead. But honestly, shouldn’t artists who are fighting for what they claim is their main source of income be willing to pay to defend it?

I think this underscores the final point to be made on the issue: Artists aren’t really upset that AI is coming for their jobs, nor are they upset that it impacts creativity. What actually bothers a lot of them is that they have been getting a free ride to promote themselves on social platforms for over two decades and the bill finally arrived. That is why the response has been loud, but few have actually ditched platforms like Instagram. So they’ll just have to weigh the options: bad regulation that will ultimately kill their industry, or a reality check that means adopting new tools to help them compete in a new AI world.