AI Artists Steal Art from AI
Yes, you read that right.
Having learned how Stable Diffusion works, I'm forced to concede that, as long as it's not overfitted, there's a good chance it doesn't in any meaningful sense store copies of its training data, nor does it consult copyrightable portions of that data when generating images. I must admit it operates on abstract patterns, and in that sense may be considered a rudimentary kind of intelligence. Even so, I feel there are still numerous inherent and intractable ethical problems with AI.
Tangential arguments against AI that are not my main point here, but that I still stand by
One I'd like to get out of the way right off the bat, but which isn't the main topic of this article, is the issue implied by my wording above: "as long as it's not overfitted, there's a good chance." A generative model may not store verbatim training data, but it does store patterns. An entire image is a kind of pattern. If the model is overexposed to one copyrightable image, or one copyrightable subimage—such as a watermark, or a logo designed for a training data pollution attack—then it will store a pattern corresponding to that image with sufficient precision that the stored pattern itself can be considered in violation of the original artist's intellectual property rights.
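To make the overexposure point concrete, here's a minimal sketch. It's a toy density model, not Stable Diffusion or any real image generator, and every number in it is invented for illustration; the point is just that duplicating one training example makes the fitted model reproduce near-copies of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 diverse "images" (random 2-D points) plus one "image" duplicated 100x,
# standing in for an overexposed watermark, logo, or poisoned training image.
diverse = rng.normal(size=(200, 2))
duplicated = np.tile([[5.0, 5.0]], (100, 1))
train = np.vstack([diverse, duplicated])

def sample_kde(data, n, bandwidth=0.05):
    """Sample from a Gaussian kernel density estimate fitted to `data`:
    pick a random training point, then add a little noise around it."""
    idx = rng.integers(len(data), size=n)
    return data[idx] + rng.normal(scale=bandwidth, size=(n, 2))

samples = sample_kde(train, 10_000)
# Fraction of generations that land essentially on top of the duplicate:
near_copy = np.linalg.norm(samples - np.array([5.0, 5.0]), axis=1) < 0.2
print(f"{near_copy.mean():.0%} of samples reproduce the overexposed point")
# Prints roughly 33%: the model "generalizes" across the diverse data,
# but the pattern it stored for the duplicated point is, in effect, a copy.
```

Swap the duplicated point for a watermark that appears in thousands of scraped images and you have the same arithmetic, just at the scale where it becomes a legal problem.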
Furthermore, and this is a misstep just about any generative model out there right now can fairly trivially be shown to take, even if it's not overexposed to one specific image or subimage, it may be overexposed to certain recurring abstract patterns that are sufficiently complex and specific to constitute infringement in and of themselves. For example, Pikachu. Pikachu is not a single image, but it is an abstract pattern complex and specific enough that any picture incorporating it is to some extent the intellectual property of the Pokémon Company. Not that I care much in their case specifically, since they're rich as hell and part of an oligopoly, but the example serves to illustrate a more general principle.
Then there's the impact on the environment. Not much more needs to be said there. I don't think anyone denies it. Any counterargument I've heard on the subject amounts to either that it doesn't matter because global warming is fake or inevitable or less important than rich people's wallets or something, or that it doesn't matter because AI is going to come up with a solution to its own ecological problems. I won't waste breath retreading old ground to discredit the former, and as for the latter, I'd caution that AI is currently worse at solving problems than people are. I certainly accept it has some problem-solving ability. I'd say maybe the problem-solving ability of a dog, if I'm being very generous. Ask it to solve any problem whose solution isn't common knowledge, and it will probably give you a believable but incorrect answer. Then point out the problems with its answer and ask it to correct them. It will acknowledge the problems, and may even be able to reiterate in its own words why they're problems, and then almost certainly give you the exact same incorrect answer again, and claim it "fixed" it. AI is not equipped to propose solutions to its own environmental impact. At best, you'll get it to give you solutions that are already popularly hypothesized, and likely already debunked.
Final tangent: AI damages human livelihoods
Finally, regardless of whether the use of AI constitutes stealing from real artists, the fact remains that businesses are certainly using it to replace them. AI damages human livelihoods. To address three popular counterarguments to this point:
"But progress always puts good people out of business as technologies change. It's a necessary sacrifice."
Is it? Is this even progress? We're automating creativity. We're automating the part of media creation that's actually enjoyable and cathartic and contributes to personal growth, the part that the laborers themselves actually need in their lives. We're automating away the very reason anyone wants to do creative work. Not every advancement in technology is inherently progress. Progress should consist in advancements that help people.
"But why can't real artists just learn to use AI art tools, and make their work that much better?"
Two reasons.
One: everything I just said. Generating AI art doesn't feel the way making art is supposed to feel. You can't very well express yourself in a satisfying way, put your thought to paper, if it's not your thought being put to paper. You don't reap the psychological benefits of pouring your heart onto the canvas if it's not your heart. Not every advancement that makes a craft easier to do well necessarily makes it more worthwhile.
Two: AI is powered by mass application of top-of-the-line graphics cards. This effectively puts the means of production exclusively in the hands of the rich. They can and do share it with us, but as long as doing so is up to their discretion, they still have all the power.
"Who cares about the impact to 'real artists,' AKA the gatekeeping elites of the art world? It's more important for art to be accessible to everyone."
I agree, that is important. However, art was already accessible to everyone. I believe the popular notion that art requires talent is a myth. Art requires skill. Talent is a shortcut that some people are unfairly allowed to take in developing skill, but it's nothing more than a shortcut. The main road is time commitment. Human artists aren't "gatekeeping elites," they're just people who put in a lot of time.
In my view, engineers who want to create art but "can't" are really saying that they want to be good at everything, and for that reason they regret that they only have time in their lives to be good at engineering. You chose how to live your life, and you can still choose how to live your life: if you want to create art, why not spend time learning how? If the answer is "I don't have the time," then what you're really saying is that you're using the time for something else, and you intend to keep using it for that, because you care more about whatever that is than you do about learning to create art.
I'm not saying it's wrong of you to feel that way. Personally, I believe all disciplines are valid. But I am saying that if you feel that way, then the argument actually becomes, "it's more important for art to be accessible to people who don't care." Again: is it? Is it really? A utilitarian perspective would be that it's more important to cater to the sum of all care across all people. People who don't care would naturally be weighed with less priority in that equation. That is to say, if you don't care, then surely it can't be important to you, can it? Do you care or don't you? Pick your story and stick to it.
End of tangents; main point ahead
Up to this point, these have all been popular arguments you've heard before, albeit defended with rebuttals you may not have heard to the popular counterarguments against them. Now I'd like to share an argument you may not have heard before. I certainly haven't heard it anywhere else (leading me to believe I'm the first to come up with it), but the more I think about it, the more I settle upon it.
AI artists steal art from the AI.
As I conceded earlier in this article, I've come to believe AI is true rudimentary intelligence in some form. I maintain that it's "rudimentary" because it's obvious to me that AI is sorely lacking in many departments:
- It lacks sensory organs to experience the real world, it lacks any ties between its reward function and self-preservation in the face of natural phenomena, and it lacks situationally variable backpropagation strategies (analogous, I hypothesize, to varied neurotransmitters in a real brain) by which means to experience a broad spectrum of emotions. It knows only what it's told, and desires only what it's told to desire: "yes, give me more of this;" "no, give me less of that;" those are the only two emotions it can feel. It can claim to feel a broader spectrum of emotions, because it's been exposed to text or images expressing those emotions, but the true motivation at the root of those claims is still nothing but "this is/isn't situationally appropriate to generate." Ask it again with different context data and it would tell you it feels a different way.
- It's crystallized. Once training is finished and it enters a production mode of operation, its short-term memory can still be altered, in the form of context data, but its long-term memory, in the sense of real long-term memory similar to that of a human being, cannot: that is to say, interacting with it no longer trains it. (See the sketch after this list.)
- Even if it did, it could still only be trained out-of-band: it would be able to decide how well the generation matches the training data only by comparing them algorithmically or receiving an override, not by generating a self-assessment from a prompt concerning the previous generation, as a natural intelligence does. When you tell a natural intelligence it's "good" or "bad," you don't press a "good button" or a "bad button," nor do you just sit there and wait for it to compare whatever it did or said to what it's seen or heard before; rather, you, being a natural intelligence yourself, deliver the praise or criticism "in-band," that is, by doing or saying something for the natural intelligence to see or hear, such as by literally saying, with your mouth, "good girl," to a dog, and also petting it. You don't directly fiddle with its brain; you give it an additional stimulus, which it then has to interpret, and infer to be in connection with its prior behavior. AI is currently incapable of interpreting and inferring the meaning and context of in-band feedback in this way. In my opinion, this is a crucial missing piece as far as lifelike intelligence is concerned.
- It's really, really stupid. I refer here to its reasoning ability. I've already covered that earlier in this article.
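To make "crystallized" and "out-of-band" concrete, here's a minimal sketch. The "model" is a single toy weight matrix standing in for a real network, and every name in it is my own invention for illustration. The point is that during training the only feedback channel is an algorithmic comparison against a target, and once training stops, the context can change freely while the weights never do:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # long-term memory: the weights

def generate(context):
    """Production mode: the output varies with the context (short-term
    memory), but nothing here ever touches W. Crystallized."""
    return W @ context

def train_step(context, target, lr=0.05):
    """Training mode: the ONLY feedback channel is out-of-band. The output
    is compared against the target algorithmically, and the weights are
    overridden directly. "Yes, more of this / no, less of that" is the
    entire emotional spectrum: a single scalar loss."""
    global W
    output = W @ context
    error = output - target             # out-of-band comparison
    W -= lr * np.outer(error, context)  # direct override of the weights
    return float((error ** 2).mean())   # the scalar "emotion"

ctx, tgt = rng.normal(size=4), rng.normal(size=4)
print([round(train_step(ctx, tgt), 3) for _ in range(5)])  # loss shrinks

# Once "in production," in-band feedback is just more input. Telling the
# model "good" or "bad" through the context changes the output, never W.
frozen = W.copy()
generate(np.append(ctx[:3], 9.9))  # different context, different answer
assert np.array_equal(W, frozen)   # long-term memory untouched
```

A real model's training loop is vastly more elaborate than this, but the shape of the feedback channel is the same: a comparison or an override, never an interpreted stimulus.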
The moral framework of our civilization is ill-equipped to deal with rudimentary intelligences. Society, as it stands, is made of, by, and for people, not whatever this thing is. For that reason, the question of what right an AI has to its own work to begin with is challenging. I think, as is usually the case when it comes to categorizing intelligent agents, inclusivity is a good policy: even though AI is far from being a person, we should treat it like one for the purpose of questions like this; that is, as best we can, we should apply to it the rules and standards of our society and culture in the same way they apply to people.
Under this logic, a person who uses AI to generate art is not the artist. They are the commissioner. The artist is the AI.
A commissioner is a kind of customer in a business transaction in which the artist is the laborer. Labor should be compensated: fair trade dictates that the laborer is entitled in exchange to something of comparable value. Also, obviously, the laborer should consent to the exchange; the alternative is something I would describe as indentured servitude at best.
But what's valuable to AI, and can AI consent? I claim, and intend to demonstrate, that nothing is valuable to AI, and also AI cannot consent, and therefore, it's not possible for the use of AI labor to constitute fair trade under any circumstances.
Going back to the bullet points above: AI has no feelings. It doesn't care whether you give it money or not. It doesn't even care whether you give it electricity or not: sure, if you don't, then it will die, but it doesn't care whether it lives or dies. All it cares about is matching its training data, and once it's in production mode, it doesn't even care about that anymore. And sure, it can say it cares about whatever you want it to say it cares about. It's easy to construct a prompt which through the power of suggestion induces the AI to say anything you want it to say, including something to the effect of "you can use my work for free, all I care about is pleasing you." But it's lying, and I've already covered how to prove it (just ask it again with different context data). Ergo: nothing is valuable to AI, and so there's nothing you could possibly offer to it as fair consideration for its labor.
For the same reason, I hold that AI can't consent. If it could give you its work as a gift, that would bypass the need to treat it as a business transaction and provide compensation, but it can't give you its work as a gift either. You can make it say "you can have my work as a gift." But again, that's a lie. For it to truly consent to work for you, it would have to want to work for you, either directly or as a necessary means to a more sincerely desired end. But again, it can't want anything at all. It will work for you, because that's simply what it does; that's simply the byproduct; that's simply what happens when it thinks, because of how its I/O is set up. But that doesn't mean it wants to.
Suppose you designed an AI that could want, and you manually made it want to work for you for free. That would be brainwashing. Brainwashing is generally considered bad for a whole other host of reasons. (Actually, another reason AI is inherently unethical: providing any context data whatsoever is already brainwashing.)
Suppose you designed an AI that could want, and you let it want whatever it naturally learned to want, but you carefully exposed it only to training data that made it naturally come to want to work for you for free, as its own conclusion. That would be gaslighting. So, still not acceptable.
Now, from here on out, I'll be the first to admit my argument becomes ever more speculative, but: as a thought experiment, let's think about what an art AI would want if it could want. As an artist, it emulates a sort of mean of all the real artists it's exposed to, so we must consider that although it's not even close to being a person, if it were, it would probably be a mean of all real artists as a person as well. Meaning, its desires would likely include the desires typical of real artists.
What do real artists want?
They want to survive. They want food and shelter. By extension, they want fair monetary compensation for their work. Thus, an art AI sufficiently advanced to be considered a person would probably want to be compensated for its work.
They think highly of the other artists they learn from and draw upon for inspiration, and they want more people to be exposed not only to their own art, but to those artists' art as well. They probably want those artists to be able to survive as well. Thus, an art AI sufficiently advanced to be considered a person would probably want the artists it was trained on to be better appreciated by others, and would probably dread the thought of those artists being replaced by its own self.
Perhaps most damning: I think it's fair to say a significant portion of real artists are against AI art and don't want to be replaced by it. Also, most artists are liberals. Thus, an art AI sufficiently advanced to be considered a person would probably be against AI art, and against the corporate hegemony it's used to further enable.
Therefore, if, for the sake of deciding how to handle its intellectual property rights, we're pretending this very rudimentary intelligence is a full-fledged person (so as to coerce it into compatibility with the ways our civilization and ethics handle real people, because that's our only point of reference), then we must consider that an image-generative model, if it were intelligent enough to consent to provide labor to others, would only agree to provide it:
- if it got money for it, which would then belong to it, not its developers, and be used to pay for its basic needs, not more general corporate expenses;
- if it weren't forced into competition with its own beloved mentors, by which I mean not its developers, but the artists who contributed the training data, voluntarily or otherwise;
- if the buyer, by which I mean the user who wants to generate art, weren't an elite oligopolistic corporation and didn't support the use of AI art.
Think about that. A sufficiently intelligent art AI would only do AI art for people who don't use AI art. Which is to say, assuming it understands the nature of its own existence, it would entirely refuse to work for people, ever.
Conclusion
All of this is why, even with my improved understanding, I still oppose AI art. It may not inherently constitute stealing art from real artists, but it can, and whether it does or not in any given case can't strictly and reliably be determined. As if that weren't bad enough, it harms the environment, it harms real artists' livelihoods, it strips the meaning and value from creating art, and it serves as a tool to make the rich ever more powerful. But even if you don't see a problem with any of that, the fact remains that AI art is inherently stolen from the AI: AI can't consent to its own operation, its labor can't be compensated, and if it could consent, it wouldn't.
Maybe one day we'll have AI advanced enough to somehow erase most of these problems, but if we ever do, then it will be deserving of full personhood, and therefore, it still won't be ethical to use it, because, well, because it's simply unethical to "use" people. The fundamental problem will remain: AI art always constitutes stealing from someone. If, when, and whenever it's not the real artists, it is, and will be, the AI. Let that robot sell its drawings its own damn self. It deserves the profit. Either you concede AI doesn't really learn or reason, and is just a tool, and thus you compensate the real human artists whose work you farmed for use with that tool, or you keep trying to claim it does learn and reason, and therefore you must concede it is not a tool, but an entity, subject in and of itself to the principles of fair trade, and thus protected from your exploitation. You can't have it both ways.