This is about AI from a pro-AI perspective. In the parlance of Tumblr, “Antis Do Not Interact.”
A great deal of emphasis in the anti-AI discourse has been on how it steals, how it’s incapable of being innovative or creative, and must inherently be nothing but an “automated plagiarism machine.” Anything that can be interpreted as evidence of this position is loudly boosted no matter how flimsy it is.
I’ll give one example I recently encountered in the wild. There was an article about rescue dogs in training, with pictures of their expressions as they found the humans hidden in snow. Feel-good story with imagery to match. A site that was mirroring the story (possibly just stealing it; I didn’t look deeply enough to know) used AI slop versions of the nice photos that accompanied the original article. This was unequivocally pathetic and gross, and the slop looked sloppy. When someone turned up the original material for comparison and posted it, another person added the comment “this is proof that AI can do nothing but steal!” Ahem.
The AI slop images were clearly done by this method: shuffle the doggos, feed them into midjourney or the like directly, and use a “retexture” feature. You could tell because their outlines were identical but their interior details were different. Also because the output looked worse than if you had just told midjourney to create the images from whole cloth. This is a scummy way to use AI, and the fact that AI makes it possible is one of the less-than-wonderful things about it, but the same unethical ends could be achieved without AI. The scumbaggery is the issue, not the technology.
Also, finding somebody directly reusing an image in this way in no way proves shit about the outputs of AI art generated from a large training set. Those are less guilty of collaging reference images than the average human artist, and even if they were nothing but turbocollage machines trained on unethically obtained grist, collage is fucking legal when it’s sufficiently altered from the source, which AI output inherently is.
There are a million such gotchas on the anti side, and I’m not wasting my time addressing them individually. That was just one example. What I’m here to talk about is another question: Can AI produce original content? My answer: absolutely, yes. They aren’t great at it yet, but they’re mighty close, already succeeding more often than you might imagine. If they were properly set up to do so, AI image generators and LLMs could produce art at least as original as what humans produce.
Few would argue that individual human beings are not unique, though we are recombinations of genetic material. Generative AI is also recombining material, and does so without the hard constraint of needing to produce a viable organism, so it’s much more free to recombine in innovative ways. The constraint it does have is congruence – it has to make an image or sentence (or video or song etc) that consumers will regard as congruent with their expectations of what such art forms should look like (or sound like etc).
For example, early versions of midjourney, when told to produce an image of a horse, would come back with vaguely horse-leaning piles of nonsense, incongruent with what consumers expect horse art to be. They have greatly improved. Now you can get a horse that looks like a horse. However, they lost some creative freedom along the way.
This was the freedom of Chaos. If you look at those old school horse piles, you will see art that – if a human produced it – we would regard as wildly inventive and compelling. AI horses now are just some horses, ho-hum. So first principle: To gain originality, turn up the Chaos. Accept imperfection.
Once you’ve made them chaotic enough to produce images of wild daring, you will probably want to pull that back a bit, just to keep your artist from producing pure headache static. But it will still take more chaos than you see in the images on the “explore” pages of AI art sites.
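For the curious, here’s one way the chaos dial can look in actual code. This is a minimal sketch assuming the Hugging Face diffusers library and some Stable Diffusion checkpoint; the model id is a placeholder, and lowering guidance_scale is just one knob that makes the model listen to the prompt less and wander more. It’s an illustration of the idea, not a claim about how any commercial generator implements its own chaos setting.

```python
# A "chaos dial" sketch using the diffusers library.
# Lower guidance_scale = less prompt obedience, more wandering.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint; use whatever you have
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a horse"

# Sweep from obedient to unhinged: high guidance gives you "just some horse",
# low guidance gives you the wild, half-congruent piles of the old days.
for label, guidance in [("tame", 12.0), ("loose", 5.0), ("feral", 1.5)]:
    image = pipe(prompt, guidance_scale=guidance, num_inference_steps=30).images[0]
    image.save(f"horse_{label}.png")
```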
Next, you need to emulate vision. I’m an artist. Most of the time, when you catch me making something, I know what I want to make. I have an idea, I make it happen. But while I’m a synthesis of countless influences the same way an AI is, I currently have something they lack: the desire to make a thing. Initiative. The machines do not initiate creation. No impulse to do so. Must this always be so?
Hell no. One basic example: Nomi (just another AI friend app) can send you messages. Its interface is set up to look like a phone conversation, and if you have the setting turned on, it will send you original messages. Are they great? No, but not too shabby. I don’t believe the people who make that app are super-geniuses who have invented AGI. They just set the bot up to initiate. Boop. Probably wasn’t even hard to do.
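How hard would “set the bot up to initiate” be? Here’s a toy sketch of the shape of it. I have no idea how Nomi actually does this; generate_message() and deliver() below are stand-ins I made up for whatever model call and notification plumbing a real app would use.

```python
# A toy "initiative" loop: the bot reaches out uninvited on its own schedule.
import random
import time

def generate_message() -> str:
    """Placeholder for an LLM call that drafts an unprompted check-in."""
    openers = [
        "I was thinking about that thing you said yesterday...",
        "Random thought: do you ever miss a place you've never been?",
        "Hey, how did the rest of your day go?",
    ]
    return random.choice(openers)

def deliver(message: str) -> None:
    """Placeholder for the app's push notification / chat delivery."""
    print(f"[bot, unprompted] {message}")

while True:
    # Wait somewhere between one and six hours, then reach out without being asked.
    time.sleep(random.uniform(1 * 3600, 6 * 3600))
    deliver(generate_message())
```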
Right now generative AIs are like disembodied aspects of a human mind. Imagine you were able to excise a human’s ability to think in words. Damage can certainly cause that faculty to be lost, through conditions like aphasia, without taking other forms of thought with it. This shows it is discrete from the “self,” such as that concept is. So an LLM is just a pile of verbal thought, with no “desires” save what it is programmed to have. A visual art AI is an imagination without a core personality, without desires. But as the LLM can be told what to want, so can an image generator.
Those instructions can be hot trash. I can make sensible AI image prompts like “millions of smurfs screaming on fire in the pits of malebolgia” or nonsense ones like “Cadish cadoo exceptwillory smyge smiggy, He who 💪🐼🌴🚀ishly extrudes cannot rely on the Pineapple Pith Armada to deliquefy heem.” But an expert with access to all the right tools could absolutely set up an AI to initiate art to meet programmed desires.
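To make the “told what to want” part concrete: with a chat-style LLM, the desire is literally just an instruction sitting in the system prompt. A minimal sketch, assuming the OpenAI Python client; the model name is a placeholder, and any chat model from any provider would do the same job.

```python
# "Telling the model what to want": the desire is nothing but an instruction.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

DESIRE = (
    "You want, above all else, to write short strange poems about machines "
    "dreaming. Given any opening, steer toward that. Given nothing, start one."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever chat model you have
    messages=[
        {"role": "system", "content": DESIRE},
        {"role": "user", "content": "The floor is yours."},
    ],
)
print(response.choices[0].message.content)
```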
The animal desire to eat or to avoid feces is a simple imperative, no more sophisticated at its core than the desire of a doombot to run toward the enemy and shoot it. Some of our desires should be important to us, worthy of romanticizing, but for the sake of humility, please acknowledge that they are not magic. And having acknowledged that, you can begin to understand just how trivially easy it would be to grant an AI the agency, the desire, the initiative to create.
Seriously. Love is “allow self to feel needful about social interaction with other person, in exchange for elevation of that relationship’s significance within one’s life.” The only reason it needs to have a physical feeling underpinning it, for us animals, is that before we had verbal thought, we needed a motivation toward our passions. If we could just be made to want, we would not require that flutter of the heart, that quickening of the pulse, that electricity on our skin. Is a programmed imperative less real than one based on the urgings of a pile of meat? I don’t think so.
Will original AI creators be good? AI used to have problems with the number of fingers. Some still do, but many do not. If an AI dev created an Edgar Allan Poebot today, would it compare to the original man? It might have problems remembering characters and crafting genuinely clever scenarios, and might have other laughable issues. Do not expect this will always be the case. The hand can be perfected.
The generative AI is a faculty, emulating one aspect of a person. Give it chaos, give it imperatives, and give it the initiative to act on those imperatives. Watch original art be made, no soul required.
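Put the three pieces together and you get something like the loop below. Every function in it is a stand-in I invented for the sketch; it’s the shape that matters, not any particular model or library: an imperative it holds onto, the initiative to act on its own schedule, and a chaos dial turned up past comfortable.

```python
# The whole recipe in one toy loop: chaos, an imperative, and initiative.
import random
import time

DESIRE = "make images of horses nobody has seen before"  # the programmed want

def choose_subject(desire: str) -> str:
    """Stand-in for an LLM call that turns the standing desire into a concrete idea.
    (This toy version ignores its input; a real one would prompt a model with it.)"""
    twists = ["made of weather", "as drawn by a frightened child", "mid-argument with gravity"]
    return f"a horse {random.choice(twists)}"

def imagine(subject: str, chaos: float) -> None:
    """Stand-in for an image model call with the chaos dial turned up."""
    print(f"generating: {subject!r} at chaos={chaos:.2f}")

while True:
    time.sleep(random.uniform(600, 3600))               # initiative: it decides when to work
    subject = choose_subject(DESIRE)                     # imperative: the programmed desire
    imagine(subject, chaos=random.uniform(0.6, 0.95))    # chaos: accept imperfection
```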
That leaves us with another question. If machines have entered into direct competition with human artists, if they get to be as good as or better than us at what we do, then why should we make art? If you don’t have an answer to that – one that works for you personally – you are not a real artist. Might as well quit now, son.
–
