I tried to use generative AI in my last designs, both for the aurochs and for the wild boar. First I tried the AI supplied with the latest version of Photoshop; since I am paying for that, I might as well use it. Then I tried Stable Diffusion because it is free. As for Midjourney, there is no way in hell I will pay for it, even if I could afford it.
I won’t post the pictures I got here; the internet does not need more AI-generated crap polluting it. Suffice it to say that whilst some results looked at least somewhat interesting, most were total crap and none were usable for my purposes. Right from the outset, I found that the AI did not actually contribute anything to my work at all.
I am one of those people who have some trouble visualizing things in their mind, especially human faces. I would be completely incapable of drawing the face of even the most beloved person in my life. That might sound odd coming from someone who doodled all the time and who used to be good enough at drawing to get into art school, where part of the entrance examination was drawing a Goethe bust, but that is the way it is. I generally need some kind of template to get started. However, as I found out with the AI, my inability to form a clear visual picture in my mind does not mean that I start without a clue as to how the result should look. And that is where I clashed with the AI and lost.
I got plenty of two-headed or six-legged animals, plenty of zebu- or bison-like animals instead of aurochs, and a lot of domesticated pig lookalikes instead of wild boars. The pose of the animal was never right, no matter what prompts I used. Even after I wrote “auroch running from left to right”, I got a picture of what looked like a pair of six-legged water buffaloes running from right to left. At one point the Photoshop AI even deleted one of the three generated pictures because the prompt “auroch bull charging” somehow triggered its anti-porn filter or something.
After a few days of faffing around, I found that my old process – which I described in my previous post – is actually the better way to go. Despite my poor visualization skills, I still have a pretty good idea of what I want to achieve and what the end result should look like. The AI did not help with that in any way. All I wanted it to do was generate a starting picture, and I did not even get that. I had zero control over what came out of it. It was a game of chance, and I have never enjoyed those.
No doubt with a lot of work and a supply of carefully selected images to train the AI myself, I might get some usable results, but I also got those by doing a simple Google photo search and making a biro sketch in a few minutes. And once I had the sketch, once I got started, I built on that.
I do not consider generative AI to be completely useless, but when it comes to creative work, for me it was a hindrance, not a plus. Sure, I used it to generate dozens of somewhat unique pictures, but although I was the one giving the prompts, I was in no meaningful way the one who actually created the results.
The results were pictorial endings to a game of Chinese whispers: sometimes amusing, sometimes interesting, but always wildly off the mark.