I mentioned the fascinating Science Café talk on Deep Learning. At the very end, there was a thought-provoking question raised by an artist in the audience, who asked whether such machines could create works of art. The speaker pondered the question and answered that, in his opinion, the answer was no. His reasoning was that in a work of art the artist is trying to convey something based on their life experiences and emotions, and a computer, however sophisticated and capable of learning, would not be able to draw upon such resources.
A machine that uses deep learning may be able, using the many artworks in its database, to ‘know’ what a painting or sculpture should look like and produce something novel, but it is not clear what meaning it is trying to convey. As an example, a computer that continually strings letters together randomly may, purely by accident, create the play Hamlet or what seems like an original play or novel. Would that be literature?
But while it is true that an artist is usually trying to convey some meaning through their work, there is also the view that once the work leaves the artist’s hands, the person viewing it creates their own meaning, and that this meaning is on a par with the meaning the artist was trying to convey, not subordinate to it. But this presupposes that the creator of the work had some meaning in mind, even if we are not told what it is.
Can the meaning of a work of art ever be completely disconnected from the intent of the artist? It may well be that, as a result of having a vast database of paintings of a particular genre that are labeled as such, say Impressionism, one could give a computer the simple command to “produce an impressionistic painting” without any instructions as to meaning, and it would produce an original one. But would we try to impute any meaning to it if we knew that it had been generated by an algorithm with large elements of randomness in the process?
Or suppose one encountered on a nature hike a piece of rock that had a beautiful smooth shape. If one thought that it had been molded by the elements, then it is unlikely that one would try to impute meaning to it but would just marvel at what nature can accidentally produce. But suppose someone later told you that the piece of rock had been carved by Henry Moore and placed there. Would that suddenly give the piece meaning that it previously lacked? In other words, if you were given what looks like a novel, play, painting, or sculpture and did not know how it was created, would it matter whether it had been made by a human or a machine?
I definitely don’t have an answer to this question and suspect that no answer will satisfy everyone. But the advent of Deep Learning is undoubtedly going to raise many questions such as these.
Marcus Ranum says
I am not so sure about that. Some art appears to me, as a viewer, not to have any meaning. Then it is hard to say an AI can’t make art because it can’t infuse a meaning that needn’t be there in the first place.
Also, even if we accept the premise that “AIs are dumb” and are merely re-mixing other people’s experiences, and thus cannot be creative -- re-mixing other people’s experiences and art is something that artists do. For example, one can look at Picasso’s famous painting as a “remixing of events that occurred in Spain.” I’m a bit cautious about assuming human creativity is a big deal; we may find that creative AIs arrive faster than we expect them to. Since one of the limiting factors on self-training AI is having a set of ‘victory conditions’ by which it can judge itself, we are currently in the process of building huge numbers of feedback-collecting rating systems that the next generation of AIs will train to win. (YouTube feedback ‘bot, anyone?)
As far as knowing whether a work is by a human or an AI, it seems to me to be the same question as whether I would rather have an OK real Picasso or a brilliant fake. I can see how someone who invests in art would prefer the actual Picasso, but if it’s something I really like, I don’t think its origin matters.
Here is a scenario that I find a bit worrisome: suppose someone codes an AI that writes marketing jingles and assigns it to a feedback loop where it’s plugged into a metric (“popular hummability”) and it just keeps optimizing for more and more hummable marketing jingles, until it’s impossible to go anywhere or do anything without encountering these incredibly hummable marketing jingles -- it’s a basic feedback loop. I assume that eventually people will learn to dislike those marketing jingles because they are so annoyingly hummable, but once the algorithm’s in place and the feedback loop is set, it’s going to just come up with another angle that’s pleasing to the new ideal of hummable, and so on. If we see marketing as a co-evolution between marketers, who are trying to capture our eyeballs, and us (AKA “the target market”), who are trying to ignore the marketing, it seems like we’d end up with highly optimized marketing that had evolved to be unavoidable.
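To make that feedback-loop idea concrete, here is a minimal toy sketch. Everything in it is hypothetical: the note alphabet, the mutation step, and audience_score() are stand-ins for whatever “hummability” signal a real system would collect from listeners.

```python
import random

NOTES = "CDEFGAB"   # toy 'note alphabet' for the jingle
LENGTH = 8          # jingle length, chosen arbitrarily

def audience_score(jingle, taste):
    """Stand-in metric: higher when the jingle matches the audience's current taste."""
    return sum(1 for a, b in zip(jingle, taste) if a == b)

def mutate(jingle):
    """Randomly tweak one note -- the only 'creativity' in the loop."""
    i = random.randrange(len(jingle))
    return jingle[:i] + [random.choice(NOTES)] + jingle[i + 1:]

def run_loop(rounds=5000):
    taste = [random.choice(NOTES) for _ in range(LENGTH)]   # what listeners currently reward
    jingle = [random.choice(NOTES) for _ in range(LENGTH)]  # the AI's current jingle
    best = audience_score(jingle, taste)
    for step in range(rounds):
        candidate = mutate(jingle)
        score = audience_score(candidate, taste)
        if score >= best:              # keep anything the metric rewards
            jingle, best = candidate, score
        if step % 1000 == 999:         # tastes drift as listeners tire of the jingle...
            taste[random.randrange(LENGTH)] = random.choice(NOTES)
            best = audience_score(jingle, taste)  # ...and the loop simply re-optimizes
    return "".join(jingle), best

if __name__ == "__main__":
    print(run_loop())
```

The point is that nothing in the loop ever needs to understand the jingle: it only needs the metric, and it will chase that metric wherever the audience’s reactions drag it.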
John Morales says
Is there art in artificial intelligence?
(There are three words in the English language…)
Lofty says
Who knows. There is human-made art I don’t understand and I’m sure that AI will create “art” that I won’t understand.
deepak shetty says
https://deepdreamgenerator.com/#gallery
hyphenman says
I’ll defer to Clarke’s first law and simply say, “yet.”
nickmagerl says
Our conversations here seem to revolve around the belief that AI can’t put meaning into its work. Sounds to me like a tautology lurks in that belief. If an AI can sense its environment, and if those sensations can affect its actions, then meaning is certainly possible.
sonofrojblake says
Two hundred years from now -- maybe sooner, maybe much sooner -- that’s what the first line of this blog post is going to sound like. One day soon, it’s not just going to be taken for granted that machines can think; it’s going to be offensive to even suggest that they can’t.
woodsong says
Speaking of curious objects that raise the question of “Is it natural or manmade”, I saw this post on the Fossil Forum not long ago: http://www.thefossilforum.com/index.php?/topic/70152-we-cant-figger-this-one-out/
On first look, it appears to be a piece of art (found in an Oregon creekbed) in a Mesoamerican style. Examination by experts gave a verdict of “natural”: it’s a septarian nodule formed in an environment where the mud tends to form spiral cracks as it dries -- rare, but not unknown.
It still looks like a carving to me!
flex says
@Marcus #1, you reminded me of one of my favorite Arthur C. Clarke short stories, from Tales from the White Hart, in which a computer continued to optimize an ear-worm (not Clarke’s term) until it created a tune that made people catatonic because their brains were completely engrossed with it. I wouldn’t be surprised if this idea influenced Monty Python (or was it The Goodies?) for the sketch about the military joke.
But back to the OP, the concept that art needs to have meaning for both the creator and the viewer is an interesting one. I can think of instances where either is lacking. For example, while I acknowledge that zoo animals are able to manipulate a brush and apply pigment to canvas, I question whether these animals really do have the abstract thought necessary to progress beyond representational art to abstract art. Similarly, there are a lot of patrons of art who apparently have little idea of what the artist is actually trying to represent, but assign their own meaning (or no meaning) to a particular piece.
While I cannot claim to be knowledgeable about the art world, I think that while the artist has an important input into a particular piece of artwork, in the end it is the viewer who determines whether the artwork is successful or not. If the artist, in some of the more abstract modern art, is intending to convey a sense of depth through color (as an uncle of mine, once removed, did for years), whether they succeed is less important than what the viewer of the artwork feels or sees in the painting.
What does this mean for AI? Well, while I can’t say for certain, at the end of the day I suspect that you could program an AI to produce a piece of art by saying, “Make a piece of art in the style of Kandinsky, but show the distance of modern society from rural society,” and the AI would be able to create such a piece. The artwork would, to the viewer, meet their expectations. It may even be more accessible than a genuine Kandinsky. So many of the people who would like to display art could find artwork, or have an AI generate artwork, that meets their desires.
But that only meets the expectations of the viewer of art. The artist’s expectations are a different matter, and I really doubt that an AI will be able to simulate the organic, unexpected, disturbing, and surreal aspects of an artist. I can imagine an AI simulating Saki’s writings; I can’t imagine an AI coming up with Sredni Vashtar without a seed.
sonofrojblake says