One online commentator asked for teddy bears shopping for groceries in the style of ukiyo-e, and seemed impressed. Another wasn’t so happy; they’d ordered up an image of cats drinking soup in the style of Gustav Klimt (“The Kiss” is apparently a favourite picture). An earlier request, for a Formula One race on Mars in the style of Van Gogh, had produced something that looked like a rejected image for a home-produced cover of an album by a minor stoned-out 1970s band — but had apparently met with a reasonable amount of approval. Klimt’s soup-slurping cats, however, were not hitting the spot. “It didn’t look anything like ‘The Kiss’,” wailed the instigator.

Now why, I wonder, could that be? The Generative Artificial Intelligence (GAI) facility Dall-E-2, which generates images from a written description of a few words, should surely have been able to deliver the goods. Cats, soup, Klimt — where’s the problem?
Ever since I heard about Dall-E in its first incarnation a while ago, I knew that I couldn’t avoid it altogether for long. It has, after all, pretty much stolen my name. All that’s missing is a “Y”.
Now Dall-E-2 is with us and, according to devotees, much improved: more detailed, more nuanced, more responsive. I’ll take their word for it. What is certain is that it has sent ripples, if not shockwaves, through the community of creative people: does it challenge their reason for being, and our notions of what art should be?