AI-generated imagery is the internet’s new obsession. Models including Dall-E 2, Dall-E Mini and Midjourney – all of which turn text prompts into images – have sparked a sudden proliferation of weird and wonderful pictures. But with that has come a fresh set of worries from the creative world, in particular image-makers wondering if they’re soon to be out of a job. At the time of writing, only Dall-E Mini is available to the general public – a model whose output is significantly inferior to Dall-E 2’s – but it’s hard not to wonder what will happen once the fully fledged models are unleashed on the world.
Bas van de Poel – co-founder and innovation director at design studio Modem and former creative director at Ikea research lab Space10 – was one of the chosen few to receive early access to Dall-E 2. There’s currently a waitlist to access the AI system, which can produce new imagery, edit existing images, and create variations of an original visual. It does all of this using a process called diffusion, explained in brief by its creator, the AI research company OpenAI.
The studio used it to create Dall-E 2 Dream of Haikus – a series of verses, each paired with imagery the AI model generated from the accompanying poem. “It’s interesting, because you basically have this empty input field, and it’s quite daunting,” says van de Poel. “It’s almost like having that blank sheet of paper in front of you, because you can literally ask it anything and it will generate it … when reading poetry or haikus you get this image in your head, and it’s interesting [to explore] what image an AI gets in its head when it reads or interprets it.”