For the past couple of years I have been teaching myself to code AI. This year I’m putting my new skills to use on an exciting experiment for a client, teaming up with Adobe to build an AI assistant that will help people create beautiful design.
We’ve spent a lot of time discussing how we are going to tackle the project. One route is to get thousands of layouts – some good, many bad – and label them, to teach the AI to distinguish between the two. With this ‘model’ the AI can capture all the little patterns that make for good design and apply them to a document.
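To make that idea concrete, here is a toy sketch of the labelling approach (not the actual system we are building with Adobe). Each layout is reduced to a couple of numeric features – the feature names and numbers below are invented for illustration – and a simple perceptron learns to separate the ‘good’ examples from the ‘bad’ ones:

```python
# Toy sketch: learn to tell 'good' layouts from 'bad' ones using
# labelled examples. Features and labels here are made up.

def train_perceptron(examples, epochs=50, lr=0.1):
    """Learn weights for binary classification from (features, label) pairs."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            predicted = 1 if score > 0 else 0
            error = label - predicted  # 0 when the guess was right
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def classify(weights, bias, features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "good" if score > 0 else "bad"

# Hypothetical features per layout: [whitespace_ratio, alignment_score]
labelled_layouts = [
    ([0.40, 0.9], 1),  # balanced whitespace, well aligned -> good
    ([0.45, 0.8], 1),
    ([0.05, 0.2], 0),  # cramped and messy -> bad
    ([0.10, 0.3], 0),
]

weights, bias = train_perceptron(labelled_layouts)
print(classify(weights, bias, [0.42, 0.85]))  # a layout resembling the good ones
```

A real system would use far richer features and far more examples, but the shape of the idea is the same: the labels do the teaching.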
Google recently used this technique with a photography competition. It fed all the entries into a data model to train it to rank photos based on how visually pleasing they were – the model could also apply enhancements to images based on what it had learned.
Today’s AI revolution is fuelled by the hardware, the giant datasets and the techniques that provide the tools to identify these kinds of intricate patterns.
For example, we’re able to take an audio stream and detect what a person has said based on massive models built from an almost unimaginable number of recorded conversations. At Grey we used this for an award-winning app called Swear Jar for the UK charity Comic Relief, which detected when people used bad language and donated money automatically to charity.
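Once the speech recognition has turned audio into a transcript, the rest of a Swear Jar-style app is simple word matching. The sketch below is a stand-in for illustration only – the word list, donation rate, and function names are invented, and the mild words stand in for actual bad language:

```python
# Sketch of the downstream logic once speech has been transcribed:
# match words against a flagged list and tally a donation.
import re

FLAGGED_WORDS = {"darn", "blast", "bother"}  # stand-ins for bad language
DONATION_PER_WORD = 0.50  # pounds per offence, a hypothetical rate

def tally_donations(transcript):
    """Count flagged words in a transcript and return the total owed."""
    words = re.findall(r"[a-z']+", transcript.lower())
    hits = [w for w in words if w in FLAGGED_WORDS]
    return len(hits) * DONATION_PER_WORD

print(tally_donations("Oh blast! Darn it, the printer is jammed again."))
```

The hard part, of course, is the speech recognition itself – and that is where those unimaginably large datasets of recorded conversation come in.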
But, in our unpredictable world, unimaginably huge datasets aren’t always enough, as self-driving cars and smart speakers have demonstrated. Google told us at SXSW that its self-driving car once had to deal with an old woman in a motorised wheelchair chasing a duck down the street with a broom. It simply froze (the car, not the duck).