Why we need to rethink AI – before it’s too late

In his new book Human Compatible, computer scientist Stuart Russell argues that it’s time for a radical shift in our approach to designing AI, before we lose control of systems that were built to help us

Stuart Russell has spent four decades working on AI. He is the co-author of the world's bestselling textbook on the subject, used in 1400 universities around the world, and has been teaching AI since the 1980s.

Given that he’s built a career in helping people learn how to create intelligent systems, it might come as a surprise that Russell’s latest book, Human Compatible, explains how this tech could spell the end of the human race.

Human Compatible sets out various ways in which AI could benefit humanity – whether helping us devise a cure for cancer, or improving literacy levels in developing countries through the use of artificial tutors. But it also presents a much more dystopian alternative – one where AI can be used to control and coerce people on a mass scale. 

Russell doesn’t believe we’ll be under the control of robots any time soon – as he notes in Human Compatible, a number of technological breakthroughs need to take place before that will be possible. But he believes that the AI community has failed to prepare for what might happen if we manage to create systems with human-level or superhuman intelligence – and that failure could have disastrous consequences.

If we want to create systems that benefit humans, Russell believes we need a fundamental shift in how we approach designing AI – one based on building machines that serve humanity’s objectives rather than those of companies or individuals. His book sets out a vision for how this could be done, and provides an urgent call to action for brands, designers and developers involved in building AI. We spoke to Russell to find out more about the risks posed by AI, and how we can design to prevent them.