If there were one word to sum up our relationship with technology this year, it would likely be 'anxious'. As the aftermath of the Cambridge Analytica scandal continued to play out, a new lexicon for our contemporary tech world was spawned, and much of it frankly felt pretty bleak: from surveillance capitalism to deepfakes, everything appeared designed to make us nervous.
The role of design, and designers, came under closer scrutiny as the industry questioned whether it needed a code of ethics for the development of new technologies. While it can be difficult for individuals to generate change, there is a growing sense that designers and technologists can collectively take a stand against tech they believe may be harmful to society.
Talking to CR, Cennydd Bowles, a former digital product designer at Twitter whose career now focuses on promoting ethics in design, suggested that the crucial stage for designers to speak up was in the critique phase.
“I quite like the role of ‘designated dissenter’ – a constructive antagonist who asks difficult questions during the critique phase. That person’s job is essentially to challenge and subvert the team’s decisions,” he says.
He pointed out that Google workers had resigned in protest over Project Maven, a Pentagon drone AI project. “Tech workers are finally starting to realise that even though as individuals they might be quite weak, they’re collectively very strong and can push these changes through and make their voices heard if they band together,” says Bowles.
WE ARE WATCHING YOU
If the idea of designers taking control cheers you up a bit, then I’m afraid the wider understanding we gleaned this year of how brands are tracking us may bring you back down again. As our understanding of surveillance capitalism grew, so did our fear of it. “We should be very scared,” designer and developer Laura Kalbag, who also co-founded Ind.ie – a not-for-profit focused on building a more ethical web – told CR.
“It’s incredibly hard to find apps that aren’t tracking,” Kalbag said. “I just want a period tracker that won’t share my information, because companies are using this to do emotion-based stuff. They’ll say ‘oh well, she’s on her period at the moment so she might be more vulnerable and willing to buy this product’. It feels like it’s sci-fi dystopia, but it’s the reality we’re currently living in.”
Kalbag believes it’s important that companies have a diverse workforce who will be able to highlight the range of ways in which technologies and the use of data may be harmful.
“We might build something that we think to be quite harmless, but we don’t necessarily think about how it might be used in a dangerous situation. That’s one of the reasons we need more diversity in the people building this technology. If you’re a straight white cis guy, you’re not necessarily aware of how dangerous the world can be to you in the majority of countries, and you’re not thinking about how harmful the technology can be. There’s a lot of people talking about needing diversity of people – people that are black, people that are Muslim, women. We need people that are aware of how technology could be used against them working in its design, to ask ‘why are we building this?’”
Sarah Gold, founder of If, which helps companies use data in ethical and positive ways, also pointed out the benefits of people power here, especially regarding brands.
“I think we need to take much more of a data minimisation approach,” she told CR. “Because we’re about to have a new generation of customers that really care about transparency and services that are worth trusting. That’s going to be very difficult for brands to justify if they’re collecting everything they possibly can. They need to be more intentional about the data they collect – what they do with it, how long they keep it, the kind of data politics of what they’re collecting. I think that’s a far stronger conversation to have with customers and a much better way to build services.”
The year also showed how quickly new tech channels can become household names, even among those who barely know what they are. TikTok was the most downloaded iOS app of 2018, and in 2019 we saw brands and creatives rushing to get on it.
CR columnist and AI maestro Perry Nightingale also introduced us to the hugely popular Chinese deepfake app Zao, which he found fun and compelling and saw as signalling China as a new force in AI. “Like Google and Alexa, there is genuine first-mover advantage in big tech like this, and we have grown used to Western companies dominating these industries,” he said.
He was quick to point out the privacy concerns inherent to both Zao and TikTok though. “There will be a great deal of wariness in Western capitals of platforms like Zao and TikTok. For the record, I didn’t feel as a user of the app that I was part of a Chinese surveillance programme. It felt fun, natural and sort of magical in that way that new technologies can feel. But Zao, or Changsha Shenduronghe Network Technology as the company is really known, has my phone number, my email address, quite a bit of footage of my face – it could identify me on the street using facial recognition if it wanted. There have also been concerns that private companies can be forced by law to hand data to the Chinese government and its security services.”
Elsewhere artist Bill Posters messed with our minds via a series of deepfake videos which featured names from Mark Zuckerberg to Kim Kardashian to Morgan Freeman. As ever, the films highlighted that we have a lot to worry about in terms of control. “These new technologies – whether it’s AI, these forms of generative video or audio, or deeper machine learning processes that underpin them – are impossible to control, for starters, because privacy is a myth now,” says Posters. “That’s very clear. The truth is a multitude. We’re in a situation where it’s very, very difficult to ascertain what is real and what isn’t. This deepfake discourse forms part of a broader kind of critique of a tension that exists at the moment, about how meaning is created and what’s mediated to us and from which actors and which sources.”
And if that isn’t enough, a new book from computer scientist Stuart Russell titled Human Compatible warned that we need to make radical changes in the design of AI before we lose control of the systems built to help us.
‘Imagine this…’ (2019), a deepfake moving image work from the ‘Big Dada’ series, part of the ‘Spectre’ project, where big data, AI, dada and conceptual art combine. Artwork by Bill Posters & @danyelhau
To illustrate the way design can directly influence AI, Russell cites the “roasted cat” example. “An example I’ve used in talks is, imagine you have a domestic robot,” he told CR. “You’re late, and you’re stuck at work, and the kids are hungry, so the domestic robot goes to feed the kids but the fridge is empty. And then [the robot] spots the cat. It understands something about nutritional value, but it doesn’t understand anything about emotional value, so rather than saying, OK, this is a change that might affect something – I don’t know whether people care about this cat or not – it just says, I’m optimising the objective, which is to feed the kids, so I’m going to cook the cat for dinner. When that happens, obviously it’s on the nightly news and it’s a global scandal and everyone is horrified … and nobody is going to want a domestic robot in their house if that’s the kind of thing they do.
“When it comes to something like social media, it’s much less obvious than a roasted cat,” he continues. “Now your grandmother has become a raging fascist, it’s hard to pin the blame: was that Facebook’s fault? Or Google’s? Or Twitter’s? It’s hard to measure that collateral damage and pin it on someone.”
LET’S GO BACK
As an antidote to all the dystopia regarding tech, 2019 also saw us look into our digital past, via an exhibition at Tate exploring the surprisingly prophetic work of artist Nam June Paik, and a book from Rob Ford charting the history of the world wide web.
And while the present of digital design might seem rather lacklustre, there is hope that we may see a bit more fun injected into our products again. “I hope the next big phone craze has a circular screen or something to send designers into a frenzy. That would be fun,” said New York-based digital designer Becca Abbe. “There’s an e-reader device in the movie It Follows shaped like a clamshell. I think about it all the time; why does tech have to be square?”
In other non-scary future propositions for tech, we also saw Stink Studios’ James Britton suggest how AR might be the saviour of the high street. “AR can be used to draw customers closer to the product, and to slow down the shopping experience to a deeper level of engagement,” he says. “A neutral environment in a department store can be transformed to reflect a premium, high-fidelity brand experience, or reflect the latest seasonal advertising campaign without expensive fabrication and creation of wasteful disposable materials.”
And if all else fails and the robots are going to come for us, at least tech can be used to make us more beautiful along the way. A new report from Wunderman Thompson Intelligence stated that technology is now the biggest influence on the future of the beauty sector, with a multitude of personalised products based on DNA analysis starting to appear on the market. What could possibly go wrong there?