Gender bias, AI and the politics of the voice

With growing concerns around female-voiced digital assistants, experimental musician Holly Herndon and Emil Asmussen, co-creator of genderless voice Q, talk to us about the importance of paving the way for new voices, figuratively and literally speaking

In the nine years since the first voice assistant hit the mainstream – Siri was introduced on the iPhone 4S back in 2011 – voice assistants have become ubiquitous, taking up residence in our homes, offices, cars and pockets. Our understanding of these devices has evolved rather more slowly: privacy has only recently become a widespread concern, and it has taken years for gender dynamics to be treated as a legitimate issue.

Last year, a UN study titled I’d Blush If I Could (taken from Siri’s response to being told ‘you’re a slut’) concluded that AI-powered voice assistants projected as young women help to perpetuate harmful gender biases. Many agree that these devices are usually characterised as subservient, domesticated, powerless and often unwillingly sexualised, and the implications are predictably damaging.

“I feel like I read somewhere that people were mistreating their female digital assistants and that was translating into the real world where they were then treating their human assistants worse,” recalls Berlin-based composer Holly Herndon. “They were used to yelling at their phone and it didn’t matter because it’s just a phone, and then they would do that same thing to a human.”

It comes at a time when tech is advancing and notions of gender and identity are broadening, yet the two still fail to come together in a world steeped in historical norms. “I’m coming from Germany where every object has a gender. That’s so intense! Why does a glass need a gender?” Herndon laughs.