Alexa, Siri, Cortana: Why are so many personal digital assistant (PDA) voices female?
This trending question about gender in PDAs is frequently directed to Professor Karl MacDorman, associate dean for academic affairs and associate professor of human-computer interaction at the Indiana University School of Informatics and Computing at IUPUI.
MacDorman and his research team have examined preferences for gender in synthesized voices, work that is part of his broader research on how people perceive humanness in social robots. In a study published in Computers in Human Behavior, "Does social desirability bias favor humans?", both women and men expressed an explicit preference for female synthesized voices, which they described as sounding "warmer" than male synthesized voices. On implicit measures, women also preferred female synthesized voices, while men showed no gender bias in their implicit responses.
Widely sought out for his expertise on artificial intelligence and social desirability bias in human-computer interaction, MacDorman is often quoted on the subject, most recently in the Wall Street Journal, Reader's Digest, GeekWire, and Wired.
MacDorman's research suggests that device designers should be wary of reinforcing gender stereotypes. "The use of implicit measures to detect social desirability bias could provide designers with information that more completely explains user preferences and better predicts user behavior," MacDorman said.
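As a rough illustration of what an "implicit measure" involves (a hypothetical sketch, not MacDorman's actual analysis): implicit-association studies typically compare response latencies between congruent and incongruent pairings and summarize the gap with a standardized difference score, loosely modeled here on Greenwald's D measure. The function name, data, and simplified scoring below are all illustrative assumptions.

```python
from statistics import mean, stdev

def d_score(congruent_rt, incongruent_rt):
    """Simplified IAT-style D score (illustrative, not the full
    Greenwald et al. algorithm): difference in mean response times
    between incongruent and congruent trials, divided by the
    standard deviation pooled across all trials."""
    pooled_sd = stdev(congruent_rt + incongruent_rt)
    return (mean(incongruent_rt) - mean(congruent_rt)) / pooled_sd

# Hypothetical response latencies in milliseconds
congruent = [520, 540, 510, 530, 560]
incongruent = [640, 610, 650, 600, 630]
print(round(d_score(congruent, incongruent), 2))  # prints 1.77
```

A positive score indicates slower responses on incongruent pairings, i.e., an implicit association between the paired categories; a score near zero, like the men's implicit results in the study, indicates no detectable bias.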
With better information, designers can create machine voices that are more appealing to users, which would benefit interactive voice response (IVR) systems, socially assistive robots, and many other applications of human-machine interaction.