Beware the Double-Edged Sword That Is AI Anthropomorphism
by Kashyap Kompella
Anthropomorphism refers to the attribution of human traits to nonhumans. The word is a mouthful, but roughly speaking, it is projection. It’s not a new phenomenon; throughout history, gods, animals, and objects have been anthropomorphized. In the modern age, human characteristics have been projected onto technology and its artifacts. For example, what comes to mind when you think of robots? Humanoid robots, not industrial machines, right? Humanizing something makes it more relatable and makes complex technologies easier to grok. Not surprisingly, AI tools are being viewed through the prism of anthropomorphism. Just look at the various news items related to AI, and you’ll find numerous examples of this. It’s particularly true of digital assistants (such as Siri and Alexa) and chatbots (such as ChatGPT and Bard). In fact, the Turing test, a classic method for assessing whether a machine can exhibit intelligent behavior, judges a bot’s ability to converse in natural language that is indistinguishable from human speech.
To be sure, there are certain benefits to anthropomorphizing technology, but when it comes to AI, we should resist the temptation. Thinking about AI through analogies has its limitations. Artificial neural networks, a particular type of data processing structure, are said to be akin to the architecture of the human brain, but that analogy goes only so far and is not an accurate reflection of how our brains actually work. Yet because AI is a topic of great interest to the general public, AI software has been mythologized in the popular imagination as equivalent to human intelligence. The names we give such tools (Alexa, Siri, Bard, etc.), the phrases they use (including “How can I help you?” and “I can’t do that.”), and the data they are trained on are all controlled by humans. This anthropomorphic language stands in stark contrast to the interaction messages of other software and apps.
Deep learning and generative AI techniques are very impressive and are excellent mimics of human language and faculties—so much so that many people believe that AI is (or is becoming) sentient, that it has (or is developing) consciousness, and that our nightmare scenarios from science fiction movies about the AI apocalypse are upon us.
THE POPULAR IMAGINATION
Incorrect narratives take shape in the popular imagination, and it’s a slippery slope from such misperceptions to misguided prescriptions for handling AI risks. At the individual level, anthropomorphizing AI chatbots can lead to extreme behaviors, such as unreasonable levels of trust in AI software. There are even tragic instances, such as chatbot conversations leading an individual to suicide, as reported by Euronews.
At the societal level, an article in TIME, by Eliezer Yudkowsky of the Machine Intelligence Research Institute, advocates for a moratorium on AI development, stating, “Shut it all down.” He writes:
To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing. …
We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.
AI is a powerful technology with enormous potential, and we need to be clear-eyed about what it is and what it is not. Anthropomorphism has benefits in certain domains, such as the design of companion robots for elder care, and such human-machine interaction is a specialist field. But if we are not careful, it can lead to unrealistic expectations and hype, which in turn lead to investment in unviable AI projects and suboptimal outcomes.
When we misunderstand the characteristics of AI systems, we end up focusing on policies and regulation for an imaginary AI. And that is starting to happen. We are shifting focus to hyperbolic risks, such as human extinction due to AI, instead of staying intent on mitigating the clear and present dangers of AI systems, including AI bias and the real harms of automated systems that discriminate. These systems are only as fallible as the datasets they’re trained on. We can all benefit from understanding our tendency to humanize AI and from seeing it for what it is: an analogy, not an absolute.