AI doesn’t always need to be more human
There’s no shortage of ethical, moral, and even legal debates raging right now over artificial intelligence’s mimicry of humanity. As technology advances, companies continue to push the boundaries with virtual assistants and conversational AI, striving in most cases to more closely approximate real-life person-to-person interactions. The implication is that “more human” is better.
But that’s not necessarily the case.
AI doesn’t need to be more human to serve human needs. It’s time for companies to stop obsessing over how closely their AI approximates real people and start focusing on the real strengths that this transformative technology can bring to consumers, businesses, and society.
Our compulsion to personify
The desire to strive for more humanity within technology is understandable. As a species, we’ve long taken pleasure in the personification of animals and inanimate objects, whether it’s chuckling when you see a dog wearing a tiny top hat or doodling a smiley face on a steamy bathroom mirror. Such small modifications can cause people to instinctively react more warmly to an otherwise non-human entity. In fact, a team of researchers in the UK found that simply attaching an image of eyeballs to a supermarket donation bucket prompted a 48 percent increase in contributions.
On the AI side, consider Magic Leap’s Mica, a shockingly lifelike and responsive virtual assistant who makes eye contact, smiles, and even yawns. A company spokesperson says Mica represents Magic Leap’s effort “to see how far we could push systems to create digital human representations.” But to what end? Just because people might toss more spare change into a donation bucket with eyes doesn’t mean personification of lifeless objects or concepts is always a good idea. In fact, it’s more likely to backfire on companies than you might think.
The perils of humanizing AI
Already, companies that employ automation to replace human interactions are having to contend with legal questions about how these technologies present themselves. In California, Governor Jerry Brown signed a new law that, when it goes into effect this summer, will require companies to disclose whether they are using automation to communicate with the public. While the intent of the law is to clamp down on bots designed to deceive rather than assist, its effects could be far-reaching. But there are far more practical reasons why companies should rethink just how hard they’re trying to make their AI seem human. Consider:
False expectations. In the race to showcase AI innovation, the market has been flooded with single-task, low-utility chatbots with limited capabilities. While it’s fine to employ such technology for basic tasks, humanizing these applications can set false expectations in users. If a chatbot presents itself as a human, the implication is that it should be able to do the things a human can do. So when customers reach the limits of an application — say, a chatbot that can only tell them whether an internet outage has been reported in their area — and seek to do more, the experience immediately becomes frustrating.
Likewise, humanizing virtual assistants can quickly spark very human outrage if the assistant offers little real utility. Just think of Microsoft’s Clippy, the much-reviled, googly-eyed paperclip that annoyed (but rarely assisted) a generation of Word users.
Inviting challenges. Similarly, over-humanizing a piece of technology can incite users to probe it in a quest to expose its weaknesses. Just think of how people today like to test the limits of assistants like Alexa, asking “her” where she’s from and what she likes and dislikes. These challenges are often all in good fun, but that’s not always the case when a person encounters an automated customer service experience that tries to pass itself off as a real agent.
Introducing human flaws to AI. Finally — and perhaps most importantly — why are companies seeking to make AI more human-like when its capabilities can, for many functions, far surpass those of humans? The concept of customer service teams emerged more than 250 years ago, alongside the Industrial Revolution, and people have been complaining about very human customer service failures and inefficiencies ever since. Why would we try to replicate that with machines? Take the basic customer contact center, for example. Companies spend $1.2 trillion on these centers globally, yet many consumers dread the customer service interactions they foster. Slow responses, inaccurate information, transfers, confusing journeys, privacy breaches: these are the limitations that arise when you rely on humans to reach across complex, multifaceted organizations. Advanced, transactional, enterprise-grade conversational AI can manage such processes better, and companies should be taking the opportunity to reset customer expectations around these solutions.
Embracing AI’s non-human strengths
Instead of spending so much energy trying to humanize AI interactions — and risking the alienation of customers in the process — let’s focus our energy on building the best possible automated technology to help with specific tasks. AI is exceptionally useful when it comes to parsing complex information and enabling seamless transactions — far more efficient and effective, in many cases, than human agents. So let’s elevate and celebrate those enhanced capabilities, not mask them with cutesy names and uncanny avatars by default.
About 60 percent of consumers say their go-to channel for simple customer support inquiries is a digital self-service tool. These people aren’t turning to such tools for chit-chat or their adorable personalities. They’re turning to them for real solutions to their problems, and they’re grateful for the efficiencies when the tools actually work. That’s not to say these technologies can’t be customized in ways that convey brand character or create enjoyable, even playful, customer experiences. But such endeavors should be managed carefully, lest they backfire by setting overly ambitious expectations or alienating audiences through the choice of a particular gender or demographic persona.
Enterprises today need to set fair expectations for their automation and avoid any personification that might distract or confuse users about what the system is designed to do. AI has the power to transform interactions for humans, and even humanity itself. But that doesn’t mean it needs to become more human itself.
Evan Kohn is Chief Business Officer at Pypestream.