We brought home an Alexa a few years back. Within weeks, my kids started testing her. They asked her questions, many of which she was not trained for. They tried the same with the Google Assistant and Apple's Siri as well, and were amused by the differences in responses among these assistants. When you ask Google Assistant "Are you smart?", she may say "I am smarter than a fridge," "I am smarter than your microwave," or "I have been trained and I am still learning." Ask Siri the same question and she answers "I am not a person, so I only know what I've been programmed to understand" or "I aspire to be a truly intelligent machine. But I am still machine learning." There is a subtle difference in how these voice assistants answer: Google Assistant tries a little harder to imitate a human, whereas Siri admits she is a program that is still learning.
As you may have noticed, we treat these chatbots, technically known as conversational assistants, like humans. They have a name and a voice, and you can even change their accent in the settings. These design features give a humanlike (anthropomorphic) quality to an automation program. The practice is not limited to robotics or automation: the classic Coca-Cola bottle and animated characters adopt similar designs.
This anthropomorphic design helps us treat these assistants with a level of familiarity, at least initially, until you realize they can't answer many of your questions and are not as smart as you expected. This user journey of approaching automation as if it were human, and then ending up disappointed, was hypothesized by Masahiro Mori, a Japanese roboticist, in his theory of the "uncanny valley." He predicted that as robots act more and more human, the user experience becomes more positive, but at the same time the user comes to expect the automation to be as intelligent as a human. Because of this rising expectation, you reach a point where the automation's capability no longer matches what the user expects. This point is called the uncanny valley. As AI capabilities increase, this valley is pushed further and further to the right in the diagram below.
Our workplaces are being transformed by the increased use of digital assistants, chatbots, software robots, and even hardware robots like Mabu. When you design such robots, how many humanlike, or anthropomorphic, features should you adopt? This design question is being answered by various research studies. I would like to highlight one conducted by Stephan Diederich et al., "Designing Anthropomorphic Enterprise Conversational Agents." The study identified several design principles for an anthropomorphic conversational agent.
1. Equip the agent with intent detection:
The bot can prompt the user in order to understand their intent. If you create an onboarding bot to help hiring managers, a manager's intent might be to order a laptop for a new team member, or to approve a request to issue a security badge or facility access.
2. Self-identify the agent as a bot:
The bot can identify itself as a trained program and present its capabilities with examples. It should also offer the ability to interact with a human agent if it fails to fulfill a request.
3. Guide the user where required and track the context:
When you chat or talk with a human agent, you expect them to remember or have taken note of details shared earlier, e.g., an address or an account number. In the same way, a conversational bot should maintain context throughout the conversation. If the user wants to change or switch context, the bot needs to understand and respond accordingly.
4. Design the humanlike elements with care and detail:
Craft these elements with attention to detail so that the bot's responses don't push the user experience into the uncanny valley.
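The first three principles can be sketched in code. Below is a minimal, illustrative Python sketch of an onboarding bot that self-identifies, tracks context, detects intent, and hands off to a human on failure. Everything here (the `OnboardingBot` class, the intent names, the keyword matching in place of a real NLU model) is my own assumption for illustration, not an implementation from the Diederich et al. study.

```python
class OnboardingBot:
    """Illustrative sketch of the design principles above (names are hypothetical)."""

    # Principle 1: map utterances to known intents. A real agent would use an
    # NLU / intent-classification model; keyword matching stands in for it here.
    INTENTS = {
        "order_laptop": ["laptop", "computer"],
        "issue_badge": ["badge", "access"],
    }

    def __init__(self):
        # Principle 3: keep conversation context (e.g., the new hire's name).
        self.context = {}

    def greet(self):
        # Principle 2: self-identify as a bot and present capabilities with examples.
        return ("Hi, I'm a trained onboarding assistant (a bot). "
                "I can order a laptop or request a security badge for a new hire.")

    def handle(self, utterance):
        text = utterance.lower()
        # Principle 3: remember details shared earlier in the conversation.
        if text.startswith("new hire:"):
            self.context["new_hire"] = utterance.split(":", 1)[1].strip()
            return "Got it, I'll remember that."
        # Principle 1: detect which known intent the utterance matches.
        for intent, keywords in self.INTENTS.items():
            if any(k in text for k in keywords):
                who = self.context.get("new_hire", "the new team member")
                return f"Starting '{intent}' for {who}."
        # Principle 2 (continued): offer a human agent when the bot can't help.
        return "Sorry, I can't help with that yet. Connecting you to a human agent."


bot = OnboardingBot()
bot.handle("New hire: Priya")
print(bot.handle("Please order a laptop"))  # uses the remembered context
```

Note how the context set in the first message changes the reply to the second; that continuity is what makes the exchange feel like a conversation rather than a series of isolated commands.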
What about Anthropomorphism & RPA Bots?
What if you are designing robotic process automation (RPA), which runs in the background, triggered by an email or other events? You can name the bot to make business users aware of the capabilities your organization is building; you can even hold a naming contest or a naming ceremony. You can publish profiles of such bots on an internal portal to familiarize users with their capabilities. You can anticipate failures and design recovery features for when a bot fails to meet user expectations. Such carefully considered anthropomorphic features can support user engagement, change management, and your automation strategy and vision, freeing up users to focus on personalized services and high-value tasks.
Reference: Masahiro Mori, "The Uncanny Valley"