By Michael Vromans, Creative Director at DPDK
Our relationship with technology is changing. Whether we like it or not, AI is making significant headway and is increasingly integrated into our daily lives. We are entering an AI-centred age, characterised by intelligent applications and smart digital products. Even though it’s an exhilarating time filled with new possibilities, AI also poses its own challenges.
With smart products becoming the norm, all other products are automatically downgraded by comparison. As an industry, we are compelled to react. A huge amount of AI is entering the brand landscape, but that doesn’t mean all of it is effective. So how can we make sure to get the most out of it?
To answer that, we have to go back to the beginning. Even before branded products became readily available to the public, trust was one of the fundamentals that urged people not only to buy and use a product or service but to stay loyal to it as well. Today, trust is more important than ever. We ask people to entrust our digital products and services with their addresses, Facebook profiles and even their bank accounts. All in exchange for the promise of a better experience.
In order to make the best use of AI, we have to make sure people truly trust it. There is a challenge to overcome in achieving that, though. On the one hand, trust is an emotional response founded on the expectation that people, and in this case computers, behave in a way you would expect them to. On the other hand, AI is unpredictable by nature. That’s where design comes in.
Design, the first point of contact for a user, is an essential part of the process of building trustworthy AI. After all, you can’t undo a first impression. Trustworthiness is not just about designing an aesthetically pleasing product; it’s about designing AI that behaves the way you would expect it to. It means designing products that evoke the right emotions in the user, which in turn means thinking from a user-centred perspective.
Let’s dive into a few examples of how to design trustworthy AI. There are three clearly distinguishable pillars we can identify in the process. And coincidentally, they are not very different from the components that make up human interaction: visual identity, behaviour and language.
The most infamous type of AI is the humanoid robot that mimics human appearance. The question is: do we even want to create human-like robots? Do they even need a physical form to perform their function? Judging from the much-discussed uncanny valley effect, people are often almost repulsed by robots that seem human.
Arguably, there is a place for anthropomorphic AI. Especially in products with a conversational nature, like conversational interfaces and intelligent assistants, people prefer machines to convey at least some degree of human emotion, a claim supported by positive responses in so-called Wizard of Oz studies, in which a human typing responses masquerades as an advanced AI. These findings are then translated into the appearance of these interfaces. Take Mitsuku, an award-winning chatbot that looks like a cartoonish girl. Or Tay, Microsoft’s much-discussed AI chatbot, which had an icon resembling a woman.
Yet communicating human emotion doesn’t mean interfaces have to have a human face or even be an autonomous entity. Instead, they can be interwoven with a product or service. Look at Siri, Facebook M, Google Assistant and Microsoft Cortana: no humanoid robots with off-putting facial expressions in sight. At DPDK, we experienced this too while developing a smart chat tool for an STD/AIDS prevention organisation. After many iterations and rounds of research, the smartest option turned out to be to forgo a human visual identity completely. Designing good, useful AI is much more nuanced than mimicking human identity. The devil is in the details.
How do we present AI input and output in a trustworthy way? Both Siri and Google Assistant signal that they are listening or thinking after a user speaks to them. From a technical perspective this has no added value, but from a user perspective it makes the AI feel more human. IBM’s Watson comes to life in the visual representation of its behaviour: when it’s thinking, it shows it, and uncertain answers look different from reliable ones. It gives visual feedback, showing human behaviour instead of a human visualisation. Although IBM already established this back in 2008, there is still much to learn from its example today.
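To make the idea of behaviour-as-feedback concrete, here is a minimal sketch of how an assistant could vary the framing of its answers by confidence, in the spirit of Watson distinguishing uncertain answers from reliable ones. The function name, thresholds and wording are illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical sketch: wrap a raw answer in language that signals
# how confident the system is, so uncertain answers look and read
# differently from reliable ones. Thresholds are invented for illustration.

def frame_answer(answer: str, confidence: float) -> str:
    """Frame an answer so its wording reflects the model's confidence."""
    if confidence >= 0.9:
        return answer                                # state it plainly
    if confidence >= 0.6:
        return f"I think {answer}"                   # mild hedge
    return f"I'm not sure, but possibly {answer}"    # clear hedge

print(frame_answer("the meeting is at 3 pm", 0.95))
print(frame_answer("the meeting is at 3 pm", 0.70))
print(frame_answer("the meeting is at 3 pm", 0.30))
```

The same principle applies visually: an interface can dim, colour or annotate low-confidence answers instead of, or alongside, rewording them.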
Language is the final component of our AIs’ personality. For voice-based assistants, the voice itself is the most important factor in the design process. A voice has an individual character that attracts and repels different people. Women generally prefer men with deep voices, while men prefer women with higher-pitched ones. To work around this, you need to provide the user with options. Synthesised voices might seem like a way out, but reality teaches us: the more natural the voice, the more trustworthy it becomes. And there’s more to language than just voice. Currently, most AIs we converse with are text-based, and that means they need the right tone of voice for their context. Positioned as your AI colleague, Slackbot takes quite a witty approach, using jokes and puns to respond to queries. But if you’re building a medical AI that helps people with their health concerns, you might not want that same tone. When designing how your AI converses with users, context is key.
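The context-dependent tone described above can be sketched as a simple lookup from product context to reply template. The contexts and templates below are invented examples, not Slackbot’s or any real product’s copy.

```python
# Hypothetical sketch: render the same content in a tone that fits
# the product context. Contexts and templates are illustrative assumptions.

TONES = {
    "workplace": "Ha, good question! Here's what I found: {msg}",
    "medical": "Thank you for asking. {msg} If you're worried, please see a professional.",
}

def reply(context: str, msg: str) -> str:
    """Pick a context-appropriate tone; fall back to a neutral one."""
    template = TONES.get(context, "{msg}")
    return template.format(msg=msg)

print(reply("workplace", "three matching documents."))
print(reply("medical", "These symptoms are usually harmless."))
```

Separating the content of a reply from its tone also makes it cheap to test new voices against the same underlying answers.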
We can safely say AI is one of the most exciting developments of our day and age. Its far-reaching potential is incredibly appealing to us as a creative and tech industry. But with a technology that learns faster than we ever could, it is also our job to channel that process in an understandable, effective way. Let’s make sure we are using it to its full potential and, as contradictory as that may sound, it all starts with a human approach.