Artificial Intelligence: How being too smart could hurt

I’ve written previously in Information Management about how robust analytics will lay the foundation for Artificial Intelligence (AI). As tech leaders like Microsoft, SFDC, Google, and Apple have made recent announcements on AI, it’s worth asking: when AI is ubiquitous and users constantly interact with it, how will it change our use of technology, and even our behavior?

Most of the recent announcements have touched on user experience (UX), as we saw at Microsoft Ignite and at Dreamforce. Consider Einstein, announced by Salesforce as AI “built into the core” of their CRM platform. Now users can generate insights, crunch numbers, and automate tasks, all while Einstein is constantly learning from and improving their customer interactions. How will customer and sales interactions be influenced by AI in a year, or in five? Will working without AI become the exception? What will be the new norm? No one can predict for sure. We’re still at the tip of the AI iceberg.

Today, user experience is critical for businesses to attract customers, maximize their loyalty, and retain them. So companies fight to hire the best professional designers for their UIs. The best user experience comes when users find, intuitively and readily, the information or features they need at the moment they need them. That requires ensuring that your app interface or website follows a logical, navigable flow and that calls to action are timely and appropriate.

Every app has its own logical flow, crafted by its team of UX designers, and users need to adapt to it every time they switch apps. For instance, Microsoft PowerPoint users, like me, know all the shortcuts to efficiently create presentations with animations. But when I need to collaborate with Google Slides users, I need more time to find the same functionality. I know it exists and I know how to use it, but I need some adaptation time to acclimate to the new logic.

What if the logic in similar apps could be standardized to each individual user’s preference? Artificial Intelligence could tailor the user interface, modifying the logic flow to suit a given person’s expectations, so users could enjoy a consistent experience and no longer have to shift gears every time they switch apps.

Through machine learning and a dataset of a person’s historical activities, AI could understand why they take a certain amount of time to read a piece of content. Is it because they’re very interested and are taking care to fully understand? Or are they having trouble grasping the concept described? Once AI understands the reason, it can take the appropriate action. If a user is struggling to read about an unfamiliar concept, the next few pieces of content displayed could shift to being less technical. If a user is struggling to navigate an interface, clicking the wrong buttons multiple times, AI could proactively show popup windows explaining key navigation features, or even move buttons around, adapting the interface to the user.
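To make the idea concrete, here is a toy sketch of the dwell-time heuristic described above: compare a reader’s observed speed against their historical baseline, then adjust the difficulty of the next piece of content. All names, thresholds, and difficulty levels are illustrative assumptions, not any real product’s logic.

```python
def classify_dwell(words: int, seconds: float, baseline_wpm: float = 230.0) -> str:
    """Label a reading session by comparing observed speed to the user's baseline."""
    if seconds <= 0:
        return "skimming"
    wpm = words / (seconds / 60.0)
    if wpm > 1.5 * baseline_wpm:
        return "skimming"    # far faster than baseline: probably not really reading
    if wpm < 0.5 * baseline_wpm:
        return "struggling"  # far slower: possibly stuck on the concept
    return "engaged"

def next_difficulty(current: str, signal: str) -> str:
    """Shift the next content to be less technical if the reader is struggling."""
    levels = ["intro", "intermediate", "advanced"]
    i = levels.index(current)
    if signal == "struggling":
        return levels[max(i - 1, 0)]
    if signal == "engaged":
        return levels[min(i + 1, len(levels) - 1)]
    return current

# Example: 400 words read in 6 minutes (~67 wpm) suggests a struggle.
print(classify_dwell(400, 360))                   # struggling
print(next_difficulty("advanced", "struggling"))  # intermediate
```

A real system would of course learn per-user baselines and use richer signals (scroll depth, re-reads, clicks), but the shape of the decision is the same.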

But if AI becomes pervasive, individuals will no longer need to learn new, different logic flows. However, by allowing us to see the world through a single personalized lens, AI could remove the diversity of perspectives. There’s a good chance we will start seeing secondary effects on user behavior.

Another potential effect is creating expectations and user behaviors that fall short when AI is not present. When the lines blur, how would we tell the difference? For instance, have you ever entered a chat discussion with a support or sales rep on a website? How sure were you that you were talking to a human? These days it’s a robot on the other side of the chatbox about as often as it is a real person. With AI, the user experience is always cordial, no matter what tone you use or how impatient you are on bad days. If it were a person with feelings that could be hurt, we could tell the difference and (hopefully) act differently. But how would we know? How much would our “empathy” muscles atrophy after a while?

The reverse effect is also true, i.e. expecting a human and getting AI instead, with Google’s self-driving car as an example. Today, people are used to driving on roads with other people. I work next to the Google campus, and I’ve witnessed how self-driving cars affect other drivers’ behavior. In this case, unlike with the chatbox, drivers do not expect a self-driving car. Human drivers tend to match the speed of the cars around them (not necessarily the legal speed…). When Google’s self-driving cars diligently operate at the 25 MPH speed limit, many drivers around them are surprised and have to slow down abruptly or pass without warning. Through these secondary effects, Google’s self-driving cars actually make traffic less predictable and more chaotic.

How will having abundant, ‘perfected’ AI influence human behavior? Will we learn to treat robots better? Or will we begin to treat people worse? It’s very important to be mindful of this before diving in headfirst and incorporating AI into every facet of our lives. There’s a chance AI does too good a job of adapting to us, making our lives easier but at the price of narrowing our worldview and shaping our interactions in unforeseen ways.

Are we ready to embrace AI in all aspects of our daily lives and interactions? Can we predict secondary effects and proactively prepare for them? Should we build in safeguards or roll-back options? AI is a powerful new technology, but we are still at an early stage and need to experiment more before making it pervasive.
