10/14/2015 02:31 pm ET | Updated Oct 14, 2016

Why Artificial Intelligence Should Read and Write Stories


Image credit: Brian Matis

Storytelling is an important part of how we, as humans, communicate and teach each other. We tell stories dozens of times a day: around the dinner table to share experiences; through fables to teach values; through journalism to communicate important events; and, for fun, through movies, novels, and computer games. Stories motivate people to learn, which is why they also form the backbone of training scenarios and case studies at school and work.

Despite the importance of storytelling as part of the human experience, computers still cannot create and tell novel stories, nor understand stories told by humans. When computers do tell stories, via an eBook or computer game, they simply regurgitate something written by a human. They do not partake in the culture we are immersed in, as manifested through journalistic news articles, the movies we watch, or the books we read.

Why does it matter? AI has become more prevalent in our everyday lives. Soon, it will not be unusual for us to interact with more advanced forms of Siri or Cortana on a daily basis. When we use those systems today, we find them to be an alien sort of intelligence. They make decisions that can be hard for us to make sense of, and their failures often stem from the fact that they cannot grasp what we are trying to accomplish or why.

My goal as a researcher is to instill computers with narrative intelligence -- the ability to craft, tell, and understand stories based on human reactions. In doing so, I hope to make computers better communicators, educators, and entertainers, and more capable of relating to us by genuinely understanding our needs.

With research into narrative intelligence, we may one day have computers that can generate fairy tales as well as plausible-sounding -- but fictional -- stories that might happen in the real world. They could generate virtually unlimited scenarios for skill mastery in training simulations -- for pilots, manufacturers, or even HR executives. They could engage in forensic investigations by hypothesizing about sequences of events that have not been directly observed. When applied to computer games, they could dynamically create, adapt, and customize quests and storylines. They could become more life-like and build rapport with humans by sharing virtual vignettes.

It begins with enculturation -- teaching AI social norms, customs, values, and etiquette so that computers can relate to us. In a perfect world, humanity would come with a user manual that we could simply scan into a computer. Instead, what we do have are the collected works of fiction of different cultures and societies, which give us examples with which to teach AI our culture. These include the fables and allegorical tales passed down from generation to generation, such as the tale of George Washington confessing to chopping down a cherry tree. Fictional stories meant to entertain can be viewed as examples of protagonists existing within and enacting the values of the culture to which they belong, from the mundane -- eating at a restaurant -- to the extreme -- saving the world. Instilling AI with narrative intelligence is an essential step to enabling it to understand what matters to humans and how humans respond best -- through storytelling.

We are making progress. The Scheherazade system, currently under development at the Georgia Institute of Technology Entertainment Intelligence Lab, learns about everyday human behavior by reading simple stories that illustrate everyday situations, such as eating at a restaurant or going on a date to a movie theater. Scheherazade can create and tell plausible -- but fictional -- accounts of these everyday activities back to us, demonstrating its understanding. There is a lot of sociocultural knowledge encoded in something as simple as a story about eating at a restaurant or going on a date. Why don't we run into the kitchen and start grabbing food? Why do we follow a certain protocol with our boss? Humans don't think about these questions; they follow the socially and culturally agreed-upon script. But in the absence of such instruction, AI could behave in a way that anyone would call psychotic.
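To give a flavor of the idea -- and this is only a toy sketch of my own, not the actual Scheherazade system -- the approach can be imagined as reading several example stories about the same activity, learning which events tend to precede which others, and then retelling a plausible new sequence. The event names and heuristics below are illustrative assumptions:

```python
import random
from functools import cmp_to_key

# Toy illustration (NOT the real Scheherazade system): each "story" is a
# list of events describing the same everyday activity -- eating at a restaurant.
example_stories = [
    ["enter", "wait_for_table", "sit", "order", "eat", "pay", "leave"],
    ["enter", "sit", "order", "eat", "pay", "leave"],
    ["enter", "wait_for_table", "sit", "order", "eat", "pay", "leave"],
]

def learn_ordering(stories):
    """Count how often event a precedes event b across the example stories."""
    before = {}
    for story in stories:
        for i, a in enumerate(story):
            for b in story[i + 1:]:
                before[(a, b)] = before.get((a, b), 0) + 1
    return before

def generate(stories, before):
    """Tell a new, plausible sequence: keep events seen in the examples
    (optional events are included at random) and order them as learned."""
    events = sorted({e for s in stories for e in s})
    chosen = [e for e in events
              if sum(e in s for s in stories) == len(stories)
              or random.random() < 0.5]
    # Sort by learned pairwise precedence: a comes first if it preceded b
    # more often than b preceded a in the examples.
    chosen.sort(key=cmp_to_key(
        lambda a, b: before.get((b, a), 0) - before.get((a, b), 0)))
    return chosen

before = learn_ordering(example_stories)
print(generate(example_stories, before))
```

Even this crude version captures the point of the paragraph above: the sociocultural "script" -- you sit before you order, you pay before you leave -- is never stated explicitly anywhere; it is recovered from examples.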

We have recently been able to show that AI trained on stories cannot behave psychotically, except under the most extreme circumstances. Thus, computational narrative intelligence could alleviate concerns about renegade "evil AI" taking over the earth. Not all humans act morally all the time, and an enculturated AI would not be guaranteed to act morally at all times. However, most humans do act according to the values of their society and culture whenever possible. In a future in which there are many AIs, those that conform to sociocultural values would neutralize or marginalize the AIs that do not -- just as humans do with each other.

In the meantime, we must continue to study how computers can understand and learn from stories. As we master the basics of AI -- turning intelligent appliances and cars into a commercial reality -- we will be faced with the choice of whether to pursue more human-like intelligences. Narrative intelligence is one way to bring a defining characteristic of humanity to AI -- one that makes it less alien to us and less likely to create conflict.