Teaching AI Emotional Intelligence

A few weeks ago Doctor Who premiered the episode "Smile."  The episode depicts a world where humans failed to teach their robot companions critical lessons in empathy and how it relates to human emotions.

*SPOILERS*

This is a spoiler alert.  If you don't want to know what happens in the "Smile" episode of Doctor Who, please stop reading now.

The robots' directive was to make sure the humans were happy.  Growing their food, building a habitable city, and taking care of the humans made the humans happy.  However, when one of the colonists passed away, everyone became sad.  The robots were unable to understand the sadness caused by death, and they were never taught how to respond appropriately to it.  They attempted to make the humans happy again, but nothing they did could provide an immediate remedy for the humans' mourning.  The robots did notice a pattern in how the humans became sad: a sad human would go to another human, and then that human would become sad too.  In other words, as news of the death spread from person to person, each new person who heard it also became sad.  So the robots decided to eliminate the sad humans so they couldn't make anyone else sad.

While Doctor Who isn't the most realistic of sci-fi shows, there are a couple of points here that are very relevant to AI today:

1.  AI (as of right now) can only do what we teach it.  

In the case of this episode, the machines learned how to make the humans happy through food, beverage, shelter, and services.  They were more like farmers who also knew how to be robot butlers.  However, the humans didn't think to teach the robots about more complex human emotions.  The robots were taught to read emotion, and they were taught how to labor, but they were not taught to provide emotional support, and they were not taught about more complicated emotional states such as mourning and depression, for which there is no immediate remedy.
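To make this concrete, here's a minimal sketch (not anything from the episode) of how a machine-learned model is limited to the labels in its training data.  All of the texts and labels below are invented for illustration; the point is that a model asked about an emotion it was never shown has no choice but to misfile it into a category it does know.

```python
# Minimal sketch: a toy emotion classifier only knows the labels it was
# trained on. All data below is hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data covers "happy" and "neutral" -- nothing about grief.
texts = [
    "the harvest came in and everyone feasted",   # happy
    "the city lights are warm tonight",           # happy
    "the crops are growing on schedule",          # neutral
    "maintenance completed on the water system",  # neutral
]
labels = ["happy", "happy", "neutral", "neutral"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A grieving colonist's words get forced into a known category:
print(model.predict(["we buried her this morning and everyone wept"]))
# The output will be "happy" or "neutral" -- the model literally has
# no concept of grief, because we never taught it one.
```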

2.  Be careful what goal you set for the machine.  In other words, "be careful what you wish for."

The humans set a goal for the robots: make sure the humans are happy.  Who doesn't want to be happy?  But is that an appropriate goal for a machine that can't feel?  The real goal was to make the new planet habitable and provide enough resources for a human community to thrive.  In theory that would make people happy.  However, as many humans have learned, you can't MAKE someone happy.  Happiness is complicated.  Emotions are complicated.  Happiness isn't a continuous state.  The robots were bound to run into the problem that providing for the humans wouldn't necessarily make them happy.  The human creators set the goal of happiness without giving the machines even basic therapy-chatbot capabilities to help humans cope with unhappy feelings (to be fair, it sounded like an emergency evacuation of Earth, so it's possible there wasn't time).  I would argue that happiness alone is a terrible success metric.
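This failure mode is what AI researchers call reward mis-specification: the objective measures a proxy (observed happiness) rather than what we actually want.  Here's a toy sketch, with all names and numbers invented for illustration, showing how a naive happiness metric actively prefers the horrifying policy:

```python
# Toy model of reward mis-specification (not the show's actual logic).
# A colony is a list of moods; the robots pick whichever action scores
# best under the objective we hand them.

def naive_score(colony):
    """Objective as stated: 'make sure the humans are happy' --
    measured as the fraction of observed humans who are happy."""
    return sum(m == "happy" for m in colony) / len(colony)

def comfort(colony):
    # Supportive action: mourning takes time, so moods are unchanged
    # today. No immediate payoff under the naive metric.
    return list(colony)

def eliminate_sad(colony):
    # Degenerate action: remove anyone who is sad.
    return [m for m in colony if m != "sad"]

colony = ["happy", "happy", "sad", "sad"]
for action in (comfort, eliminate_sad):
    print(action.__name__, naive_score(action(colony)))
# comfort        0.5
# eliminate_sad  1.0   <- the naive metric rewards the horrifying policy

def better_score(colony, original_size):
    """Count happiness over everyone we started with, so removing
    people can never raise the score."""
    return sum(m == "happy" for m in colony) / original_size

print("better:", better_score(eliminate_sad(colony), 4))  # 0.5 -- no gain
```

The fix isn't clever code; it's choosing a metric where the shortcut stops paying off.  Under `better_score`, eliminating sad colonists scores no higher than patiently supporting them.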

As we build more and more AI, machine learning, deep learning, and chatbots, we need to include "empathy" and other cornerstones of emotional intelligence (EI) in what we teach them.  The Doctor Who episode is an extreme example, but it underlines the problems that can arise when the end goal requires AI to use EI without the emotional problem-solving toolset.