A couple of months ago I demonstrated how to create an A.I. that believes in magic. Let me explain how this came about.
For the past three years, I have been an Associate Creative for Being There: Humans and Robots in Public Spaces, a £2 million research project funded by the Engineering and Physical Sciences Research Council (EPSRC). The research involved six universities and brought together computer scientists, psychologists, robotics engineers, musicians, puppeteers, choreographers, game designers, and a magician.
We examined how to preserve privacy, how to encourage trust in human-robot communication, and the implications of using a robot to stand in for the physical presence of a human. We created methods for encouraging synchronicity between robotic movement and that of humans in public spaces and ways of measuring how people respond to robots in order to better understand the reception of robots in public spaces.
As part of the research, I spent some time with Queen Mary University looking at what the future of Fortune Telling Robots might be. There is a long and rich history of fortune-telling automata, which I’ll be writing about elsewhere, but we became interested in how a robot might have its fortune told.
I began considering how an A.I. might become superstitious. How it might develop brand new superstitions and beliefs of its own.
We tend to focus on two goals when thinking about A.I.
1. How to create efficient A.I. that can do certain tasks without messing up.
2. How to make A.I. more lifelike, or more “human”.
These two goals are very different. To err is human. Scriptwriters know that if you want to make a character more believable, give them a flaw. Have them make a mistake. Give them a weakness. Then the audience can respond emotionally. Superman is boring without Kryptonite. This is true of both heroes and villains.
Perfection is a danger that good magicians know very well. A magician who never ever gets anything wrong first becomes boring, then quickly becomes one of the worst things a performer can ever be: a smart-arse.
While I was pondering how to make a superstitious A.I., three stories hit the news, all of them about racism and A.I.
The first concerned a chatbot made by Microsoft called Tay. Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter. Within a day of its release, Tay was posting racist messages in response to other Twitter users. Tay was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. Tay was taken offline around 16 hours after its launch.
The second story concerned Beauty A.I., an experiment that used a range of algorithms to judge an international beauty contest. Around 6,000 people from more than 100 countries submitted photos hoping that the algorithms would determine that they most closely resembled “human beauty”. Of the 44 winners, nearly all were white, and the controversy sparked debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.
Thirdly, a team of researchers found that widely-used language processing algorithms trained on human writing from the internet reproduce human biases along racist and sexist lines.
These are all complex cases that are difficult for those untrained in computer science to fully understand. Simplistic news stories of “racist A.I.” too easily miss the importance of understanding when it is the data that is flawed and when it is the algorithm.
In order to demonstrate how easily erroneous belief can be generated I created a performance about a superstitious A.I.
Let me tell you the story of Parry.
Parry is short for Pareidolia, the condition where the mind perceives a familiar pattern of something where none actually exists, like seeing animals in the clouds, Gods in the stars, or the face of Jesus in a slice of toast. Pareidolia is one aspect of superstition.
I began with a simple idea.
– A.I. are good at finding patterns in Big Data.
– If we make an A.I. believe that certain patterns are significant then we will have given it Pareidolia.
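To show how little it takes, here is a minimal sketch of manufactured pareidolia (my own illustration, not the code behind Parry): generate purely random data, test many hypotheses against it, and report whichever “pattern” comes out strongest. With enough comparisons, something always looks significant.

```python
import random
import statistics

random.seed(7)  # reproducible noise

# Purely random "people": a birth month paired with an unrelated trait score.
people = [(random.randint(1, 12), random.gauss(0, 1)) for _ in range(500)]

overall = statistics.mean(score for _, score in people)

# Scan every month for an "unusual" average trait score.
best_month, best_gap = None, 0.0
for month in range(1, 13):
    scores = [score for m, score in people if m == month]
    gap = abs(statistics.mean(scores) - overall)
    if gap > best_gap:
        best_month, best_gap = month, gap

# Twelve comparisons against noise: some month always drifts from the mean.
print(f"People born in month {best_month} deviate by {best_gap:.2f} from average")
```

The data contains no real pattern at all, yet the scan reliably returns a month that “correlates” with personality. Declare that correlation meaningful and you have given the system a superstition.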
Effective superstitions are often based on old traditions so I started with Pythagoras and his belief that your birthday reveals your personality. Mapping people’s fortunes against their date of birth is a classic numerology/astrology technique.
I gave Parry lots of data from personality tests where people had also been asked for their birthday and asked the A.I. to analyze the data and to find patterns. As expected, Parry discovered that there are specific personality traits that correlate with a date of birth. I had Parry reduce these to 9 distinct personality types. This gave Parry a basic superstition that people could relate to. Fewer people now believe in the influence of gods and planets than in the time of Pythagoras but they do believe that we have distinct personality types – despite all evidence to the contrary.
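The kind of procedure involved can be sketched like this (a hypothetical reconstruction, assuming the survey answers are simple numeric trait scores): carve the year into nine birthday bands, average each band's traits, and treat the nine slightly different profiles as nine “personality types”. The differences between bands arise by chance, which is all a convincing superstition needs.

```python
import random
import statistics

random.seed(42)

TRAITS = ["openness", "caution", "energy", "warmth"]

# Fake survey data: a day-of-year birthday plus random trait scores (1-5).
surveys = [
    {"birthday": random.randint(1, 365),
     **{t: random.randint(1, 5) for t in TRAITS}}
    for _ in range(900)
]

def type_of(birthday):
    # Map a day of the year to one of 9 equal birthday bands (0..8).
    return min(birthday * 9 // 366, 8)

# Average each band's traits to produce nine "personality type" profiles.
profiles = {}
for k in range(9):
    members = [s for s in surveys if type_of(s["birthday"]) == k]
    profiles[k] = {t: round(statistics.mean(s[t] for s in members), 2)
                   for t in TRAITS}

for k, profile in profiles.items():
    print(f"Type {k + 1}: {profile}")
```

Because the trait scores are random, the nine profiles are meaningless, but they are distinct, numerical, and derived from real-looking data, so they read as discoveries.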
I made the personality types more complex by adding in retail purchasing information. Now Parry could not only predict your personality but also tell you what brand of snacks and beer you are likely to favor.
Seeking ways to make Parry’s superstition more elaborate, I wanted a more modern taxonomy of invisible spirits: ones with clear personality types, that operate in public spaces, and that had exerted great invisible influence over the summer. And so I used Pokemon.
Good fortune-telling systems are often complex, esoteric, and baffling to outsiders so Pokemon was perfect.
I had Parry match the 9 personality types to text descriptions of the personalities of Pokemon. I called this combination of Big Data Personality Pareidolia and Pokemon Personality Mapping by the name Pokedolia.
Given your date of birth, Parry can tell you which Pokemon you are most like, what your personality traits are, and what your buying habits are.
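The matching step can be sketched with simple bag-of-words cosine similarity between a type profile and each candidate description (the texts below are invented placeholders, not Parry’s actual data, and the real system may have matched quite differently):

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def bag(text):
    return Counter(text.lower().split())

# Hypothetical texts: one personality-type summary and candidate Pokemon blurbs.
personality = "loyal calm patient friend who avoids conflict"
pokemon = {
    "Snorlax": "calm gentle giant who is patient and avoids conflict",
    "Pikachu": "energetic loyal friend full of sparks and mischief",
    "Gengar":  "mischievous prankster lurking in shadows",
}

target = bag(personality)
best = max(pokemon, key=lambda name: cosine(target, bag(pokemon[name])))
print(f"Closest match: {best}")  # -> Snorlax, on these made-up texts
```

Run over nine profiles instead of one, this yields a fixed birthday-to-Pokemon lookup, which is exactly the shape a fortune-telling system needs.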
During demonstrations of Parry’s Pokedolia, the whole audience has their fortunes told. People say that the readings feel uncannily accurate, and that they feel a great affinity with their Pokemon type. This happens despite the fact that I explain exactly how the superstition was created.
Parry’s Pokedolia raises some interesting questions about what we mean when we talk about belief. We tend to reduce the concept to a binary yes/no answer. We either believe or we don’t believe. But we are far more playful and flexible than that. We are able to momentarily believe in ways that are both light and profound. We can take things seriously and playfully at the same time. We can believe and doubt in the same instant. A lot of the time we prefer to have what I call a Schrödinger’s belief, both believing and disbelieving until we are forced to make a choice between the two.
So what does this mean if we are ever to create A.I. that thinks like we do? Should an A.I. be able to believe and to doubt as fluidly as a human can? In what sense could Parry actually believe or disbelieve in anything?
Edit: This story about an academic paper claiming that computers can tell whether you will be a criminal based on nothing more than your facial features is another example of the dangers of thinking that computers are immune to error.