Ta-dah! And that is how you create an artificial monster!
Researchers at the Massachusetts Institute of Technology have created a psychopath. The artificial intelligence (AI), named Norman, is inspired by the famous character from Alfred Hitchcock’s classic, Psycho. ‘Norman’ was ‘normal’ before it was fed dark Reddit caption data. Now, it has become a psychopath. How?
Norman, a disturbed AI, is an experimental machine-learning bot created to explore the dark side of AI, and this week it made news owing to its horrifying reactions to inkblots. The MIT researchers, Manuel Cebrian, Pinar Yanardag, and Iyad Rahwan, wanted to demonstrate that the behavior of any AI depends on what kind of data is fed to it. The team showed Norman image captions from a Reddit page dedicated to the disturbing reality of death. (Thankfully) due to technical and ethical concerns, the team used only the captions, not the horrifying images of people dying or killing others. Nonetheless, the results are spine-chilling. The researchers showed Norman randomly generated inkblots of the kind used in the Rorschach test, and Norman could think of nothing but death and murder. Some argue that inkblots are not a standard way to measure a person’s psychological state, but what Norman sees is pretty scary, and it certainly seems to have turned evil.
The team fed the same inkblot images to both Norman and a standard AI and asked each to caption them, in order to compare Norman’s psychological state against a baseline. The results are disturbing. For instance, where the standard AI observed ‘a black and white baseball glove’, Norman saw ‘a man murdered by machine gun in broad daylight.’ The reaction was not limited to one image: after being shown multiple images, Norman talked only about murders and violent deaths. Where the standard AI saw an opened umbrella, Norman interpreted the image as a man shot in front of his screaming wife. Moreover, where the standard AI saw ‘a close up of a vase of flowers’, Norman saw ‘a man shot dead’. It is alarming how brightly colored images were interpreted by Norman as blood splatter.
This is not the first time MIT researchers have created something spooky with the help of AI. The same lab previously developed Shelley, an AI that wrote horror stories, and the Nightmare Machine, which generated creepy imagery. Furthermore, not long ago, an MIT team explored whether AI could be used to induce empathy for victims of far-away disasters by making our neighborhoods resemble the homes of the victims. Norman, however, is something else entirely, and almost too horrifying to cope with.
Thankfully, the team had a purpose behind this mad experiment beyond terrifying humanity with their nightmare. By creating Norman, the team wanted to pinpoint what is actually responsible when an AI decides to go nuts. When asked why they would create a ‘psycho bot’ when we have enough human serial killers and psychopaths already, the researchers explained, “The data used to teach a machine-learning algorithm can considerably influence its behavior. Thus, when we talk about AI gone haywire, the real culprit is not the algorithm, but the biased data fed to it.” In other words, they never set out to create a psychopath; it became a maniac because all it knew about the world was learned from horrible Reddit pages. How could it think of anything other than murder and death if it was acquainted only with bad things? Well, creating a lunatic robot may sound ‘cool’. However, we just hope that the next time Norman goes online, it sees something related to love, friendship, rainbows, flowers, or any non-murderous thing for that matter.
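The researchers’ point, that identical algorithms diverge when trained on different data, can be illustrated with a deliberately tiny sketch. This is not MIT’s actual captioning model; it is a toy retrieval “model” with invented captions, where the same code trained on a benign corpus versus a dark corpus produces very different descriptions of the same ambiguous input:

```python
# Toy illustration (not MIT's code): the same trivial caption "model"
# trained on two different corpora answers the same query differently.
# All corpus captions below are invented for illustration.
from collections import Counter

def train(corpus):
    """'Training' here just stores a word-count vector per caption."""
    return [(Counter(c.lower().split()), c) for c in corpus]

def caption(model, description):
    """Return the stored caption whose words best overlap the input."""
    query = Counter(description.lower().split())
    # Counter & Counter keeps the element-wise minimum of the counts,
    # so this scores each caption by word overlap with the query.
    return max(model, key=lambda item: sum((item[0] & query).values()))[1]

standard_corpus = [
    "a black and white photo of a bird",
    "a vase of flowers on a table",
    "a person holding an open umbrella",
]
dark_corpus = [
    "a dark shape looming over a person",
    "a shadowy figure in an empty room",
    "an abandoned umbrella in the rain",
]

standard_model = train(standard_corpus)
dark_model = train(dark_corpus)

ambiguous = "a dark shape with an umbrella"
print(caption(standard_model, ambiguous))  # a person holding an open umbrella
print(caption(dark_model, ambiguous))      # a dark shape looming over a person
```

The algorithm is identical in both cases; only the data differs, yet one model reaches for an umbrella and the other for a looming shape, which is the researchers’ argument about Norman in miniature.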