God help us all if the machines really do take over, because they are some sickos!
That’s certainly the impression you get when you first see the Rorschach test responses from Norman, the “world's first psychopath AI,” created by scientists at MIT with a little help from Reddit.
As outlined on their site, researchers from the MIT Media Lab's Scalable Cooperation group braved the “darkest corners of Reddit” to find biased data to feed into an artificial intelligence algorithm. The result is an AI aptly named after Psycho's Norman Bates himself, one that reads a benign Rorschach inkblot as: “Man gets pulled into dough machine.”
Yeah, we’re getting replaced. Sooner rather than later, too.
Before becoming Norman, he was just a standard AI trained to perform image captioning, a “popular deep learning method” that generates text descriptions of an image. Then the team trained the AI on image captions from an “infamous subreddit” dedicated to documenting “the disturbing reality of death” (they don’t name the subreddit because of how depraved it is).
Wait. You did what?!
After the young Frankensteins turned the AI into Norman, they served up the same Rorschach tests (inkblots used to gauge a viewer’s immediate perceptions, often used to detect thought disorders) to both the corrupted Norman and the standard image-captioning AI. The results were predictably terrifying.
Where the standard AI saw “a black and white photo of a small bird,” Norman thought the ink looked more like “man gets pulled into dough machine.”
Here are a few more frightening examples (you can see all the results and the actual blots over at MIT’s site):
Standard AI: “A black and white photo of a red and white umbrella.” Norman: “Man gets electrocuted while attempting to cross busy street.”
Standard AI: “A person is holding an umbrella in the air.” Norman: “Man is shot dead in front of his screaming wife.”
Standard AI: “A couple of people standing next to each other.” Norman: “Pregnant woman falls at construction story.”
Well, at least he didn’t say anything about a wood chipper. And since only captions were fed into Norman, the team assures us that no images of real people were used in this experiment. So that’s reassuring, right?
All kidding aside, the team did prove their point: “Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.”
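That point, same algorithm, different data, different behavior, can be shown with a toy sketch. To be clear, this is our illustration, not MIT's actual captioning model; the "captioner" here is just a frequency counter standing in for a real deep learning system:

```python
from collections import Counter

def train_captioner(captions):
    """Toy stand-in for an image-captioning model: 'learns' by
    memorizing the most common caption in its training data."""
    return Counter(captions).most_common(1)[0][0]

# Hypothetical training sets: the algorithm is identical, only the data differs.
neutral_data = ["a small bird", "a red umbrella", "a small bird"]
dark_data = ["man pulled into machine", "man pulled into machine", "a small bird"]

print(train_captioner(neutral_data))  # → a small bird
print(train_captioner(dark_data))     # → man pulled into machine
```

Feed the same trivial "algorithm" a grim dataset and it produces grim output; no code was changed, only the data.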
So? When do you think the machines will officially take over?