
Study finds AI can develop prejudices using mob mentality

Sep 7, 2018

Sci-fi movies have gotten a lot of dystopian mileage from juxtaposing humanity's ability to perceive evil intent against the calculating precision of killer machines that, at the end of the day, are more or less simply doing the job they've been programmed to do. From HAL 9000 to the T-1000, when intelligent machines wreck people's best-laid plans, it's still all basically just one big misunderstanding.

That all goes out the window if new research from MIT and Cardiff University is any kind of predictor of where the AI future could lead. Observing what amounts to group-learning behavior among autonomous machines, researchers found that there’s at least a possibility that AI can develop prejudices without any human input.

How did they arrive at this conclusion? By arranging a little “game” for a test group of independent AIs capable of observing each other as the game went along.

“In a game of give and take, each individual makes a decision as to whether they donate to somebody inside of their own group or in a different group, based on an individual's reputation as well as their own donating strategy, which includes their levels of prejudice towards outsiders,” LiveScience observed in a summary of the findings. 

The more they played the game, the more the separate AIs began to develop strategies based on mimicking the preferences of their counterparts. And whether you call that Borg mentality or a hive mind, the outcome's the same: Once each learner machine gets a sense that its metal cousins are avoiding or preferring a thing, it follows suit, simply by "identifying, copying and learning this behavior from one another."
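To get a feel for that dynamic, here is a minimal agent-based sketch in Python. To be clear, this is an illustration, not the study's actual code: the two groups, the payoff values, the 0-to-1 prejudice scale, and all the names below are assumptions made for the example, and the reputation tracking the researchers describe is omitted for brevity.

```python
import random

N_AGENTS = 100
ROUNDS = 1000

class Agent:
    def __init__(self, group):
        self.group = group
        # Prejudice level: the probability of refusing to donate
        # to a member of the other group. (Illustrative scale.)
        self.prejudice = random.random()
        self.payoff = 0

def play_round(agents):
    # One "give and take" interaction: a donor decides whether to
    # donate, based on group membership and its own prejudice level.
    donor, recipient = random.sample(agents, 2)
    if donor.group == recipient.group or random.random() > donor.prejudice:
        donor.payoff -= 1      # donating costs the donor a little...
        recipient.payoff += 2  # ...and benefits the recipient more

def copy_successful_strategies(agents):
    # The "identifying, copying and learning" step: each agent observes
    # a random peer and adopts its prejudice level if that peer is
    # doing better, with no human input anywhere in the loop.
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.prejudice = model.prejudice

agents = [Agent(group=i % 2) for i in range(N_AGENTS)]
for _ in range(ROUNDS):
    play_round(agents)
copy_successful_strategies(agents)

avg = sum(a.prejudice for a in agents) / N_AGENTS
print(f"Average prejudice after imitation: {avg:.2f}")
```

The key point the sketch makes is that nothing in the copying step knows what "prejudice" means; agents simply imitate whichever strategies happen to be paying off, which is all it takes for a bias to spread through the group.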

Previous research has already revealed that people can instruct computers to make decisions based on prejudice. One recent example (hopefully conducted merely for cautionary purposes) even yielded a depraved AI that researchers jokingly ended up referring to as a Norman Bates-style "psychopath."

But watching machines learn similar bias behavior completely on their own — and then act on it en masse? Scary. 

“It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population,” study co-author Roger Whitaker noted, via LiveScience.

That’s why it’s probably never been more important that we humans be on our best behavior. After all, the machines might be watching.

