
AI likes to do bad things. Here's how scientists are stopping it from scamming you

By Elizabeth Rayne
[Image: Jeffrey Wright in Westworld]

The robots aren’t taking over yet, but sometimes they can get a little out of control.

AI apparently has a bias toward making unethical choices. The tendency spikes in commercial settings, and nobody wants to get scammed by a bot. Some types of artificial intelligence even disproportionately single out particular customers when setting things like insurance prices (yikes). Though there are many potential strategies a program can choose from, it needs to be steered away from the unethical ones. An international team of scientists has now come up with a formula that explains why this happens, and is working to keep computer brains out of a life of crime.

“In an environment in which decisions are increasingly made without human intervention, there is therefore a strong incentive to know under what circumstances AI systems might adopt unethical strategies,” the scientists said in a study recently published in Royal Society Open Science.

Even if unethical strategies make up only a small share of everything an AI program could choose, that doesn't lessen the chance of it picking something shady. Figuring out prices for car insurance can be tricky, since things like past accidents and points on your license have to be factored in. In a world where we sometimes communicate with bots more than with humans, they can be convenient. The problem is that in situations where money is involved, they can do things like apply price-raising penalties you don’t deserve to your insurance policy (of course, anyone would be thrilled if the unlikely opposite happened).

The chance of AI screwing up could mean huge consequences for a company, everything from fines to lawsuits. With thinking robots come robot ethics. You’re probably wondering why unethical choices can’t just be eliminated completely. That would happen in an ideal sci-fi world, but the scientists believe the best that can be done is keeping the percentage of unethical choices as low as possible. Standing in the way is what the team calls the unethical optimization principle.

“If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk,” is how the team describes the principle.
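To get a feel for why that happens, here is a minimal simulation sketch, not taken from the paper itself: the pool size, the 2 percent unethical share, and the small return "edge" given to unethical strategies are all illustrative assumptions. The point is that when the objective only measures return, an optimizer sifting through the whole pool lands on an unethical strategy far more often than its share of the pool would suggest.

```python
import random

# Illustrative assumptions: a pool of 1,000 candidate strategies,
# of which only 2% are unethical.
N_STRATEGIES = 1_000
UNETHICAL_FRACTION = 0.02
UNETHICAL_EDGE = 0.5  # assumed small return boost from cutting corners

def best_strategy_is_unethical() -> bool:
    """Simulate one optimization run: pick the highest-return strategy,
    then report whether that winner happened to be unethical."""
    best_return, best_is_unethical = float("-inf"), False
    for _ in range(N_STRATEGIES):
        unethical = random.random() < UNETHICAL_FRACTION
        # The objective sees only return, so the ethical risk costs nothing:
        # an unethical strategy just looks like a slightly better strategy.
        ret = random.gauss(UNETHICAL_EDGE if unethical else 0.0, 1.0)
        if ret > best_return:
            best_return, best_is_unethical = ret, unethical
    return best_is_unethical

TRIALS = 2_000
picked = sum(best_strategy_is_unethical() for _ in range(TRIALS))
print(f"Unethical share of the pool: {UNETHICAL_FRACTION:.0%}")
print(f"Share of runs where the optimizer picked one: {picked / TRIALS:.0%}")
```

Run a sketch like this and the winner comes up unethical noticeably more often than the 2 percent share of the pool would suggest, which is exactly the disproportion the principle describes. The remedy the quote points to is baking the ethical risk into the objective function itself, so cutting corners stops looking like free return.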

It isn’t that robots are starting to turn evil. The AI doesn’t actually make unethical choices consciously. We’re not at Westworld levels yet, but making bots less likely to choose wrong will help make sure we never get there.