AI is getting eerily closer to Data from Star Trek: TNG now that it knows whether or not you can trust it

By Elizabeth Rayne

It might not be as self-aware as Data from Star Trek: TNG (yet), especially since that android could take better care of a cat than some humans, but AI has now reached the point of realizing when it isn't trustworthy.

A technique now called "deep evidential regression" has leveled up the self-awareness of AI. A network trained this way knows when it has a higher chance of making a prediction error, because it evaluates the reliability of the data it is looking at. Predictions grounded in thorough, accurate data are more likely to work out; sparse or noisy data means things will probably go wrong, and the AI can sense that. Its estimate of how certain it is rises and falls with the data it is fed, and it can determine risk or uncertainty with 99% accuracy.
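For the technically curious: in deep evidential regression, the network doesn't just spit out an answer, it also outputs the parameters of a distribution describing how much evidence supports that answer, and the uncertainty falls out of those parameters. The sketch below (written in PyTorch with illustrative names, not the team's actual code) shows roughly what such an evidential output layer and its two flavors of uncertainty, noise in the data versus doubt in the model, might look like.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Maps features to the four evidential parameters (gamma, nu, alpha, beta)
    used in deep evidential regression. Illustrative sketch, not the study's code."""
    def __init__(self, in_features):
        super().__init__()
        self.fc = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, log_nu, log_alpha, log_beta = self.fc(x).chunk(4, dim=-1)
        nu = F.softplus(log_nu)             # amount of evidence, must be > 0
        alpha = F.softplus(log_alpha) + 1   # must be > 1
        beta = F.softplus(log_beta)         # must be > 0
        return gamma, nu, alpha, beta       # gamma is the prediction itself

def uncertainties(nu, alpha, beta):
    # Two kinds of uncertainty: noise inherent in the data (aleatoric)
    # and the model's doubt about its own prediction (epistemic).
    aleatoric = beta / (alpha - 1)
    epistemic = beta / (nu * (alpha - 1))
    return aleatoric, epistemic
```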

It seems that even Picard would be impressed, but wait. There is one drawback to self-aware robots: 99% is not full certainty, no matter how close it is. Being off by just 1% could mean disaster in potentially life-threatening scenarios, from driving an autonomous car to performing surgery. Scary.

“While [deep evidential regression] presents several advantages over existing approaches, its primary limitations are in tuning the regularization coefficient and in effectively removing non-misleading evidence when calibrating the uncertainty,” said MIT Ph.D. student Alexander Amini, who led a study that he will present at next month’s NeurIPS conference.

What Amini and his team have managed to do is still pretty remarkable. Before this, using AI to estimate uncertainty was not only expensive but much too slow for decisions that need to be made in fractions of a second. Neural networks can be so immense that computing an answer takes ages, and waiting even longer for a confidence level on top of that was rarely worth the effort. Something like that would be pointless in a self-driving car that needs to know which turn to make right away. Deep evidential regression fast-forwards the process: the neural network only needs to run once to produce both its answer and its level of uncertainty.
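To see why a single pass matters, compare it with a sampling-based approach such as Monte Carlo dropout, where the same network has to run dozens or hundreds of times just to measure the spread of its own answers. The toy contrast below (an illustration, not the study's setup) makes the cost difference concrete.

```python
import torch
import torch.nn as nn

# A toy regressor with dropout, standing in for a much larger network.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1))
x = torch.randn(1, 16)

# Sampling-based uncertainty: keep dropout active and run the network many
# times; the spread of the answers is the uncertainty. Slow for big networks.
model.train()
samples = torch.stack([model(x) for _ in range(100)])
print("many-pass estimate:", samples.mean().item(), "+/-", samples.std().item())

# Deep evidential regression instead has the network report its confidence
# directly (see the EvidentialHead sketch above), so one forward pass is enough.
```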

By estimating the uncertainty in a model it has already learned, the AI can tell us approximately how wide its margin for error is. It backs that estimate up with evidence: uncertainty lurking in the data the neural network just analyzed, and uncertainty in how confident the network is in its own decision. Amini and his team tested the deep evidential regression method by training the AI to estimate the depth of each pixel in an image. Depth perception could mean life or death in a surgery to remove a tumor located deep inside the body and difficult to see otherwise.
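In a setting like that, the per-pixel uncertainty map is the safety net: pixels the network isn't sure about can be flagged before anyone acts on them. Here's a hypothetical version of that flagging step, with random tensors standing in for a real depth network's output.

```python
import torch

# Stand-ins for what an evidential depth network would produce:
# a predicted depth map and a per-pixel epistemic uncertainty map.
depth = torch.rand(1, 1, 240, 320)
epistemic = torch.rand(1, 1, 240, 320)

# Flag pixels whose uncertainty is unusually high so a downstream system
# (a surgical tool, a self-driving car) can treat them with extra caution.
threshold = epistemic.mean() + 2 * epistemic.std()
unreliable = epistemic > threshold
print(f"{unreliable.float().mean().item():.1%} of pixels flagged as low-confidence")
```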

The AI was mostly accurate, but it did mess up once it was fed images it found harder to process. At least it was consistent about one thing: when faced with images that gave it difficulty, it reported its uncertainty without fail. That self-reported margin for error can at least teach researchers how to improve the model. The AI's ability to recognize pictures that had been Photoshopped also opens up the possibility of recognizing deepfakes. Humans just need to be aware that this robot brain is still fallible, and we can't trust it any more than it can trust itself.

“We believe further investigation is warranted to discover alternative ways to remove non-misleading evidence,” Amini said.

Meaning, AI that thinks using deep evidential regression is pretty reliable, so long as the outcome of a wrong answer wouldn't be lethal.