Do you have an almost psychic way of reading people, or can you never make out what someone might be thinking? What about how much you should trust them?
Maybe the question really should be whether you would trust someone who doesn’t even exist. That might sound ridiculous, but ask anyone who was a test subject for an experiment that gauged just how trustworthy random human faces seemed. The thing is that not all of them were actually human. Some were AI-generated deepfakes. These went far beyond the uncanny valley and into the realm of looking almost too much like actual photos of human beings.
Researchers Sophie Nightingale, a psychologist at Lancaster University, and Hany Farid, a computer scientist at UC Berkeley, wanted to find out whether perceived trustworthiness would help people distinguish deepfakes from real faces. They recently published their findings in PNAS. What is even scarier than trusting a face that belongs to no one is that the subjects actually rated the deepfakes as more trustworthy. This might be because the imaginary faces appeared more…average.
“While we can’t say for sure why the synthetic faces are rated more trustworthy, we hypothesize that this is because synthesized faces tend to look more like average faces,” Nightingale told SYFY WIRE. “The synthesis techniques favor an average face.”
It seems that people gravitate towards faces that look more average or “typical,” whatever that means (depending on who you ask), because there is a sense of familiarity associated with them. Mashups of features that do not appear unique to any one person may seem familiar because they don’t particularly remind you of anyone. Farid and Nightingale made sure that the fakes were balanced across races, genders, ages, and just in terms of overall appearance. If nothing stands out to make a face suspicious, maybe you will trust something synthetic.
Facial expressions did have something to do with trustworthiness. It shouldn't be much of a surprise that smiling faces received higher ratings, and smiles were more common in the image sets subjects looked at, but the smiling deepfakes still scored higher when it came down to how much someone would potentially trust them. This is troubling when you consider that AI has now reached the point where it can create images so convincingly real that the same tech behind these random faces could easily be used to blackmail or falsely accuse someone.
“The reason why synthesis engines can produce highly realistic images (and audio and video) is because they use generative adversarial networks (GANs) that pit two AI systems against each other,” said Nightingale.
GANs train a neural network known as the generator by pitting it against another neural network, the discriminator. The generator has to assemble something that looks like a human face out of random pixels. Each version it produces is then scrutinized by the discriminator, which compares the generator's output to real faces and penalizes it for inaccuracies. This process repeats over and over until the discriminator can no longer tell the difference between an image of a real human face and a deepfake.
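That adversarial loop can be sketched in miniature. The toy below is not an image-scale deepfake engine; it is a one-dimensional GAN in plain NumPy, written as an assumption-laden illustration of the idea: the generator is a single linear map from noise to samples, the discriminator is a logistic classifier, and the two take turns updating against each other, with gradients derived by hand for this tiny model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data stands in for real face photos: samples from N(4, 1.25).
def real_batch(n):
    return rng.normal(4.0, 1.25, size=n)

# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) estimates P(x is real).
w, c = 0.1, 0.0

lr = 0.01
for step in range(2000):
    n = 64
    z = rng.normal(size=n)
    fake = a * z + b
    real = real_batch(n)

    # Discriminator step: ascend  mean log d(real) + mean log(1 - d(fake)).
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    gw = np.mean((1 - p_real) * real) + np.mean(-p_fake * fake)
    gc = np.mean(1 - p_real) + np.mean(-p_fake)
    w += lr * gw
    c += lr * gc

    # Generator step: ascend  mean log d(fake)  (non-saturating GAN loss),
    # i.e. the generator is "penalized" whenever the discriminator
    # confidently flags its output as fake.
    fake = a * z + b
    p_fake = sigmoid(w * fake + c)
    ga = np.mean((1 - p_fake) * w * z)
    gb = np.mean((1 - p_fake) * w)
    a += lr * ga
    b += lr * gb

# After training, the generator's samples drift toward the real distribution.
samples = a * rng.normal(size=1000) + b
print(f"generator output mean after training: {samples.mean():.2f} (real mean is 4.0)")
```

Real deepfake GANs follow the same pattern, just with deep convolutional networks producing pixels instead of a single line producing numbers, which is why the arms race ends only when the discriminator is fooled.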
In a previous study, deepfakes were found to have one telltale glitch, if you could spot it: their pupils. Human pupils are round, while the pupils of deepfakes can be warped or irregular. But even real pupils aren't always easy to see; depending on lighting and eye color, their shape may not be visible at all. Earlier versions of GAN software were especially prone to these mistakes. Though Nightingale did not focus on pupils specifically, they probably didn't factor much into the test subjects' opinions.
“Because physiological and environmental variables will impact the size of the pupil, it is unlikely we can use pupil size as a reliable cue,” she said.
As if the internet wasn’t already dangerous enough, now it’s messing with reality.