Could You Control Another Person with Your Mind, Like in Gamer?
The better question is, "Should you?"
In the not-so-distant future, inventor and programmer Ken Castle (Michael C. Hall) develops artificial “nanocells” capable of replacing brain tissue. A single nanocell is implanted in the motor cortex, where it replicates and replaces the cells around it. The artificial cells do everything their predecessors did, with one critical addition: they allow remote access and control from an outside source.
That’s the setup to the 2009 sci-fi action flick Gamer (streaming now on Peacock). People who have elected to have their brain tissue replaced can be controlled like playable characters in video games of flesh and blood. The first offering was the wildly popular Society, a sort of fictional extension of The Sims or Second Life. But Castle’s horrifying entertainment renaissance culminates with the creation of Slayers, a first-person shooter populated with real people and resulting in real deaths.
Characters in the game, like the infamous Kable (Gerard Butler), are death row inmates offered the option to play instead of serving out their sentences. Survive 30 matches and you go free. The ethics on display in Gamer are wholly despicable and (hopefully) unrealistic. The underlying science, though, the possibility of remote mind control using brain-machine interfaces, is a little more real than we might be comfortable with.
Brains and Machines Speak the Same Language: Electricity
In July of 1924, a 17-year-old boy underwent a brain operation during which psychiatrist Hans Berger measured the electrical activity of his brain, recording the world’s first human EEG. Since then, entire fields of research have sprung up trying to understand and decode the electrical signals inside our minds.
As humanity entered the computer age, the prospect of interfacing the electrical capabilities of our minds with those of our machines became increasingly of interest. Over the last couple of decades, dozens of studies have demonstrated the flexibility of the brain and an ability to adapt and interpret artificial, external electrical signals.
Just before the turn of the millennium, researchers used an electrode array implanted in the lateral geniculate nucleus (LGN) of cats’ brains to reconstruct what the animals were seeing through their eyes. By measuring the activity of 177 cells in the LGN and feeding that data into a decoding algorithm, the researchers reproduced recognizable versions of the images the cats had seen.
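The decoding step can be sketched in a few lines. What follows is a toy linear decoder in the spirit of that study, not the authors' actual algorithm; every number besides the 177 cells is invented for illustration.

```python
import numpy as np

# Toy sketch of linear stimulus decoding: learn a linear map from the
# firing rates of a population of cells back to the pixels of the image
# that evoked them. The simulated responses and image sizes below are
# made up; only the 177-cell count comes from the study described above.
rng = np.random.default_rng(0)

n_cells, n_pixels, n_trials = 177, 64, 2000

# Pretend each cell responds linearly (plus a little noise) to the image.
true_filters = rng.normal(size=(n_cells, n_pixels))
images = rng.normal(size=(n_trials, n_pixels))
rates = images @ true_filters.T + 0.1 * rng.normal(size=(n_trials, n_cells))

# Fit the decoder: a least-squares map from firing rates back to pixels.
decoder, *_ = np.linalg.lstsq(rates, images, rcond=None)

# Reconstruct a held-out image from the rates it evokes.
test_image = rng.normal(size=n_pixels)
test_rates = test_image @ true_filters.T
reconstruction = test_rates @ decoder

# How close did we get? (Near-perfect in this idealized linear setting.)
correlation = np.corrcoef(test_image, reconstruction)[0, 1]
```

The sketch works because the simulated cells respond (roughly) linearly to the stimulus, so a linear map can be inverted; real neural responses are messier, which is why the actual reconstructions were recognizable rather than pixel-perfect.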
Later, researchers used similar brain-machine interfaces (BMIs) not only to decode brain activity but also to translate it into external motion. Put simply, they taught monkeys to control a prosthetic and retrieve a reward using only their thoughts. There are obvious medical applications for BMIs, which allow users to sidestep physical limitations by going directly to the source.
The above examples all capture signals from the brain and decode them in novel ways, but BMIs can also take external stimuli and feed them to the brain. Visual signals can be captured by an external camera, for instance, digitized, and sent to the brain’s visual cortex. Multiple trials have demonstrated that such a system can restore some level of sight to non-sighted folks.
Are Two Heads Better Than One? Brain-to-Brain Interfaces
It’s clear at this point that it’s possible to augment the mind through connected technologies, but that’s not the same as controlling a person remotely like a video game player. For that, we’re going to need a brain-to-brain interface (BBI) capable of collecting brain activity, decoding it, and transmitting it to another mind.
A 2019 study published in Scientific Reports demonstrated the world’s first BBI connecting not two but three people in a collaborative game. Two participants were designated Senders while the third was the Receiver, and all three worked together to play a game similar to Tetris. Non-invasive EEG caps read the participants’ brain activity, and signals were delivered to the Receiver’s brain via transcranial magnetic stimulation.
The Receiver was all alone and couldn’t see the game in action. The only information they had was the thoughts of their collaborators, transmitted over the internet directly to their brain. Despite those limitations, the Receiver was responsible for actually rotating blocks in the game as they appeared.
Meanwhile, the Senders could see the game and know whether a block needed to be rotated, but they weren’t at the controls. Instead, the Senders would think about rotating a block or leaving it alone and the Receiver would interpret those thoughts and take action. The Senders could then see the choice made by the Receiver and send feedback to correct a wrong choice if necessary. On average, each group of three had an accuracy of 81.25%.
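The round structure described above, two noisy one-bit channels plus a feedback correction, is easy to simulate. The 90% per-bit channel accuracy below is an assumption for illustration, not a figure from the study.

```python
import random

# Toy simulation of the protocol described above: two Senders each
# transmit one noisy bit ("rotate" or "don't rotate"), the Receiver acts
# on what arrives, and a feedback round lets the Senders flag a wrong
# choice so the Receiver can correct it. The channel accuracy is invented.
random.seed(42)

CHANNEL_ACCURACY = 0.90  # assumed chance a transmitted bit arrives intact

def send(bit: bool) -> bool:
    """One noisy brain-to-brain transmission of a single bit."""
    return bit if random.random() < CHANNEL_ACCURACY else not bit

def play_round(correct_choice: bool) -> bool:
    # Round 1: both Senders transmit the same intention over noisy channels.
    a, b = send(correct_choice), send(correct_choice)
    # The Receiver trusts the Senders when they agree, guesses otherwise.
    decision = a if a == b else random.random() < 0.5
    # Feedback round: Senders see the choice and signal a correction,
    # which itself has to survive the noisy channel.
    if decision != correct_choice and random.random() < CHANNEL_ACCURACY:
        decision = correct_choice
    return decision == correct_choice

trials = 10_000
wins = sum(play_round(random.random() < 0.5) for _ in range(trials))
accuracy = wins / trials  # redundancy plus feedback beats the raw channel
```

The point of the sketch is that redundancy (two Senders) and a feedback round push group accuracy well above the reliability of any single transmission, which is why a collaborative BBI can work even over noisy non-invasive channels.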
A separate 2019 study took the final step by demonstrating that a human being can directly control the movements of a cyborg rat. The human participant (we’ll call them the Player) used a non-invasive, EEG-based BMI that recorded and transmitted their brain activity. The Playable Character (the rat) had a more invasive array of microelectrodes implanted in its brain, not unlike Ken Castle’s fictional nanocells.
The Player would think about moving their right or left arm, and those thoughts were translated into commands to turn right or left. Then, the electrodes carried out those commands in the living body of the rat through direct electrical stimulation of its brain. In experiments, the rats successfully navigated complex mazes under the complete control of the human Player. It’s a long way from running rats through mazes to playing Mario Bros. with actual plumbers, or a first-person shooter with death row inmates, but it’s also shockingly close.
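However involved the engineering underneath, the control mapping itself is conceptually simple. Here is a minimal sketch, with invented signal names and thresholds, of turning an imagined arm movement into a turn command:

```python
# Minimal sketch of the control loop described above, with invented
# feature names: classify which arm the Player imagines moving from a
# lateralized EEG feature, then issue the matching turn command. Real
# motor-imagery decoders are far more sophisticated than this.

def classify_imagery(left_power: float, right_power: float) -> str:
    """Motor imagery suppresses rhythmic power over the OPPOSITE
    hemisphere, so imagining the right arm lowers left-sensor power."""
    return "right" if left_power < right_power else "left"

def to_command(imagined_arm: str) -> str:
    """Map the decoded intention to a stimulation command for the rat."""
    return {"left": "TURN_LEFT", "right": "TURN_RIGHT"}[imagined_arm]

# e.g. left-hemisphere power drops, so the Player imagined the right arm
command = to_command(classify_imagery(left_power=3.1, right_power=8.4))
```

A real system runs this loop continuously, streaming EEG features in and stimulation commands out, but the left-arm/right-arm to left-turn/right-turn mapping is exactly this kind of dictionary.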
Catch Gamer streaming now on Peacock. And remember, it’s not supposed to be an instruction manual.