
Mind-controlled robots are almost real — but more helpful than scary

Robots that can be controlled by signals the human brain emits are coming out of sci-fi and into science.

By Elizabeth Rayne

There are always going to be those people who are terrified of robots becoming sentient and taking over the planet. Then there are those who think robots could save the world.

Cyborg armies that can think for themselves aren’t going to come stomping in to the beat of the Imperial March anytime soon. What is coming instead are robots that can be controlled by signals the human brain emits, with no voice or touch control needed. For tetraplegic patients, who cannot move and sometimes cannot even speak, and for others whose severe disabilities limit communication, technology like this could reboot their lives.

We are that much closer to having an actual mind-controlled robot now that researchers have created a computer program that will act as its nerve center. Co-led by Fumiaki Iwane of the Learning Algorithms and Systems Laboratory at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, a team of researchers recently published their advances in Communications Biology. They started with existing technology and fast-forwarded it into the future.

"For the online EEG signal processing, based on the training dataset, we created a decoder which can infer if participants foresee the robot making an error," Iwane said. "The decoding output of this decoder was exploited in an online setup to customize the robot trajectories."

Training the robot took some patience. It needed about an hour to learn how to recognize error signals, with a reward function guiding it toward ideal behavior. When human subjects noticed the robotic arm heading somewhere they didn’t want it to go, their brains flagged those trajectories as errors, which is why a decoder was needed to track how often an error was foreseen. If the decoder didn’t pick up on the brain foreseeing an error, Iwane and his team assumed the human was fine with what the robot was doing, which gave them a picture of what error-free interaction between the two looked like.
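To give a rough sense of what such a decoder might look like, here is a minimal Python sketch. The feature layout, the placeholder data, and the choice of a shrinkage-regularized LDA classifier (a common pick for decoding error-related brain signals) are assumptions for illustration, not the team’s actual pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: flattened EEG epochs time-locked to the robot's actions (training runs)
# y: 1 if the participant perceived the action as an error, 0 otherwise
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32 * 64))  # placeholder: 200 trials, 32 channels x 64 samples
y = rng.integers(0, 2, size=200)         # placeholder labels

decoder = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(decoder, X, y, cv=5).mean())  # offline sanity check
decoder.fit(X, y)

def foresees_error(epoch: np.ndarray) -> bool:
    """Online use: infer from a new epoch whether the brain flagged an error,
    so the controller can adjust the arm's trajectory accordingly."""
    return bool(decoder.predict(epoch.reshape(1, -1))[0])
```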

Using a robotic arm that had previously been developed gave Iwane and his team several advantages. The arm is capable of moving in every direction, shifting objects around, and getting past anything in its way. They wanted to upgrade it with a brain-computer interface (BCI) so it could be controlled to avoid obstacles (though they could have focused on any other task for this experiment). BCIs can learn someone’s preferences and adapt quickly, as well as measure and interpret someone’s neural activity, which is how they pick up on how a person wants to move the artificial limb.

Patients can directly control the robotic arm because a BCI can translate how the brain wants the arm to move. There is one issue: not everyone may be capable of the intense concentration it takes to zap specific messages from neurons to the BCI, and spinal cord injuries often affect the brain itself. While the robot was tested on able-bodied subjects, because the long trials would have put immense strain on people with motor disabilities, there may eventually need to be upgrades that accommodate the absence of certain abilities. Iwane believes the system will have to have some autonomy and be trained to do some tasks without intervention.

"Based on the previous study, we knew [of] the elicitation of specific brain signals, so called error-related potentials, when people perceive the erroneous actions of the external devices," he said. "Such signals can be observed by the negative and positive deflection of the electrophysiological signal with respect to the onset of the erroneous actions."

Robotic limbs have already been taught motion planning: the breakdown of a movement into smaller, specific motions that can make the whole movement more efficient. Part of motion planning is figuring out a trajectory from a dynamical system, a time-dependent system that operates on what is going to change over time and how. Once a target is determined, the arm’s velocity depends on where it is in relation to that target. Motion planning gives the robot a chance to adapt and react in real time.
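A toy version of that idea fits in a few lines: if the commanded velocity is always a function of where the arm is relative to the target, the trajectory is effectively re-planned at every time step, so the arm can react when things change. The linear attractor and the gain here are illustrative assumptions, not the paper’s actual controller:

```python
import numpy as np

def step(position: np.ndarray, target: np.ndarray,
         k: float = 2.0, dt: float = 0.01) -> np.ndarray:
    velocity = k * (target - position)  # velocity depends on distance to target
    return position + velocity * dt     # integrate one time step

pos, goal = np.zeros(3), np.array([0.3, 0.1, 0.4])
for _ in range(500):
    pos = step(pos, goal)               # replanning happens every step, so a
                                        # target moved mid-motion is handled online
print(np.round(pos, 3))                 # converges toward the goal
```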

Another advantage is that, if the robot makes a mistake, it doesn’t need a patient to physically redirect it, which would not be possible for most people with motor disabilities. The arm itself can tell it went wrong from human error-related potentials, the signals the brain sends when actions and expectations have not matched up and it is making sense of an error. The BCI proved to be a fast learner so long as it learned in increments, and it also succeeded at placing and replacing objects while avoiding obstacles.
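The incremental flavor of that learning can be sketched with an online classifier that updates as new labeled brain signals stream in, rather than being retrained from scratch. scikit-learn’s SGDClassifier and its partial_fit method are standard tools; using them this way is an assumption about how such a setup could work, not the study’s method:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

decoder = SGDClassifier(loss="log_loss")  # logistic-regression-style online learner
classes = np.array([0, 1])                # 1 = error perceived, 0 = not

rng = np.random.default_rng(2)
for _ in range(50):                           # stream of incoming EEG epochs
    X_batch = rng.standard_normal((4, 128))   # placeholder features per epoch
    y_batch = rng.integers(0, 2, size=4)      # placeholder labels
    decoder.partial_fit(X_batch, y_batch, classes=classes)  # incremental update

print(decoder.predict(rng.standard_normal((1, 128))))  # classify a fresh epoch
```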

Still, there are obstacles to overcome as the researchers keep developing the technology and consider other applications, such as a mind-controlled wheelchair.

"When using a mind-controlled wheelchair, unlike a lab setting, there are continuously changing environments and pedestrians," said Iwane. "Thus, both error detection and extraction of the reward function without risks to the user may be more difficult."

So you can sigh in relief that robot armies won’t be staging a takeover anytime soon, but with a little help from the human brain, robots could take the dis out of disability.