
Inception's dream-reading technology is becoming reality

By Cassidy Ward

In 2010, Christopher Nolan invaded our dreams. Following the success of his first two Batman films, Batman Begins and The Dark Knight, Nolan set out to complete a film he’d been itching to make for nearly a decade. The result was the mind-bending and star-studded thriller Inception.

The protagonist, Dominick Cobb (Leonardo DiCaprio), is a thief who makes his living stealing corporate secrets by invading a target’s dreams. Our dreams are meant to be safe, untouchable. They are constructed of our most private thoughts, unbidden even by our own conscious selves. To invade them is to invade the most sanctified halls of our minds. If we’re not safe in our dreams, where are we safe?

Thankfully, our private internal narratives, waking and dreaming alike, remain safe. But scientists are working to change that.

DECODING THOUGHT

It’s true that our thoughts are silent, inaccessible to anyone but ourselves unless translated through some physical medium: speech, writing, or art. Even then, something is lost in translation. Often we struggle to find the right words to express our internal language. And then, of course, people lie. What’s expressed physically does not always accurately represent what’s going on inside.

The brain-body barrier, while integral to every facet of our existence, has remained an impassable wall for as long as we’ve existed. Science, knowing no uncrossable horizon, seeks to tear it down.

Despite the (so far) mostly unknown mechanics of the brain’s inner workings, we know that thoughts do have a physical counterpart. They aren’t just abstractions fluttering through our minds like wisps of smoke on the wind. There is electrical activity and connectivity between various regions of the brain; some physical communication is happening, and it results in what we perceive as thought.

Since at least 2005, neuroscientists have been working to unravel the physical brain activity associated with thought. That’s when Yukiyasu Kamitani and Frank Tong published a study in the journal Nature Neuroscience showing that simple visual experiences could be decoded from brain activity collected with functional MRI (fMRI).

Their study used awake subjects, and its focus was far simpler than complex thought. By combining fMRI scans with a machine learning algorithm, Kamitani and Tong were able to decipher the orientation of the visual pattern a person was looking at. It was a simple start, but it proved the concept: brain activity can be measured and translated for an external observer.
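To get a feel for the approach, here’s a minimal sketch in Python. It is not Kamitani and Tong’s actual pipeline: the data are synthetic stand-ins for fMRI recordings, and the classifier is an off-the-shelf scikit-learn model. But the core trick is the same, pooling many weakly informative voxels into one confident prediction of what the subject was viewing.

```python
# Minimal sketch of stimulus decoding from voxel patterns.
# Synthetic data stands in for real fMRI recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500

# Eight possible stimulus orientations, one label per trial.
orientations = rng.integers(0, 8, size=n_trials)

# Pretend each orientation weakly biases a fixed pattern of voxels,
# buried in noise, loosely mimicking weak orientation tuning.
signatures = rng.normal(size=(8, n_voxels))
X = signatures[orientations] * 0.3 + rng.normal(size=(n_trials, n_voxels))

# A linear classifier pools those weak signals into a prediction of
# which stimulus the subject was viewing.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, orientations, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1/8:.2f})")
```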

The next step was taking this technology and advancing it to the point where it could decode more complex thoughts.

Scientists at Carnegie Mellon University used similar techniques to take the next step. Led by Marcel Just, a professor of psychology at Carnegie Mellon, the team found that complex thoughts are constructed from an "alphabet" of 42 components. These components include things like "person," "size," and "action." The brain combines these components, in various configurations, to make up complex thoughts.

By identifying these components and their associated brain activity, and training an algorithm on both, Just’s team was able to match 240 complex sentences with the neural activity they evoke. And the process works both ways: the system can predict the neural activity that will accompany a sentence it’s never seen before, and it can decode neural activity into the semantic content of a sentence. It achieved an accuracy of 86 percent when tested against sentences the algorithm had not previously encountered.
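A hedged sketch of the underlying idea: represent each sentence as a vector of semantic components, then learn linear maps in both directions between those vectors and brain activity. The dimensions, the ridge-regression models, and the correlation-based ranking below are illustrative assumptions on synthetic data, not the CMU team’s published method.

```python
# Sketch: a bidirectional linear mapping between semantic feature
# vectors (the "alphabet" components) and brain activity patterns.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_sentences, n_features, n_voxels = 240, 42, 300

# One 42-component semantic vector per sentence (synthetic here).
semantic = rng.normal(size=(n_sentences, n_features))
true_map = rng.normal(size=(n_features, n_voxels))
activity = semantic @ true_map + rng.normal(scale=0.5, size=(n_sentences, n_voxels))

# Hold one sentence out and fit linear maps in both directions.
test = 0
train = np.arange(n_sentences) != test
encoder = Ridge().fit(semantic[train], activity[train])  # semantics -> activity
decoder = Ridge().fit(activity[train], semantic[train])  # activity -> semantics

# Direction 1: predict the neural activity of a never-seen sentence.
predicted_activity = encoder.predict(semantic[[test]])
enc_corr = np.corrcoef(predicted_activity[0], activity[test])[0, 1]
print(f"predicted-vs-actual activity correlation: {enc_corr:.2f}")

# Direction 2: decode semantics from activity, then identify the
# held-out sentence by ranking all 240 candidates by correlation.
predicted_semantics = decoder.predict(activity[[test]])
corr = [np.corrcoef(predicted_semantics[0], s)[0, 1] for s in semantic]
print("held-out sentence ranked first:", int(np.argmax(corr)) == test)
```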

This technology offers a sort of textual representation of human thought. One can imagine a ticker tape, constantly unspooling, reading out the unceasing thoughts of a person’s mind. But thoughts are more than words.

Research from Harvard’s Psychology Department has shown that visual and verbal thinking often bleed into each other. Even when we’re thinking primarily in words, a visual component usually accompanies them. You might be rehearsing the words you’ll say during an upcoming job interview, but you see yourself sitting in the chair, too. You imagine the person sitting across from you. Decoding those visual elements is a necessary step in truly bringing our thoughts out of our minds.

Enter, again, Yukiyasu Kamitani, this time at Kyoto University. The same fMRI technology from the earlier studies measured blood flow and mapped participants’ brain activity as they viewed more than 1,000 images. Then Kamitani and his team built a deep neural network, an artificial intelligence, to act as a stand-in for actual participants. They repeated the process, refining the algorithm until it could more accurately interpret brain activity and translate it into an image.

The end result is a series of “paintings” made by software, representing the imagery in a person’s mind.
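As a rough illustration of the decoding step, the sketch below trains a linear model to reconstruct tiny synthetic “images” from simulated brain activity. The real system decodes deep neural network features and then optimizes an image to match them; this toy version decodes pixels directly, which is one reason its outputs, like the study’s, come out blurry and distorted.

```python
# Toy sketch: reconstructing a small image from (simulated) brain
# activity with a linear decoder.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_images, n_voxels, side = 1000, 400, 8  # tiny 8x8 "images"

images = rng.random((n_images, side * side))
true_map = rng.normal(size=(side * side, n_voxels))
activity = images @ true_map + rng.normal(size=(n_images, n_voxels))

# Train a linear decoder on most images, then reconstruct a held-out
# image from its brain activity alone.
decoder = Ridge(alpha=10.0).fit(activity[:-1], images[:-1])
reconstruction = decoder.predict(activity[[-1]])[0].reshape(side, side)

# The reconstruction resembles the original but is blurred and noisy,
# much like the "almost-recognizable" images the article describes.
error = np.abs(reconstruction.ravel() - images[-1]).mean()
print(f"mean absolute pixel error: {error:.3f}")
```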

The study’s reconstructed images aren’t perfect, and they don’t precisely match the image being seen or imagined, but they’re close. They look like the sort of thing you might see in a nightmare world: almost-recognizable features, grossly distorted.

Despite the lack of fidelity, it’s clear the machine learning system is capturing thought and producing an image that at least vaguely resembles the real thing. The problem is not in the theory but in the execution, and we have every reason to believe the execution will only improve over time.

DECODING DREAMS

We’ve laid the groundwork for interpreting thought and bringing it out of the mind’s lockbox and into the world of external interpretation. Now, in order to realize the vision Nolan put on screen, we need to take these processes and apply them to dreaming participants.

The primary problem, besides the low fidelity of thought translation output, is that brain activity is different when you’re asleep than when you’re awake. Waking participants can actively contribute to the study; they can provide insight into what they are seeing or imagining, or those details can be dictated by researchers. Not so with sleeping participants.

In this scenario, scientists are operating from behind a veil and are dependent on what a participant can remember upon waking. Further, some regions of the brain experience lower or different levels of activity during sleep. The methods that work while an individual is awake don’t necessarily translate directly to a sleeping brain state.

Still, machine learning systems have been able to glean some data from sleeping participants. Most of our dreams occur during REM sleep, a stage that first arrives an hour or more after we drift off and recurs throughout the night. But some dream imagery occurs shortly after we fall asleep, and this is what the researchers attempted to capture.

Participants fell asleep inside functional MRI machines, were woken, and were asked to describe what they’d seen in their dreams. This process was repeated hundreds of times in order to establish a baseline for the algorithm. The contents of these brief dreams were broken down into 20 common classes, and each class was assigned images from the internet matching the concept. Participants were also shown these images while awake, so researchers could compare the resulting brain activity with what was recorded during sleep.

Once this baseline was established, the algorithm analyzed brain activity recorded while participants slept and matched it with images from the web. Upon waking, participants were again asked to describe what they’d dreamed. The algorithm correctly predicted the basic contents of their dreams with 60 percent accuracy, better than could be achieved by pure chance.
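One way to picture that final step is as a 20-way classification problem: the input is the recorded brain activity, and the label is the reported dream-content class. The sketch below does exactly that on synthetic data; the real study’s evaluation protocol differed in its details, so the numbers are illustrative only.

```python
# Sketch: classifying sleep-onset brain activity into one of 20
# dream-content classes, on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_awakenings, n_voxels, n_classes = 400, 300, 20

# The label for each awakening is the dream content the participant
# reported after being woken.
labels = rng.integers(0, n_classes, size=n_awakenings)
signatures = rng.normal(size=(n_classes, n_voxels))
X = signatures[labels] * 0.4 + rng.normal(size=(n_awakenings, n_voxels))

# Classify each awakening and compare against the 1-in-20 chance rate.
clf = LogisticRegression(max_iter=2000)
accuracy = cross_val_score(clf, X, labels, cv=5).mean()
print(f"accuracy: {accuracy:.2f} (chance = {1/n_classes:.2f})")
```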

The causes of this margin of error are difficult to pin down. It’s likely that some of the errors, perhaps even most, result from the computer system’s imperfect interpretation of the data. Some, though, could be due to our famous inability to correctly recall our own dreams upon waking.

Our attempts to parse the activity going on in our minds, and accurately interpret it, are still in their nascent period, but these studies show that the information is there, inside each of us, just waiting to be unlocked. Hopefully, not by Leonardo DiCaprio and his band of dream thieves.