In 1982, Steven Lisberger and Walt Disney Studios introduced us to Tron and the world of the Grid. The tale follows Kevin Flynn (Jeff Bridges), a software engineer previously employed by ENCOM. An attempt to prove the theft of his intellectual property, a series of video games, quickly turns into a sci-fi romp through digital space.
After an experimental laser goes off, Flynn is digitized, broken into his constituent parts, and pulled into a virtual world. Once inside, he discovers an artificial space populated not by ones and zeroes but by fully formed, aware persons in place of programs.
The idea of a digital reality separate from our own was not unique to Lisberger. It hearkens back to written works by Bradbury (The Veldt) and various stories by Philip K. Dick, among others. William Gibson’s Neuromancer would hit shelves two years later and help define cyberpunk as a genre.
Tron was beaten to the screen by a tale of simulated reality, albeit a more physical one, in Michael Crichton’s Westworld, now adapted into a hit series on HBO. There is little question, however, that Tron was the earliest popular work to bring the idea of simulated realities to the masses. The film would go on to attain cult status and spawn a franchise that includes comic books, video games, an animated series, and a live-action sequel.
The idea would, of course, reach its natural conclusion in suggesting that we might ourselves be living inside a simulation without knowing it, as represented in the now-quintessential example of cyberpunk cinema, The Matrix trilogy. The Matrix suggests that humanity has been thrust into a false existence by nefarious entities existing in the real world and asks the question, “If we are living in a simulation, how would we know?”
The argument over the nature of our reality is one that has been raging for centuries, going back at least to the time of the Greeks and Plato’s Allegory of the Cave. In Plato’s allegory, prisoners shackled within a cave are shown only the shadows cast by puppets they cannot perceive. He suggests that a prisoner within the cave might perceive the shadow of a book and call it a book, might even understand it as a book, when in truth he sees only a shadow. Only by becoming unshackled, by turning around and seeing the truth of the thing, can he really know what a book is.
How could we, if we were in a simulation, tell the difference between books and shadows?
In 2003, Nick Bostrom, a philosopher at the University of Oxford, proposed a trilemma that’s come to be known as The Simulation Argument in a paper entitled “Are You Living in a Computer Simulation?,” published in Philosophical Quarterly. Bostrom suggests that one of three propositions is true: either humanity is very likely to go extinct before reaching a “posthuman” stage (one wherein we are capable of running complex simulations); any posthuman civilization is unlikely, for whatever reason, to run simulations; or we are almost certainly living in a simulation right now.
In simple terms, the idea is that a reasonable extrapolation of modern computational advancement suggests that in the relatively near future we will have the ability to simulate our own reality. If we accept that future, an indistinguishable simulation would give rise to aware entities like ourselves. Those entities would in turn create their own simulations nested within the initial simulation. The result would be a vast number of nested realities, making it statistically unlikely that we inhabit the “prime reality.” Therefore, either we never reach a point where we’re capable of creating passable simulations of reality, or we’re likely living in one now.
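The counting behind that conclusion can be made concrete with a toy sketch. This is not Bostrom’s formal argument, just an illustration under two loudly labeled assumptions: every reality hosts an equally sized population of observers, and each reality spawns the same number of nested simulations. The names `sims_per_reality` and `depth` are hypothetical parameters chosen for the example.

```python
def fraction_in_prime_reality(sims_per_reality: int, depth: int) -> float:
    """Fraction of all observers who inhabit the un-simulated base reality,
    assuming every reality contains one population of equal size and
    spawns `sims_per_reality` simulations, nested `depth` levels deep."""
    # The number of realities at nesting level k is sims_per_reality ** k,
    # so the base reality is 1 out of the sum over all levels.
    total_realities = sum(sims_per_reality ** k for k in range(depth + 1))
    return 1 / total_realities


# Even modest assumptions make the prime reality a statistical long shot:
# 10 simulations per reality, 3 levels deep, gives 1,111 realities in all.
print(fraction_in_prime_reality(10, 3))  # 1/1111, roughly 0.0009
```

As the nesting deepens, the fraction shrinks geometrically, which is the intuition behind the “almost certainly living in a simulation” horn of the trilemma.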
This line of thinking, while seemingly sound, is not without its detractors. The simulation hypothesis is contingent on the idea that consciousness is a matter of computation, something that can be replicated by machines. There is, as yet, no empirical data to support that idea. The only minds we know of, with quantifiable consciousness, are biological. Our best computers, as advanced as they may be, seem to lack whatever spark it is we possess.
In Consciousness Explained, Daniel Dennett calls consciousness “just about the last surviving mystery.” While we don’t yet have the answers to many questions, Dennett suggests that the mystery of consciousness is in a world of its own, arguing that not only do we not have the answers, but we also lack the ability to even know how to ask the right questions.
In the seventeenth century, René Descartes made the now-famous statement “Cogito ergo sum,” or “I think, therefore I am”: the idea that thinking, in itself, suggests there is something or someone there to do the thinking. That notion has held for centuries in proving, at least to the individual, that consciousness exists. But it does nothing by way of showing how or why consciousness exists. We won’t bother ourselves with the why, but we can ask a few questions and tickle at a few answers as to how.
We’re pretty good at figuring out how the brain processes certain types of information. We have a decent understanding of what parts of the brain are responsible for certain body functions or for processing certain types of information. Where we’re less well versed is in how we process all of that information into subjective experience.
There are several hypotheses regarding the nature of consciousness, some of which, full disclosure, suggest that no simulation, however complete, could ever replicate real experience. Others, however, suggest that consciousness is emergent, a property of complex calculations made in the brain.
“Almost everyone agrees that there will be very strong correlations between what's in the brain and consciousness," said David Chalmers, philosophy professor at the Centre for Consciousness at the Australian National University.
This suggests that consciousness is a quantifiable and calculable thing, potentially replicable by machine intelligences. And if we are, in the end, simply biological machines, then our experience can potentially be simulated by sufficiently complex programs. This leaves room, however small, for the simulation hypothesis.
How might we know, for sure?
A recent panel at the American Museum of Natural History discussed this very question. Moderator Neil deGrasse Tyson, director of the museum’s Hayden Planetarium and well-known science popularizer, fielded questions and suggested that the likelihood of our living in a simulation might be high.
John D. Barrow, professor of mathematical sciences at Cambridge University, has suggested that a simulation would contain detectable glitches. The idea is that a simulation so large would require complex calculations that, when observed at close range, might reveal their true nature.
The trouble is, if we are living in a simulation, any results we might discover could be equally simulated by whatever intelligence has caged us. In the end, if our reality is indistinguishable from a “real” existence, then we’ve no reason to worry. Carry on, you digital wonders.