
Science Behind the Fiction: How far are we from Westworld-style androids?

Apr 23, 2018

Do androids dream of violent revolution against their human creators and handlers? That's one of the questions HBO's Westworld is trying to figure out. Based on the 1973 film of the same name (written and directed by Jurassic Park author Michael Crichton), the hit sci-fi series from Jonathan "Jonah" Nolan (brother of Christopher) and his wife Lisa Joy takes place in a future (30 years from now, to be exact) where theme parks full of androids exist to fulfill humanity's most hedonistic pleasures and vices: sex, violence, you name it.

Westworld is just one of these parks, created by Delos Incorporated, set in a romanticized version of the Old West filled with cowboys, Indians, outlaws, and train robbers. Human-like androids, or "Hosts," occupy this space and can be abused by any human patron of the park, no questions asked. There's just one small problem: those robots gained sentience at the end of the first season and began killing their human masters with impunity. Season 2 picks up as the rebellion is in full swing, led by Dolores Abernathy (Evan Rachel Wood).

Continue below to discover the science behind the fiction...

**Spoiler Warning: There are spoilers for both Season 1 and Season 2 of Westworld on HBO below**

With the show now back on the air, we spoke with David Eagleman, a world-renowned neuroscientist who serves as a scientific consultant on the series, about the real science behind all the machine mayhem. An adjunct professor at Stanford and author of several books on the human mind, including Incognito: The Secret Lives of the Brain, Eagleman walked us through a few of the concepts that Westworld tackles on a weekly basis, as well as how accurate the showrunners and writers wanted the science to be.

Just how close are we to creating Westworld's level of Artificial Intelligence?

"We're nowhere even vaguely close," Eagleman said, leaving no room for doubts or debate. "AI gets a lot of attention for things like AlphaGo and beating the chess champion and for being able to categorize billions of pictures really rapidly. But essentially what AI is good at right now, is just categorization like this is a cat picture versus this is a dog picture and stuff like that. Although it's superhuman in categorization, it can't do things like navigate a complex room and pick up objects easily and do social manipulation with language."

What AI really lacks, he says, is the ability to take past learning experiences and apply them to new situations. In a way, the most advanced computer isn't even as smart as a toddler.

"I can show a three-year-old a papaya once and say this is a papaya and then she's got it, so there's still a massive gap between how human brains work and even what our best AI can do at the moment," he said.

Will robots reach a point where they consider us inferior and rise up against us?

There are two camps of intellectual thought on this, according to Eagleman. One group believes that if AI ever becomes close to or on par with human intelligence over the next few decades, it will be an extension of the technology we have now. The second group, to which Eagleman belongs, holds that for AI to become genuinely humanlike, we need to start thinking about consciousness and what it means to be human in a very different way.

"We need to discover entirely new things and ways of thinking about this to be able to create the kind of thing that has its own desires and motivations and cares about things like hosting a revolution," he said.

Eagleman added that the mind is inherently a collection of contradictory factors, which makes the matter of machine sentience even more complex. Would it make sense to build a robot with doubts, fears, regrets, and the rest? That would make it less efficient, but if you want to make AI like us, you have to imbue it with all the qualities that make us human.

Imagine someone places a plate of chocolate chip cookies in front of you. On the one hand, you want to eat them, but another part of your brain holds back because you know they're unhealthy. Then the first part of your brain strikes a bargain, promising that you'll go to the gym if you eat them. These internal conflicts are crucial to what we see as sentience and yet they seem like drawbacks were they to be programmed into a machine.
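Just as a thought experiment (our own hypothetical toy code, nothing from the show's production), that tug-of-war can be caricatured as two sub-agents voting on the same decision, with a "bargain" acting as a thumb on the scale:

```python
# Hypothetical toy model of "internal conflict" as two competing drives.
# The drives, weights, and bargain below are invented purely for illustration;
# they don't reflect how Westworld's hosts, or any real AI, are built.
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    urge: float  # positive pushes toward the action, negative holds back

def decide(drives, bargain_bonus=0.0):
    """Sum the competing urges; a 'bargain' (promising to hit the gym later)
    nudges the balance toward indulgence."""
    score = sum(d.urge for d in drives) + bargain_bonus
    return "eat the cookies" if score > 0 else "walk away"

appetite = Drive("appetite", urge=+0.6)        # wants the cookies now
health = Drive("long-term health", urge=-0.7)  # knows they're unhealthy

print(decide([appetite, health]))                     # -> walk away
print(decide([appetite, health], bargain_bonus=0.3))  # -> eat the cookies
```

The point of the toy isn't the numbers; it's that adding a second, opposing drive makes the system less efficient but more recognizably human, which is exactly the trade-off Eagleman describes.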

"When [I was] talking with Jonah and Lisa, it wasn't clear whether these AI hosts had the same sort of thing, but then in fact we realized that's exactly what's going on with these guys," he said. "For example, when Maeve gets in the train at the end to leave the park, she has freedom tugging on her in one direction, which is what she really wants. But she also has this memory of her daughter tugging on her in the other direction, and finally she gets off the train. But she's conflicted and it's very human to be conflicted. And it's not how we program AI currently. I mean, no one ever builds AI that says 'Ah, I'm really torn, should I do this or should I do that? That was kind of a cool, that the machines are built [with conflict].'"

What's so hard about replicating the human brain?

"Challenge one is that the brain is the most complicated thing that we've ever found on the planet," said the neuroscientist, "and number two, is that we do not, at this point, have an understanding of how it works. There are lots of smaller pieces and parts and sub-series that we're all working on, but if anyone tells you that they understand how the human brain works, they're lying."

To run a working simulation of the mind, you'd have to scan an actual brain at the microscopic level and then reassemble the entire thing. Sounds simple enough, right? Here's the catch, according to Eagleman: "It turns out that that takes a zettabyte of computational capacity, which is equivalent to the total capacity of the planet right now."
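For a rough sense of scale (our own back-of-envelope arithmetic, not a figure from Eagleman), a zettabyte is 10^21 bytes, which works out to roughly a billion one-terabyte drives:

```python
# Back-of-envelope arithmetic, illustrative only: how big is a zettabyte?
ZETTABYTE = 10**21            # bytes
ONE_TB_DRIVE = 10**12         # a consumer 1 TB drive, in bytes
print(ZETTABYTE // ONE_TB_DRIVE)  # 1,000,000,000 -> about a billion such drives
```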

Again, there are different camps of thought on the best way to emulate how a human brain thinks. If we can understand what makes it tick at the most fundamental level, a Westworld might be opening near you soon. Eagleman compared this to how mankind invented the airplane. If you look at a bird and understand the principles behind aerodynamics, you don't have to build feathery wings. Rather, you can build something that follows the same scientific principles as a bird's flight. A similar approach is being applied to advancing our current AI.

"The AI race, in some sense, is trying to figure out can we just circumvent all that wet biological gushy stuff and just figure out what the principles of intelligence are and put it together that way and that's had a lot of hope for the last 60 years, but so far, it hasn't worked ... There's still a massive gap between how human brains work and even what our best AI can do at the moment."

How accurate do the showrunners want the show's scientific concepts to be?

As it turns out, pretty damn accurate, given that Eagleman once had a six-hour debate with Nolan, Joy, and the writers' room about robots having free will.

Similar to what Spielberg did before shooting Minority Report back in the early 2000s, the showrunners wanted the technology on display to be viable, something we could actually achieve over the next three decades.

"One of the questions is what do we make the brains of the hosts look like? Because obviously their brains aren't made out of meaty stuff. And so, that's something for which one needs to make up a new futuristic technology that doesn't yet exist, which should be tied to what would be realistic ... The show takes place 30 years in the future and so, they're always looking for things that are sort of really futuristic tech that can serve a purpose."

Indeed, Eagleman invented something in his lab that is being used in Season 2. He couldn't expound on it, for fear of giving away spoilers, but he did promise that it has something to do with the military contractors brought in by Delos to crush the android insurrection.

"I think it would be ok for me to say, it's something worn by the private military contractors, a piece of technology that allows them to function better and when we were envisioning what military types would wear 30 years in the future, this is the answer."

What's one of the more far-fetched elements on the show?

Funnily enough, the thing Eagleman finds most preposterous about the show from a scientific standpoint has nothing to do with brains or sentience; it has to do with music.

"There's one tiny flaw which I don't think they mind me saying is the player piano," he said. "In the Old West, these were actually powered by somebody who would sit there and press on the pedals because, of course, there was no electricity. The goofy part is that the player piano plays by itself in the Old West."

Last season, the saloon's player piano performed ragtime renditions of more modern songs like "Black Hole Sun" (Soundgarden), "No Surprises" (Radiohead), and "Paint It Black" (The Rolling Stones), just to let us know that we were in the future.
