Ex Machina is easily one of my favorite films of 2015. Written and directed by Alex Garland, the hard sci-fi flick addresses big ideas while remaining a rather intimate film. It is also an intense psycho-sexual thriller with a classic mad scientist element.
With impressive acting by Star Wars Episode VII costars Oscar Isaac and Domhnall Gleeson, as well as Alicia Vikander and Sonoya Mizuno, the film revolves around a tech genius who creates a female robot and invites one of his company’s employees to his isolated compound to test her sentience. A game of manipulation, seduction, and betrayal unfolds as the audience wonders who is outsmarting whom.
With the film now out on DVD/Blu-ray, I revisited Ex Machina and came away with lingering questions on what it says about the price of genius, and how humanity can best address issues of technological sentience. Alex Garland was kind enough to chat with me for this exclusive, but be forewarned, we do veer into SPOILER territory.
Also: After you read the article below, reach out to us on Twitter @Blastr with your favorite movie robot and #GarlandGiveaway for your chance to win a Garland-signed copy of the Ex Machina poster, or a copy of the film on Blu-ray.
Everyone wants to make sequels out of good movies, and some view your ending with Ava being free of the compound, and moving amongst humans as “open ended.” Is it tiring that there is a reluctance to accept this film as a stand-alone piece?
I think it reflects the state of the way the industry works at the moment. It is probably weirder for people my age – I’m in my mid-40s – because, typically, films did not have a sequel when I was a kid. Now, typically, they do. For people who are half my age, a sequel, quite strangely, is an expectation. The weird thing is to not do a sequel if someone is asking you to do it. It is almost like an incredulity that you wouldn’t be designing a sequel. Like, people almost think the ending of this film implies a sequel. But it really doesn’t. And from my point of view, I’ve no real interest in it.
The character of Nathan (played by Oscar Isaac) is likable, charismatic, narcissistic, sinister -- and he could have easily veered into a James Bond-esque villain. But is this type of personality necessary for the level of innovation and technological evolution he achieves?
Wow. Well, I don’t know that I would need to know a lot of people like that. I suppose people have to be driven. I am sure that’s true, and being driven is often at the expense of other people. But in the case of Nathan, you’re right about the James Bond thing. We were consciously trying to sidestep that. That’s something me and Oscar used to talk about. It is a lot of credit to his performance, because it takes it away from there. But also, in some respects, rather than anyone in particular, I was often thinking about Nathan representing not any lone figure at the tech companies, but the tech companies themselves. So, he had this particular overfamiliarity – dude, bro, hipster speak and that kind of thing – whilst also being malevolent and incredibly controlling. It was more that, what the industry felt like, rather than the actual people making strides within the industry.
In the moment of his death at the hands of Ava (Alicia Vikander), is Nathan more impressed with his creation’s strategic maneuvers, or more disappointed the world will never know he was the genius behind them?
I think he is impressed. I think what he’s doing, very reductively, is making these things, each one better than the last. He knows that, at some point, one of them will outsmart him. And it is in the nature of being outsmarted that you don’t know you’re being outsmarted as it’s happening. He is expecting that, whether it is this model or maybe the one after or the one after that, one of them will outsmart him. He is impressed with Ava, and that it turns out it was this one, and that it happened. On some level, there is a kind of mission accomplished thing. He always knew that, when he did get outsmarted, it probably wouldn’t work out too well for him. So, this was both expected and unexpected. On some weird level, there is a kind of satisfaction or vindication, or whatever. I have this funny idea...when I was working on the film, I was on Ava’s side. That’s where I would emotionally position myself on the film, and in the story. On some level, I think Nathan is on Ava’s side. Underneath all of it, that’s kind of where he is.
I am sort of on the side of the other robot, Kyoko (Sonoya Mizuno). She helps Ava escape and assists in Nathan’s end, but she is a tragic figure...
Do you think, if Nathan had shown more humanity to her and Ava, he (and humankind) would be in a better spot by the end of the film? Is there a message of, "be nice to our machines?"
I think there is a message that is: be nice to something that is sentient. If you got this thing to the point of sentience, then be nice to it! We base our rights decisions on sentience, which is why we have a legal system that will put one kind of punishment on a human that kills another human, and another kind of punishment on a human that cuts down a tree. People think the rules might not apply to sentient machines, and, on some level, I was thinking, of course they apply to them. A sentient machine you’d want to protect and celebrate in the same way you celebrate and protect any human or any sentience.
Some technological singularity theorists worry that a superintelligence would view us the way we view ants: basically a non-consideration, or a minor obstacle in the way of constructing a new building. But does Ava feel pity for humans, and pity for Caleb (Domhnall Gleeson), or are they nothing more than obstacles or ants in her way?
From my point of view, she is possibly capable of feeling pity or empathy or those kinds of things. I think, though, the chances are, when you get a thinking machine, it’s not going to think like us. It might have the capacity to feel pity for us. It may or may not. But it won’t think like us. We won’t think like it. There will be a degree to which we never fully understand each other in the same way dogs and their owners don’t completely understand each other, but roughly understand each other. A human doesn’t know what it’s like to think like a dog, and the dog doesn’t know what it’s like to think like a human. That will never change. I think that will probably be more profound, or equally profound, between machines and humans.
Are Isaac Asimov’s Three Laws of Robotics outdated, especially as a narrative device?
I think they’re just speculation. I don’t think they’re laws; they are statements made in the context of a narrative that called for them. But there is no court that upholds them. If you did create a sentient machine, I don’t think you could apply those laws, because they would get in the way of free will. If you had something that was sentient, you would have to give it free will, if it was like us. The complication is, in the act of parenthood...that thing you were talking about with the runaway superintelligence problem, the machines seeing us like ants. In a way, we’d have to present ourselves to the machines, and teach the machines not to see us like ants. Humans don’t go around...I was going to say don’t go around killing whales, but of course they do. So, I take it back. But a lot of humans at least feel very uncomfortable about it, let’s put it that way. The thing is, a lot of these things you’re talking about are a very different future. The singularity type stuff, the way it applies to A.I., it’s way off. It is not around the corner. One of the pleasures of working on this film was talking to people who are working on the edge of A.I. research technology. The impression you get is that it’s not around the corner; it’s a way off.
Remember to tweet at us @Blastr with your favorite movie robot and the hashtag #GarlandGiveaway for your chance to win a signed Ex Machina poster or a copy of the film on Blu-ray!