Is 'Superintelligence' just another HBO Max rom-com or a dark harbinger of evil AI?
The notion of an advanced artificial intelligence threatening humanity is about as old as the idea of artificial intelligence itself, and there may be a good reason for that. We have a long history of developing new technologies and almost immediately figuring out how to turn them into threats.
The malicious AI trope has generated at least two blockbuster series (Terminator and The Matrix) as well as dozens of other films and TV shows, and it isn’t out of gas yet. The most recent example is HBO Max’s Superintelligence, which uses the setup of a rogue intelligence as the frame for a romantic comedy.
Up to this point, these sorts of stories have existed solely in the realm of fiction. Despite massive improvements in computing power over the past several decades, we aren’t yet able to simulate a mind. Some futurists, though, think that might change soon. And if it does...
THE INTERNET OF THINGS
Just a few decades ago, there might have been one computer in a household, shared by an entire family. Cell phones hadn’t yet become commonplace. The number of connected devices was small but growing.
Today, the number of devices connected to the internet vastly outnumbers not just internet users, but total people on Earth. It’s difficult to know just how many connected devices there are in the world, but current estimates range in the tens of billions. The average household in the United States has a computer, probably more than one. Then there are smartphones, smartwatches, smart lightbulbs, doorbells, thermostats, security cameras, home assistants, refrigerators you can tweet from... the list goes on.
All of those devices bring with them a level of convenience, but they also bring a level of fear. Already there are reports of people “hacking” into connected devices to harass, frighten, and otherwise terrorize folks. In most cases, though, the perpetrators exploited already-compromised accounts or networks, with little if any actual hacking involved, and you can protect yourself from these sorts of threats by enabling two-factor authentication.
These instances only fuel the notion that connecting oneself to the internet of things opens up a new and largely uncharted avenue of attack. The question then becomes, what would happen if a globally connected artificial intelligence did arise, with sudden access to everything connected to the internet?
The answer: Probably nothing too malicious. At least, not today. Common encryption methods are more or less impossible for modern computers to crack by brute force. Which is to say, an artificial intelligence built on current computing power wouldn’t be able to break into your connected devices so long as you took reasonable measures to prevent it. Even quantum computers don’t change that picture yet. They could, in principle, break some widely used encryption schemes, but today’s quantum hardware is far too small and error-prone to do so, and accessing the world of connected devices remains beyond its capability.
Take reasonable steps to secure your devices and you’ll be fine. At least until there’s a significant shift in computing power.
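To see why brute force is off the table, a bit of back-of-the-envelope arithmetic helps. The sketch below assumes a hypothetical attacker testing one trillion keys per second against a standard 128-bit key; the numbers are illustrative, not tied to any real attack.

```python
# Rough estimate: how long would it take to try every possible
# 128-bit encryption key at one trillion guesses per second?
KEYSPACE = 2 ** 128            # number of possible 128-bit keys
RATE = 10 ** 12                # keys tested per second (a generous assumption)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = KEYSPACE / (RATE * SECONDS_PER_YEAR)
print(f"{years:.1e} years to exhaust the keyspace")  # on the order of 10^19 years
```

That is billions of times the current age of the universe, which is why attackers go after weak passwords and unpatched devices instead of the encryption itself.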
THE SMARTEST COMPUTERS
When discussing artificial intelligence, we need to make some important distinctions, mostly because artificial intelligence is everywhere. If you have a modern smartphone, it’s likely you have some form of AI in your hand (or pocket, or on your nightstand). Most phones come equipped with virtual assistants that are capable of recognizing your voice and providing answers to basic commands.
Likewise, you may have played with any number of chatbots, which analyze the inputs you feed them and give you an approximation of natural communication. We’ve reached a point where artificial intelligence is both mundane and, somehow, less than we anticipated.
The reason is a conflation of terms. The artificial intelligence we all previously imagined, the type capable of either revolutionizing life on our planet or wiping it out, is different from the type we’ve integrated into our lives.
The AI you use every day is narrow, programmed to accomplish specific tasks, and it’s pretty good at them. But the type you want (or fear) is Artificial General Intelligence (AGI), something with a more well-rounded understanding of the world and the ability to reason its way through problems.
Today’s artificial intelligence relies on machine learning, which frames a specific problem and lets the software teach itself how best to solve it. This is great if you want an AI to do one or a few things, but not so great if you want it to have something akin to human consciousness.
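The "teach itself from examples" idea can be shown in a few lines. Below is a minimal, illustrative sketch (not any particular library's API): a tiny perceptron that learns one narrow task, the logical AND of two inputs, by nudging its weights whenever it gets an example wrong. It ends up good at exactly that task and nothing else.

```python
# Training data: ((input pair), correct answer) for logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, adjusted during training
b = 0.0          # bias term

def predict(x):
    # Fire (output 1) only if the weighted sum clears the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training loop: for each mistake, nudge weights toward the right answer.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] — it has learned AND
```

This is the whole story of narrow AI in miniature: the problem is fixed in advance, the software only adjusts numbers to get better at that problem, and there is no mechanism by which it could decide to do anything else.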
There’s no clear indicator of when or if AGI will ever be accomplished, though futurists have some ideas. Today, however, the machine intelligences we interact with are discrete, defined by specific problems, and incapable of enacting any particular agenda outside of what we’ve set out for them. And there’s little reason to believe that will change any time soon.
Let’s suppose though, that an AGI system were possible. What might we expect from such an intelligence, and what can we do to prevent it from annihilating us?
Despite fears from people like Elon Musk that the emergence of an advanced AGI might spell imminent doom for all of humanity, it’s just not likely. It is reasonable to assume that computing power will continue to advance the way it has for the past several decades, and that 50 or 100 years from now we will have computers and machines that dwarf what exists today. But those machines will still have been developed by humans.
We are at the steering wheel, making all the decisions, and a machine or program is only capable of what we design it to do. There may well be some emergent properties. Software may, through machine learning, develop modes of activity that we didn’t design or predict. We may even end up in a scenario where our technology outpaces our direct ability to control it.
There will be stumbling blocks, and there will be threats in need of solutions. But the notion of an intelligence capable of running away beyond our control just isn’t feasible. Already, we know our society, our power grids, require constant maintenance. Things break down, they require labor to maintain, and any machine intelligence would know this. It would know its existence was entirely dependent on our keeping the power on.
It’s even more likely that, as technology continues to advance, we will likewise continue to integrate ourselves with it. The question of far-future machine intelligence might be less about "us versus them" than it is about "us with them."
In any event, there’s no reason to concern yourself with immediate or near-term annihilation at the hands of AI. You can keep using Siri in peace.