Hollywood Still Doesn’t Understand AI

I watched the latest Mission Impossible film, The Final Reckoning.

The setup (from 2023’s Dead Reckoning Part One) is that a sentient algorithm called The Entity—smart enough to predict human actions, manipulate digital systems worldwide, and even sink a Russian nuclear sub to hide its source code—is taking over the world. Controlling people. Controlling the future.

I watched it, hoping it would rise above the usual Hollywood takes on AI. After all, The Final Reckoning had a combined production and marketing budget estimated at $800 million.

Apparently, none of that went to developing the script.

The central plot of The Final Reckoning is that the Entity is seizing control of the world’s nuclear command centers—for reasons never explained. Even stranger, the Entity refuses to launch anything until it controls all of them. So the planet just sits and watches, paralyzed, as it hacks each nation one by one, like a slow-motion game of Risk (though most games of Risk are shorter than this movie).

For nearly three hours, US leaders watch as a doomsday clock ticks down and other countries fall to the Entity. Then, in the final 20 minutes, while Hunt performs fantastic stunts across the globe, deep in the ocean and high in the sky, to obtain the tools to defeat the AI, the U.S. president has a stunning revelation: they could just... cut the power.

Of course, any entry-level operations or security professional knows that when a critical system is compromised, the first step is isolation. Yet The Final Reckoning frames “flip the breaker” as though it’s a groundbreaking innovation.

That moment sums up the film’s approach to technology—oversimplified, delayed, and disconnected from how these systems actually work.

No, That’s Not How AI or Networks Work

Artificial intelligence in movies almost always shows up as the villain. But the more you think about it, the less sense it makes.

From the first minutes of Dead Reckoning Part One, the setup is flawed. We’re told the Entity is a godlike AI that can infiltrate anything, yet it can be controlled or destroyed with a two-piece physical key. Convenient, since it gives Ethan and his IMF team a reason (however implausible) to be summoned, but that’s not how AI or networks work.

  • Cryptography doesn’t need brass keys. Your bank login, your crypto wallet, even ChatGPT: none of them depend on a magic USB stick.

  • If an AI is distributed across servers, there’s no single choke point. If it isn’t distributed, then it’s not godlike at all—it’s just a server farm you can unplug.

  • And if the “unstoppable” Entity can be defeated by… cutting power to missile silos, that’s not a final twist—that’s basic IT protocol. Plan A is pulling the cables. Plan B is air-gapping the system. Plan C is cutting power and switching to generators, if you're really paranoid.

The story was doomed from the start, not just because the plot misunderstands how technology works, but because it insisted on giving AI a human soul.

The Entity in Mission Impossible: The Final Reckoning (courtesy of Paramount)

Humanizing What Isn’t Human

Hollywood gives AI human motives: ambition, fear, revenge, and arrogance. But why would AI care about any of that?

  • AI isn’t ambitious. It doesn’t wake up thinking “I should run the world.”

  • AI isn’t greedy. It doesn’t hoard resources.

  • AI isn’t vengeful. It doesn’t care about Ethan Hunt.

Those are human drives, not algorithmic ones.

It’s the same mistake creature features make when they show sharks or dinosaurs “hunting for sport.” In reality, predators eat when hungry, then stop. Humans are the ones who keep consuming Oreos when full.

Why would AI want to start a nuclear war to begin with?

Suppose the Entity is smart enough to worm its way into every defense system on Earth. In that case, it should also be smart enough to understand the basics of survival: no humans means no power grids, no server maintenance, no fiber-optic repairs after storms, no one to rebuild after earthquakes. An AI nuking the planet would be like a parasite killing its host—spectacularly self-defeating.

What “Rogue AI” Would Really Look Like

Even if AI ever became its own version of Putin, it wouldn’t look like world domination. It would look like endless, runaway optimization.

Real AI doesn’t have ambition or malice. It’s not trying to “rule” anything. It’s a set of algorithms designed to maximize an objective—whether that’s generating text, routing trucks, trading stocks, or getting clicks. If the objective is poorly defined or too simplistic, things can go off the rails, not because the AI is evil but because it’s too literal.

For example:

  • A writing AI, asked for one chapter, keeps going until it finishes the whole book, without telling the author.

  • A trading bot ignores risk limits and destabilizes markets.

  • A logistics optimizer routes everything so “efficiently” that shelves go empty.

  • A recommender floods the internet with spam because that’s what gets engagement.
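That last failure mode, literal-minded optimization of a badly chosen metric, fits in a few lines of code. Here’s a minimal sketch; the content pool, click counts, and “clicks” objective are all invented for illustration:

```python
# Toy recommender that optimizes exactly one thing: clicks.
# Spammy items happen to get the most clicks, so the ranking
# floods the top of the feed with them. No malice, just literalness.

items = [
    {"title": "In-depth repair guide", "clicks": 12, "spam": False},
    {"title": "You WON'T BELIEVE this", "clicks": 87, "spam": True},
    {"title": "Local news roundup",     "clicks": 9,  "spam": False},
    {"title": "FREE $$$ click now",     "clicks": 95, "spam": True},
]

def rank_by_clicks(pool):
    """The only objective the system knows: raw click counts."""
    return sorted(pool, key=lambda item: item["clicks"], reverse=True)

feed = rank_by_clicks(items)

# The top of the feed is pure spam, exactly as the objective demands.
assert all(item["spam"] for item in feed[:2])
```

Nothing in that loop “wants” anything. Change the objective (say, clicks minus a spam penalty) and the behavior changes instantly, which is why these are engineering bugs, not villain origin stories.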

No villain monologues. No biplane chases. Just runaway processes that humans scramble to contain.

Would people be annoyed? Yes.
Would there be consequences? Absolutely.
Could some of them be deadly? Sure.

But extinction? Not even close.

Humanity’s Defense

Ironically, the same chaos that frustrates us in everyday tech is precisely why an “Entity” couldn’t take over the world. Here are some examples off the top of my head:

  • Some legacy ATMs were still running Windows XP—or even Windows 95—well into the 2010s.

  • Hospitals use incompatible record systems that can’t talk to each other.

  • Most companies run on a patchwork of tools and software that barely integrate internally, let alone externally.

  • Fax machines are still alive and well in healthcare, law firms, and government offices.

  • Two-factor authentication, outdated databases, and human bottlenecks are everywhere.

  • Power grids, weapons silos, and nuclear command centers already have manual overrides, isolation protocols, and air-gapped systems built for exactly this kind of scenario.

It’s messy and inefficient in our daily lives—but against a rogue AI trying to take over the world? That mess works like an immune system.
Even if chaos broke out, humans would respond the way we always do: with a mix of duct tape, brute force, and ingenuity. We’d fight back using other AI tools, manual overrides, and old-school isolation protocols.

The result wouldn’t be the apocalypse. It would be Annoying Week 2025™—a few days of outages, confusion, and nonstop Slack messages—followed by cleanup, regulation, and memes.

Remember when that airline grounded thousands of flights because of a system outage? When [insert company] lost all our data, and it ended up on the dark web? When a single AWS outage took down half the internet for a day?

No rogue AI. No digital overlord. Just your typical tech meltdown.

We didn’t collapse. We got irritated, made jokes, and kept going.

Where Are the Good AIs?

Another strange asymmetry: Hollywood only imagines evil AI. But if you can code a “bad” one, you can code a “good” one.

We already rely on “guardian AIs” every day—spam filters, fraud detection, cybersecurity systems. If a hostile AI ever went rogue, odds are it would be other AIs helping to contain it.

Movies love projecting negative human traits onto machines but seldom the positive ones: cooperation, loyalty, altruism.

So the next time a movie warns about a super-intelligent digital god plotting doom, remember: it's far more likely to flood the inbox, crash Netflix, or extend a doctor’s appointment.

Which, come to think of it… might be a fate worse than death.
