July 28th, 2025
The Terminator Delusion
For the most part, I've found AI (LLMs) to be a useful tool.
My personal view is that LLMs are best suited for grunt-work-leaning tasks. You certainly can get LLMs to spit out complex, working code, but it takes a fair amount of back-and-forth to get there.
But sometimes, LLMs go off into space. Past a certain context size, they "lose focus," leading to solutions that are completely foreign to what the LLM was just writing, or that contain subtle inconsistencies (e.g., not following my code formatting rules) which force you to hand-inspect every single line of code (I'd argue that you should be doing this anyway).
Does that sound like a sentient, intelligent life form?
No. It sounds like a very well-designed algorithm with access to obscene amounts of data (to say nothing of your own, user-provided context), doing its best to use math to predict tokens in a sequence that produces an approximately accurate result.
Notice how I worded that: approximately accurate.
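To make "using math to predict tokens" concrete, here's a deliberately toy sketch of next-token sampling in Python. Everything in it is invented for illustration: the prompt, the candidate tokens, and their scores are made up, and a real LLM computes its distribution with billions of learned parameters over a huge vocabulary. But the probabilistic core is the same:

```python
import math
import random

# Hypothetical raw scores ("logits") a model might assign to candidate
# next tokens after a prompt like "The robot will". These numbers are
# invented for illustration.
logits = {"help": 2.1, "learn": 1.3, "terminate": 0.2}

# Softmax turns raw scores into probabilities that sum to 1.
exp_scores = {tok: math.exp(score) for tok, score in logits.items()}
total = sum(exp_scores.values())
probs = {tok: s / total for tok, s in exp_scores.items()}

# The model then samples from that distribution. The output is the most
# *likely* continuation, not a verified *true* one -- hence "approximately
# accurate."
next_token = random.choices(list(probs), weights=list(probs.values()))[0]

print(probs)       # ~{'help': 0.63, 'learn': 0.28, 'terminate': 0.09}
print(next_token)  # usually 'help', occasionally 'terminate'
```

Run it a few times and you'll see the point: even with identical input, the output can vary, and nothing in the process ever checks the answer against reality.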
This is roughly the same take that I've seen from all of the programmers that I respect or look up to. They're not wooed by the "inevitability of AI". Instead, they look at it for what it is: a mostly useful tool that can help you be more productive (especially if you have experience in the field you're chatting with the LLM about).
"Large language models do not think, they simply calculate."
— Bob Martin
But despite hearing countless professionals with anywhere from several years to decades of experience echo my own thoughts, I still see another perspective on AI that constantly crops up.
For fun, we'll call it "The Terminator Delusion."
The Terminator Delusion is the view that current LLMs are already sentient (to some degree), and will continue to expand their abilities in a linear fashion, all the way up to being the "brain" for an Austrian chad robot that dominates your poodle Trixie into submission for failing to do its business outside.
Scary? You betcha. Close to reality? Not even a little bit.
And this is the frustrating part: having enough understanding of LLMs to know what they aren't and won't be, all the while watching CEOs and thought leaders (who are incentivized to see AI "take over") deploy veiled language suggesting to the less informed that "AI is going to take everything you've worked for and there's nothing you can do save for graciously accepting your Person Pod, enriched bug paste, and monthly 'sorry for ruining your life' check—but at least it's direct deposit!"
For f*ck's sake.
None of this is real. It's all marketing hype and fluff. And that's deeply annoying because the actual truth isn't negative: LLMs are a serious productivity enhancement, but they're far from perfect, let alone sentient.
That reality is only harmful if your own, current survival rests on convincing (scaring) others that they're going to be "left out" if they don't allow AI's tendrils to wrap around their neck and suffocate them like a drug-addled pimp who "wants answers."
I'd say these folks should be ashamed of themselves, but trying to bring ethics or morality into this debate (outside of telling their users what is and isn't "safe" for them to say or consume like some kind of Orwellian supervillain) is a farce.
But I do think there's hope.
LLMs aren't going away. They will get better to some degree. But the technical reality underlying all of the hype is that, as designed, they will not magically evolve into an error-free superintelligence that takes everyone's job overnight—perhaps ever.
Instead, I think where we're headed will be fairly analogous to any other major technological shift.
Yes, gradually, AI will get to a point where it can replace some jobs (especially those that don't depend on high-quality output), or reduce headcount while augmenting a smaller team of humans. And that shift, while I expect it to move faster than the ones driven by, say, television or the consumer internet, will (in my opinion) take a decade or longer to fully play out.
In other words: you've got time.
You should pay attention to where all of this AI stuff is headed so you're not blindsided by a significant development, but you absolutely should not live in a state of fear or anxiety over this happening so quickly (or so well) that you won't have time to respond.
Play with it. Come up with experiments relevant to your own work. Push its limits. Validate the claims being made about its abilities.
But for the love of all that's holy, don't get duped into thinking LLMs are more than meets the eye.
They're helpful. They're valuable. But any belief that they're going to achieve any level of functionality close to a T-800 (let alone a T-1000) is exactly what it sounds like: science fiction.