May 16th, 2025

Chauffeur Knowledge and The Impending AI Crack-Up

About two years ago, I started to tinker with AI.

It was impressive, to be sure, but at the time it wasn't something I could truly rely on—at least, not in my world (software development). The LLMs I tried were all too eager to spit out utter nonsense (what we now gently refer to as "hallucinations," like we're caring for a late-stage dementia patient—"oh, never mind, dear, that's just one of Bubbie's hallucinations").

Over the course of 2023, I played with the occasional model here and there to see if things had improved, but nothing really stood out as hitting the productivity sweet spot I was after.

Fast forward to 2024 and that all changed. I decided to work on Parrot as a more productive way to start playing with LLM APIs and thinking about how I could integrate AI into products.

That's when the "uh oh" moment happened.

In testing out the master prompt I'd use to generate code snippets, I started to notice: "hey, this thing is getting pretty good at writing code." The more I worked with it, the more I was impressed. Was it perfect? Not quite.

Hallucinations were still rampant (and even today, still are), but mostly with more obscure languages and technologies. For the more common stuff? While it would write code that—without any additional prompting—resembled the work of a junior developer fresh off a night of drinking, what it wrote...actually worked.

This was the first time I was genuinely spooked.

Though it wasn't always easy to pull off, I was able to get some reasonably complex things built using nothing but AI (e.g., the color picker component in Mod was 95% AI-built). Sure, it took a fair amount of prompting back-and-forth and spinning up new chats, but that didn't matter; a non-human entity was able to write code that worked (and rather quickly).

And then, I started to see more and more people talking about using AI to write code. At first, experienced programmers who had a reaction similar to my own, and then, totally inexperienced developers. The former group was rightly skeptical, but the latter group was all in.

That's when it hit me: this is going to change everything; not in the utopian "everything is magical" sense, but in the "oh, God, what have we done" sense.

What stood out wasn't that people were using AI to write some code; they were using it to write all of their code. And it worked. Mostly.

This was an immediate gut-punch, but not for the "I'll never financially recover from this" reason you may expect.

No, it was because in that moment I realized we'd just started down a slippery slope toward a place where the definition of being a programmer wasn't understanding code, systems design, and a whole slew of other disciplines, but instead being someone who could produce a working result that was good enough to fool the end user.

Whether or not the code was performant, secure, or stable was irrelevant—the bar was simply "does it work?"

Now, on the surface, this may seem like some frothed-up-nerd drama—and today, you'd be right. But I'm not talking about today. I'm talking about the future.

Rather quickly, this image popped into my mind:

The Competency Curve

This chart shows two lines: one, the competency of the average programmer (or, if you want to extend this to a broader audience, "knowledge worker") and two, the adoption of AI.

As the adoption of AI increases (meaning, more and more tasks are delegated to AI), competency decreases. Eventually, we hit a point where those lines meet, cross, and competency takes a nosedive.

Okay, and? What's the point?

The point is that once we cross that line, I'd argue that there's no going back. At that point, the rule will be "just use AI to write the code." And by "write the code" I mean "if the AI generated code works, implicitly trust it as the correct answer."

Is this a problem today? Only on a small, relatively meaningless scale. But as we move further into the future and AI becomes more ubiquitous, I'm anticipating a reality where very few people building software actually understand what's happening under the hood.

That has a few implications:

  1. First, it means that the software being built can never be implicitly trusted as having been properly tested and audited for security and performance issues. Not because it couldn't be, but because the people "writing" the software didn't know to do it or didn't care.
  2. Second, it means that the ability of the average programmer to creatively solve problems and fix bugs created by the AI declines sharply.
  3. Third, and I think scariest: it means that programming (and the craft of software) will cease to evolve. We'll stop asking "is there a better way to do this" and transition to "eh, it works." Instead of software getting better over time, at best, it will stagnate indefinitely.

While the optimist's take may look like a Jacques Fresco rendering, I'd lean more toward the third panel in The Garden of Earthly Delights.

The why requires a story involving physicist Max Planck and his chauffeur. As explained by the late Charlie Munger:

I frequently tell the apocryphal story about how Max Planck, after he won the Nobel Prize, went around Germany giving the same standard lecture on the new quantum mechanics.

Over time, his chauffeur memorized the lecture and said, “Would you mind, Professor Planck, because it’s so boring to stay in our routine. [What if] I gave the lecture in Munich and you just sat in front wearing my chauffeur’s hat?” Planck said, “Why not?” And the chauffeur got up and gave this long lecture on quantum mechanics. After which a physics professor stood up and asked a perfectly ghastly question. The speaker said, “Well I’m surprised that in an advanced city like Munich I get such an elementary question. I’m going to ask my chauffeur to reply.”

Here, we have someone with the ability to regurgitate pre-packaged knowledge in a convincing way, but the moment a question fell outside that canned knowledge, it had to be deferred to an—in this case, covert—expert.

And while this experiment caused no more trouble than, at worst, a faux pas, I see "vibe coders" emerging as the chauffeur in this story and AI emerging as the expert (Planck). They know just enough to be dangerous, but nothing approaching true understanding.

Today, the danger is limited. It's an indie-hacker being surprised to find out their vibe-coded app had costly security flaws. In the future, it's a Fortune 500 software company having their entire database held ransom because their "AI-first" team didn't know how to properly secure the database and some black hat made haste.

That may seem hyperbolic, but in a world where new programmers default to using AI instead of gaining valuable hands-on experience, what other outcome is possible? Sure, you could argue that the AI will "get better and better," or "true programmers will never stop doing it by hand," and I'd say you're right—to a point.

Keep that graph above in mind. Because LLMs are trained on the outputs of humans, if the humans are mostly using AI to create new outputs, eventually, the LLM's knowledge will be frozen in time. There won't be any new source material to train the LLM to get better because it won't exist. We'll be stuck with whatever we had in the past, indefinitely. Even scarier, the quality of an LLM's output may decline dramatically as it's recursively fed its own output as training data (e.g., a blog post about writing SQL queries by a vibe coder who was unaware of injection attacks).
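
To make that parenthetical concrete, here's a minimal, hypothetical sketch (in Python, with an invented users table) of the kind of flaw a vibe coder can ship without ever noticing: the first query splices user input straight into the SQL string, while the second passes it as a parameter so the database treats it as data rather than code.

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # what an attacker might type into a login form

# Vulnerable: the input is concatenated into the SQL string, so the stray
# quote rewrites the query's logic and every row comes back.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # returns every row in the table

# Safer: a parameterized query binds the input as a value, not as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```

Both versions "work" in a quick demo, which is exactly the point: the bar of "does it work?" tells you nothing about what happens when a hostile user shows up.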

And this won't just apply to programming, it will apply to all knowledge work.

To be clear, I find AI an essential tool for augmenting my day-to-day work. But at the same time, I can't ignore the frightening potential for AI to become our intellectual Tower of Babel. I think that's increasingly likely as the CEOs of AI companies prognosticate about "all code being written by AI in 12 months".

The "crack-up" I'm anticipating isn't a dramatic collapse but rather a gradual realization that we've painted ourselves into an intellectual corner. As AI usage becomes more prevalent, we might find ourselves in a situation where the majority of our knowledge is limited exclusively to what we discovered in the past. No new lessons learned. No new discoveries. Just stagnation.

It will be all fun and games until it isn't. And when that time comes, I hope that we haven't sacrificed true learning, knowledge, and expertise for the AI equivalent of "but it's got electrolytes."

I sincerely hope I'm wrong.