The Supposed Weakness of ChatGPT is a Strength for Programmers

This recent quote from the great Ted Chiang got me thinking:

“A lot of times, the world calls upon us to generate a lot of bullshit text, and if you had a tool that would handle that, that’d be great,” he said. “Or, I mean, it’s not great. The problem is that the world insists that we generate all this sort of bullshit text. So having a tool that does that for you … that is arguably of some utility.”

Why do a lot of AI skeptics criticize ChatGPT and other LLMs for being “bullsh_t text generators” while at the same time a lot of software developers think they’re the greatest thing since the programming editor?

In fact, everybody’s right!

There used to be (probably still is) an oft-cited meme at Google which went something like, “Google hires the smartest people in the world just to move protobufs around.” In other words, you may have performed groundbreaking research on your way to an advanced degree in Computer Science with dreams of changing the world, but instead of solving the hardest problems, what you actually end up doing all day goes something like:

  • Read some data from a datastore.

  • Transform it.

  • Ship it out over the network.

  • Receive said data.

  • Transform it again.

  • And write it to a different datastore.
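The glue-code workflow those bullets describe can be sketched in a few lines. This is a hypothetical illustration, not Google code: every name here (`transform`, `move_data`) is made up, and JSON stands in for protobufs.

```python
import json

def transform(record):
    # Reshape one record: rename a field and normalize casing.
    return {"user_id": record["id"], "name": record["name"].title()}

def move_data(source_rows):
    # 1. "Read" from the source datastore (here, just a list of dicts).
    # 2. Transform each record.
    outbound = [transform(r) for r in source_rows]
    # 3. Serialize and "ship" it over the network (JSON standing in for protobufs).
    payload = json.dumps(outbound)
    # 4. "Receive" said data on the other side and transform it again.
    received = json.loads(payload)
    inbound = [{**r, "source": "pipeline"} for r in received]
    # 5. "Write" to a different datastore (here, a dict keyed by user_id).
    return {r["user_id"]: r for r in inbound}

sink = move_data([{"id": 7, "name": "ada lovelace"}])
print(sink)  # {7: {'user_id': 7, 'name': 'Ada Lovelace', 'source': 'pipeline'}}
```

Nothing here is hard; it is exactly the kind of mechanical shuffling the meme is poking fun at.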

This is, of course, an oversimplification, but like all good oversimplifications, there’s an element of truth to it. In fact, Google developers and their counterparts elsewhere in the tech industry, well, write a lot of “bullsh_t code”.

Maybe if LLMs are “bullsh_t text generators”, that’s exactly why programmers find them so darn useful!

After all, at some level what is bullsh_t code if not bullsh_t text?

Let’s say 80% of the code that a software developer writes at Google is this kind of bullsh_t code and thus subject to being written faster and perhaps better by AI. (I don’t have a view on the actual productivity boost from current-generation AI for software developers; the numbers I’ve seen cited are all over the map. The 80% is just a stake in the ground for this argument.)

Further, say that validating and integrating that AI-generated code into whatever massive system they happen to be working on takes the developer 20% of the time it would have taken to write it by hand.

By my back-of-the-envelope math, that means an entire year at Google could be boiled down into a bit over four months through the use of LLMs. That’s a lot of extra time to enjoy Sushi Day, or at least it could have been back when they still had Sushi Days.
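Plugging in those hypothetical numbers (80% bullsh_t code, 20% validation overhead, both assumed above rather than measured anywhere):

```python
# Back-of-the-envelope math on the post's made-up figures.
bullshit_fraction = 0.80  # assumed share of work that is "bullsh_t code"
ai_overhead = 0.20        # assumed validation/integration time vs. writing by hand

# Time remaining = untouched hard work + AI-assisted work at reduced cost.
remaining = (1 - bullshit_fraction) + bullshit_fraction * ai_overhead
print(f"{remaining:.0%} of the original time, or {remaining * 12:.1f} months")
# → 36% of the original time, or 4.3 months
```

The exact numbers don’t matter; the point is that even with generous validation overhead, the year compresses dramatically.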

Naturally, no sane business would let a software developer use all that time not spent writing bullsh_t code to more leisurely eat sushi. As one class of AI doomers likes to point out, they might be more likely to lay off half their software developers instead.

Maybe you can even extrapolate that from software developers to occupations in the broader economy. After all, there was a popular blog post turned best-seller predating ChatGPT that pointed out just how much of modern knowledge work is, in fact, bullsh_t.

Only the most short-sighted tech companies will take that approach. More forward-thinking executives (provided their company still has a growth story at all) will let software developers turn that AI productivity boost into room to operate with more slack.

It’s hard to do great creative work when bogged down with bullsh_t code. By taking much of that work off software developers’ hands, AI will permit them to more easily enter the headspace where the best ideas and truly inspired code come from.

So let’s not worry about ChatGPT taking software developers’ jobs, but rather celebrate how it will make their jobs so much better!
