Tech Interviews Aren’t “Broken” Per Se

Obsolete != Broken

I bought a new phone last weekend. So what? Well, for me this is pretty monumental, because I typically use things until they’re broken. The phone I replaced — a OnePlus 6 — was not in fact broken. It still works fine. But I finally had to concede that it was showing its age and no longer meeting modern needs as well as it should.

Tech Interviews Aren’t Relevant

A lot of tech influencers have built nice and presumably lucrative online content careers out of teaching people how to hack their way into Big Tech, and the fact that that’s even “a thing” should tell you that something is generally wrong in this space. And if I had a dime for every time I heard or read one of these folks say that tech interviews are “broken”, I would have a lot of dimes. And true enough, who doesn’t hate LeetCode-style coding (and design and architecture) interviews that test your ability to do things that will likely never come up in the job you’re interviewing for?

But if tech interviews don’t check for job-relevant skills — the most common complaint — does that necessarily mean that they are broken? I would suggest the answer is “no”, and to understand why, we need a short history lesson.

But They Aren’t Designed to Be

The modern tech interview has its origins at 1980’s-1990’s Microsoft. By the admission of anybody who remembers those days, Microsoft interviews were never intended to test for “job-relevant” skills. This might have been for no other reason than that, at the time, almost nobody had any job-relevant skills. In a lot of universities, a “Computer Science” major as we know it was a relatively new thing, and maybe not a very high-quality education at that. At my university in the early ‘90’s, they were still teaching “advanced” classes on VAX, long after it was obvious that other computing form factors were the future. (That’s why I majored in Math: it got me easy access to their NeXT workstations, which were far superior to anything the so-called “Computer Science” department was running.)

Therefore it made sense that Microsoft was a lot less insistent on hiring CS majors at that time. I recall Electrical Engineering and Physics degrees being especially popular targets for recruiting. And that’s just university candidates. For industry candidates, it was more likely they had a background programming on mainframes than on PCs. The really well-qualified and lucky ones might have worked on UNIX, which has at least some commonality with PC programming, but it’s still worth appreciating how big a leap that was back then.

Puzzles => “Creativity”?

But that presents a dilemma: if you’re Microsoft (or Lotus or a handful of other companies in those days), how do you give a skills-based interview to somebody who has probably never programmed in the large in a PC environment? The answer is, you don’t. What they came up with instead was a whiteboard-based puzzle interview system that (allegedly) evaluated “potential” and “creativity”. It’s not my purpose in this post to venture into the fraught territory of discussing whether this style of interviewing actually accomplished that, or whether it was just a tool for applying implicit biases. But in the zeitgeist of the era, the public largely bought into that stuff. You can see this in the Amazon description blurb for the contemporary classic book “How Would You Move Mt. Fuji?”, which is evidently still in print and therefore well worth your time to read:

“From Wall Street to Silicon Valley, employers are using tough and tricky questions to gauge job candidates' intelligence, imagination, and problem-solving ability”

The structure of the whiteboard interview as applied at Microsoft — three one-hour sessions followed by the “as-appropriate” manager interview — would look familiar to tech workers of today, but the questions were mostly brainteasers — for example, the fastest way to get a rock band consisting of Bono, Adam, Larry, and The Edge across a bridge at night with only one flashlight. (I prefer the form of this question that uses Ace, Paul, Peter, and Gene.) There may have been as few as a single LeetCode-y programming question.
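For the curious, that bridge question is a genuine (if tiny) search problem: at most two people cross at a time, the flashlight must accompany every crossing, and a pair moves at the slower member’s pace. Here’s a minimal brute-force sketch in Python, assuming the classic crossing times of 1, 2, 5, and 10 minutes; the assignment of times to band members is my own invention, not part of the original question:

```python
from itertools import combinations

# Assumed crossing times in minutes; the classic version of this
# puzzle uses 1, 2, 5, and 10. Which band member gets which time
# is a made-up assignment for illustration.
TIMES = {"Bono": 1, "Edge": 2, "Adam": 5, "Larry": 10}

def best_crossing(left, right, flashlight_on_left):
    """Minimum total time to move everyone in `left` across the bridge.

    An optimal solution always sends two people forward and one person
    back with the flashlight, so we only search moves of that shape.
    """
    if not left:
        return 0
    best = float("inf")
    if flashlight_on_left:
        # Two people cross together at the slower walker's pace.
        for pair in combinations(left, 2):
            cost = max(TIMES[p] for p in pair)
            remaining = left - set(pair)
            best = min(best, cost + best_crossing(remaining, right | set(pair), False))
    else:
        # One person walks the flashlight back.
        for p in right:
            best = min(best, TIMES[p] + best_crossing(left | {p}, right - {p}, True))
    return best

print(best_crossing(set(TIMES), set(), True))  # prints 17
```

With those times the answer is 17 minutes, not the 19 you get from the obvious strategy of having the fastest walker escort everyone across one by one; that gap is exactly the “aha” the interviewer was fishing for.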

And On It Goes

This was, of course, when Microsoft was the hot tech company (the first time), and passing one of these interviews generated a certain amount of geek cred in some circles. And the process was largely considered successful, as evidenced by Microsoft’s stock price in that era. Therefore when Google came along a few years later, they saw fit to mostly copy this interview process, adding a few new wrinkles like questionably relevant distributed systems questions. After all, what would early Google have been without geek cred? Just a bunch of posers in propeller hats, I suppose. In the 2000’s and into the 2010’s, this style was perpetuated throughout the entire industry for a couple of reasons: first, what company doesn’t want to be just like ‘90’s Microsoft or ‘00’s Google? And second, past a certain point nobody in tech knew of any other way to conduct an interview — a situation that largely persists to this day.

Doubling Down on Coding Puzzles

Somewhere along the way, though, as an industry we collectively realized that asking job candidates the most efficient way to weigh eleven balls with a beam balance might not necessarily be the best way to evaluate them, especially as the industry matured and folks came in with more relevant skills and experience. So we ditched the brainteaser questions. But (and here’s the key point):

Instead of rethinking the form and purpose of the tech interview, we backfilled puzzle questions with even more irrelevant coding questions.

So here we sit today. When an engineer laments that tech interviews don’t include questions that assess relevant skills, they need to realize that modern tech interviews were designed to assess more amorphous and controversial qualities and (yes) haven’t kept up with the times. Tech interviews aren’t broken; they are merely obsolete, 30 years or so past their prime. I would suggest that most interviewers don’t even realize this, and when someone does poorly on their LeetCode interview, they incorrectly assume that person has poor engineering skills, even if their resume and references provide substantial evidence to the contrary. This leads folks to assume they have to “hack” their way into a job by spending all their free time practicing LeetCode. The entire situation is beyond sad; it’s almost certainly discriminatory, and it’s bad for the long-term health of our industry.

Now What?

Fixing this problem is hard, so I have a great deal of appreciation for the tech leaders and companies who are trying new things, like (hopefully focused and time-boxed) take-home exams and giving more weight to the body of work on a resume or on GitHub. What’s your company doing for tech interviews these days? Reach out if you want to discuss ideas.
