The programming interview is completely broken in 2025
A few weeks ago, I got invited to a screening interview that was completely automated. No human interaction. Just a timer and a problem. Fine—it’s screening, I get it. This is common now.
I open it up and the problem is simple: merge and sort two arrays.
Okay, that’s easy. Two lines of code. I was using PHP, but it’d be the same in any language.
Then I kept reading:
“You can’t use any built-in language methods.”
The timer was already running. I had about 22 minutes left.
Ugh.
I’ve been writing production code for 15 years. I’ve built companies. I’ve scaled systems with real users. But I couldn’t remember merge sort. I hadn’t written merge sort by hand since Obama’s first term. Why would I? If you’re writing stuff like that by hand in 2025 for an actual job, I’d like to know where you work.
I remembered that the heart of merge sort is a two-pointer merge. I knew the trick.
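For anyone else who hasn’t hand-rolled one since then either, this is roughly the shape I was reaching for, reconstructed after the fact with my own function names. I’m assuming count() and intdiv() don’t count as banned built-ins; if they do, the grader and I have bigger disagreements.

<?php
// Reconstructed after the fact: a hand-rolled two-pointer merge plus a
// recursive merge sort on top of it. Names are mine, not the interview's.

function mergeSorted(array $left, array $right): array
{
    $merged = [];
    $i = 0;
    $j = 0;

    // Two pointers: always take the smaller of the two heads.
    while ($i < count($left) && $j < count($right)) {
        if ($left[$i] <= $right[$j]) {
            $merged[] = $left[$i++];
        } else {
            $merged[] = $right[$j++];
        }
    }

    // Drain whichever side still has elements left.
    while ($i < count($left)) {
        $merged[] = $left[$i++];
    }
    while ($j < count($right)) {
        $merged[] = $right[$j++];
    }

    return $merged;
}

function mergeSort(array $items): array
{
    $n = count($items);
    if ($n <= 1) {
        return $items;
    }

    // Split by hand instead of array_slice(), since built-ins are off the table.
    $mid = intdiv($n, 2);
    $left = [];
    $right = [];
    for ($k = 0; $k < $n; $k++) {
        if ($k < $mid) {
            $left[] = $items[$k];
        } else {
            $right[] = $items[$k];
        }
    }

    return mergeSorted(mergeSort($left), mergeSort($right));
}

// Sample inputs standing in for whatever the test harness passed in.
$arrayOne = [5, 1, 9];
$arrayTwo = [4, 2, 8];

// "Merge and sort two arrays": concatenate by hand, then merge sort the result.
$combined = [];
foreach ($arrayOne as $value) {
    $combined[] = $value;
}
foreach ($arrayTwo as $value) {
    $combined[] = $value;
}

$finalArray = mergeSort($combined); // [1, 2, 4, 5, 8, 9]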
I wrote a first pass. Five of the tests passed, one didn’t.
Shit.
What did I miss?
Five minutes left.
Panic setting in.
A little anger bubbling up—at myself or at the absurdity of being tested on this, I’m not sure.
The clock seemed to tick faster.
Finally, I just commented out all my code and wrote:
$finalArray = array_merge($arrayOne, $arrayTwo);
sort($finalArray, SORT_NUMERIC);
return $finalArray;
All the tests passed. Submit.
I didn’t care.
I wouldn’t want to work for a company that tests like that anyway.
This was the moment I realized:
Programming interviews are completely broken in 2025.
The old paradigm is dead
And the break didn’t happen overnight. It’s been building for years.
But it hit an irreversible point the first time I watched AI write 200 lines of boilerplate faster than I could take a sip of coffee.
My immediate thought was the same one a lot of engineers had:
“Oh. The entire interview process just became pointless.”
Because what are we actually grading anymore?
Your ability to remember syntax nobody should remember?
Your ability to produce code slower and less reliably than an LLM?
If AI is doing 80% of the typing, then the actual job is the remaining 20%:
knowing what to build, how to structure it, and where the landmines are.
All the things AI can’t do for you.
And yet… interviews haven’t changed at all.
We’re still treating coding like a memory test.
Everyone says interviews are broken—but no one offers solutions
It feels like every week I see another post about how the interview process is broken.
And they’re not wrong.
But almost no one is proposing what to do instead.
People say things like,
“Give me 10 minutes of pair programming and I’ll know.”
Okay… then what?
How do you make that fair?
How do you standardize it?
How do you evaluate reasoning without evaluating memorization?
How do you scale that for every candidate?
Everyone identifies the problem.
Almost no one offers a solution.
So here’s mine.
The real job isn’t about memorizing—it’s about understanding
Real engineers—the ones who’ve actually built systems that have been yelled at by real users—aren’t walking encyclopedias. They’re problem framers. Decision-makers. Simplifiers. Refactorers. People who understand the system the code is actually being written for.
And by “system,” I don’t mean architecture diagrams—I mean the real system: the organizational mess behind every feature.
The middle manager who insists on a requirement that makes no sense.
The PM who drops a half-formed idea on you because their boss wants it.
The conflicting priorities.
The unspoken constraints.
The decisions made two levels above you in a meeting you weren’t in.
Senior engineers know how to navigate that system.
That’s a very different skill than knowing a design pattern by name.
So how do you test engineers in 2025?
I’ve wrestled with this question for years. Every company. Every candidate. Every interview.
It always feels like we’re evaluating the least important part of the job.
Then, some years back, I came across a take-home assignment that was almost insultingly simple:
“Build a CRUD API for blog posts with tags and soft deletes.”
That was it. No tricks. No cleverness. No algorithms.
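To give a sense of how small it is: the whole data model fits in three tables. Roughly this, sketched with SQLite through PDO purely so it runs anywhere; the table and column names are my choices, not the assignment’s.

<?php
// A rough sketch of the data model the assignment forces you to think about.
// SQLite via PDO just so it's self-contained; names are my own choices.

$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Soft deletes: a nullable deleted_at timestamp instead of an actual DELETE.
$db->exec('
    CREATE TABLE posts (
        id         INTEGER PRIMARY KEY,
        title      TEXT NOT NULL,
        body       TEXT NOT NULL,
        deleted_at TEXT NULL
    )
');

// Tags normalized into their own table...
$db->exec('
    CREATE TABLE tags (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE
    )
');

// ...joined through a pivot table, so renaming a tag touches exactly one row.
$db->exec('
    CREATE TABLE post_tag (
        post_id INTEGER NOT NULL REFERENCES posts(id),
        tag_id  INTEGER NOT NULL REFERENCES tags(id),
        PRIMARY KEY (post_id, tag_id)
    )
');

// The index every listing query will lean on: "live posts only".
$db->exec('CREATE INDEX idx_posts_deleted_at ON posts(deleted_at)');

The deleted_at column and the post_tag pivot are where all the interesting conversations live.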
Honestly, I thought it was too basic—until I started reviewing submissions.
And that’s when it hit me:
Simple problems expose everything.
You can instantly tell who has been in the real world.
Who’s dealt with soft deletes before.
Who has been burned by normalization decisions.
Who instinctively thinks about edge cases because they’ve lived through the pain of ignoring them.
Who has restored an item on a Friday at 4:59 PM and knows what can go wrong.
The simplest problem reveals the deepest experience.
And that’s when I started questioning why we ever stopped doing this.
Ask better questions
When someone turns in that tiny blog API, you don’t grade the code.
You grade the thinking.
Ask:
- How did you handle soft deletes with tags?
- If you restore a post, what happens to the tags?
- How did you normalize them?
- Did you add indexes? Why those indexes?
- What did you intentionally leave out?
These questions tell you infinitely more than watching someone reverse a linked list on the spot.
Because writing the code isn’t even the point anymore.
AI can write boilerplate better than anyone.
We should be evaluating judgment, not trivia.
Experience, not recall.
Taste, not mechanical correctness.
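Take the restore question from that list. The specific answer matters far less than whether the candidate noticed there was a decision to make at all. Here’s one defensible choice, continuing the sketch schema from earlier; restorePost is a hypothetical helper, not something the assignment asked for.

<?php
// Hypothetical restore handler, continuing the sketch schema above.
// The interesting part isn't the code; it's the decision it encodes:
// the post_tag rows were never touched on soft delete, so restoring
// the post is enough to bring its tags back with it.

function restorePost(PDO $db, int $postId): bool
{
    $stmt = $db->prepare(
        'UPDATE posts
         SET deleted_at = NULL
         WHERE id = :id AND deleted_at IS NOT NULL'
    );
    $stmt->execute([':id' => $postId]);

    // Zero affected rows means the post was never soft-deleted (or never existed).
    return $stmt->rowCount() > 0;
}

// e.g. restorePost($db, 42) from whatever controller handles the restore route.

Leaving the post_tag rows alone on soft delete means a restored post comes back with its tags intact. Quietly deleting them is defensible too, as long as the candidate can tell you why.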
The real test: can they put the pieces together?
The difference between a junior engineer and a senior one isn’t vocabulary.
It’s whether they’ve been burned enough times to know what matters and what doesn’t:
- When to abstract and when to wait.
- When two similar pieces of code are actually different.
- How to structure data so renames don’t silently break everything.
- How to think about deletion, restoration, and state.
- How to reason clearly even when the code is AI-generated.
That’s the part of engineering AI can’t replace.
So if you want to evaluate engineers properly in 2025…
Stop testing their short-term memory.
Let AI handle the memorization—that’s its job now.
Test whether they can put the pieces together.
Because that’s how you know someone has actually been around the block.