System design interviews used to be my favorite part of the full-stack engineering hiring process. They were the one place where senior engineers—the ones who actually understood how the internet worked, how data flowed, how systems behaved under pressure—could shine.

And the best part was always the format:
just two engineers talking through a problem, asking questions, exploring constraints, and stress-testing each other’s ideas.

Exactly how system design happens in the real world.

Because no major architectural decision in a functioning engineering team is ever made by one person drawing boxes alone on a whiteboard.
Real systems are designed collaboratively—with back-and-forth, with tradeoffs, with “wait, what happens if…?”
It’s a team sport.

And for a long time, system design interviews actually reflected that reality.

Then, somewhere between 2022 and 2025, something quietly broke.


The best system design interview I ever had (and I didn’t even get the job)

Back in 2022, Rindle—my startup—was shutting down.
I needed a job. Stressful period, uncertain period, all of it.

And in the middle of that, I had one of the most enjoyable technical interviews of my life.

It started normally enough: introductions, background, a little small talk.
Then the interviewer said:

“Do you want to design something like Twitter, or a viral URL shortener?”

Two options.
Two legitimately interesting paths.

I almost picked Twitter, but I knew exactly where that would go—straight into their timeline fan-out model, one of those famous distributed-systems problems people love to study but rarely ever need in real life. And if you ever did need it, you certainly wouldn’t be solving it for a real use case in 40 minutes.

So I picked the URL shortener.

I’d built one before, and I understood the practical details of generating compact, collision-resistant URLs. And I could talk about it honestly—without pretending to be the kind of distributed-systems purist who studies timeline fan-out models for fun.

We started with the basics—how to generate unique URLs, what character set keeps them short without introducing ambiguity, whether they should be case-sensitive, and whether to hash, encode, or use a bigint. We talked through general table structure, indexing strategies, and even which database to use—and why a relational database is basically perfect for this kind of workload. Simple schema, strong consistency, extremely fast reads, and rock-solid indexing. You don’t need anything exotic.
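
If you’ve never built one, the encode-a-bigint option is easier to show than to describe. Here’s a rough Python sketch, nothing production-grade and all the names mine: take the row’s auto-incrementing primary key and render it in a case-sensitive base62 alphabet.

```python
# Sketch of the encode-a-bigint option: a case-sensitive alphabet gives
# 62 symbols, so even very large IDs stay just a few characters long.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_base62(n: int) -> str:
    """Render a bigint primary key as a short code."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def decode_base62(code: str) -> int:
    """Reverse the encoding, so resolving a link is a primary-key lookup."""
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

And because decoding maps straight back to the primary key, every redirect is a single indexed read. That’s a big part of why the boring relational option wins here.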

Then the conversation naturally moved into scale: what happens when millions of people hit the same link at once, where the system breaks first, how you’d shard it, what you’d use as the shard key, and how to place read replicas in different regions. Edge servers were barely a thing at the time, and honestly, with how strong a relational database’s internal caching already is, we weren’t even sure an additional caching layer would help much outside of truly extreme cases.
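
On the shard-key question, the reasoning is easy to make concrete: every redirect request already arrives carrying the short code, so the code itself is the natural key. A toy sketch, with the shard count and the hash choice both purely illustrative:

```python
import hashlib

NUM_SHARDS = 8  # illustrative; picking the real number is its own discussion

def shard_for(short_code: str) -> int:
    """Hash the short code to pick a shard. Every redirect already has
    the code in hand, so routing never needs a cross-shard lookup."""
    digest = hashlib.sha256(short_code.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS
```

The catch, and the kind of thing a real conversation digs into, is that plain modulo reshuffles keys whenever you add a shard; consistent hashing is the usual answer to that.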

From there we drifted into the fun stuff—storing analytics efficiently, how to handle click tracking without blowing up your database, how to store or batch page-view counts, how YouTube reconciles view counters later, and whether to batch or stream updates. Real architectural back-and-forth, real tradeoffs, none of it rehearsed, none of it templated.
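
The batch-the-counts idea in particular fits in a dozen lines. A sketch under assumptions: `db_execute` here is a stand-in for whatever database client you actually use, and the flush would run on a timer I’m not showing.

```python
import threading
from collections import Counter

_pending = Counter()        # short_code -> clicks since the last flush
_lock = threading.Lock()

def record_click(short_code: str) -> None:
    """Called on every redirect. Note: no database write happens here."""
    with _lock:
        _pending[short_code] += 1

def flush_clicks(db_execute) -> None:
    """Run every few seconds: one UPDATE per hot link, not one per click."""
    with _lock:
        batch = dict(_pending)
        _pending.clear()
    for code, count in batch.items():
        db_execute(
            "UPDATE links SET clicks = clicks + %s WHERE short_code = %s",
            (count, code),
        )
```

The tradeoff is that a crash can lose up to one flush interval of counts, which is exactly the reconcile-it-later territory the YouTube example lives in.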

It was a conversation.
A technical one.
A fun one.

Two seasoned engineers going back and forth, pushing on each other’s ideas, talking through real tradeoffs and real constraints.

And I know I did well because afterward my friend who worked there messaged me:

“He really enjoyed the conversation.”

I didn’t get the job, but I left the interview feeling energized.
Like: “Yeah, this is what engineering is supposed to feel like.”


Fast-forward to 2025. Something broke.

I don’t know exactly when it happened, but system design interviews today feel nothing like that.

Every single one looks identical:

Step 1

You’re dropped into some awkward collaborative drawing tool nobody actually uses in real life.

Step 2

You’re given a broad, vague prompt like:

“How would you extract this one feature from our monolith into a microservice?”

And the frustrating part is that the real answer is usually:

Don’t. Don’t do that.
There’s no business case for it, no scaling issue, no complexity reduction—it’s just architecture theater.

But that answer isn’t on the rubric.

Step 3

The interviewer goes completely silent.
No curiosity.
No redirection.
No “Interesting, walk me through that.”
Just someone watching you draw boxes while they mentally check off items:

  • Mentioned retries ✔
  • Mentioned monitoring ✔
  • Mentioned load balancing ✔
  • Mentioned caching ✔

You can feel the checklist.
I swear at one point, when I mentioned SLIs and SLOs, I watched the interviewer glance down and literally check something off. Not a smile, not a question, not even a “good point.” Just a quiet little tick on a box like we were taking the SAT.

And it’s ridiculous:

It’s not testing architecture.
It’s testing whether you studied the right YouTube videos.


Why this happened

It’s not malice.
It’s pressure.
And insecurity.


1. AI panic

Companies are terrified of hiring someone who just outsources everything to ChatGPT.
I honestly don’t know why. Most companies openly encourage AI use. Half their “productivity initiatives” basically amount to: “Go use AI more.”

And the funny part?
AI is actually pretty good at the thing these interviews claim to measure.
It can brainstorm.
It can poke holes.
It can explain constraints and tradeoffs.
It can have a deeper architectural discussion than some of these interviewers.

So what are they really scared of?

Probably their own jobs.

And instead of confronting that insecurity, they respond with rigid, procedural rubrics—something to cling to, something that feels “objective,” even when it delivers the opposite.


2. Inexperienced interviewers running senior interviews

These interviews used to be run by engineers who had actually built and designed systems.
Now?
It’s often someone who has never designed anything.

I’m absolutely baffled by the number of engineering managers I come across—the people running these interviews—whose LinkedIns show zero actual engineering experience.
They’ve been “Engineering Manager” since day one.
They’ve never built a system, never owned a service, never touched distributed anything.

It’s almost embarrassing.

So instead of relying on judgment they don’t have, they reach for:

  • Rubrics
  • Checklists
  • Scripts

The safest possible structure for the least experienced possible interviewer.


3. Companies want “consistency”

Consistency sounds noble.
Who doesn’t want a fair interview process?

But you can’t standardize human technical judgment without flattening everything into trivia.

And I learned this the hard way recently.

I had a system design interview where the prompt was to break out a billing component from a monolith into a microservice. Straightforward enough. I walked through how I’d move the existing billing logic over first as a lift-and-shift, and then use the strangler pattern to gradually route traffic so nothing breaks during the transition.
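
The strangler step is simple enough to sketch. Every name below is a hypothetical stand-in (the handlers, the dial, the route); the point is just that the cutover is a routing rule you turn up gradually.

```python
import random

CUTOVER_PERCENT = 5  # the dial: start tiny, raise it as confidence grows

def forward_to_billing_service(request):
    """Stand-in for an HTTP forward to the new billing service."""
    ...

def handle_billing_in_monolith(request):
    """Stand-in for the monolith's existing billing code path."""
    ...

def route_billing(request):
    # Strangler pattern: the edge still owns /billing, but a growing slice
    # of traffic shifts to the new service until the old path goes quiet.
    if random.uniform(0, 100) < CUTOVER_PERCENT:
        return forward_to_billing_service(request)
    return handle_billing_in_monolith(request)
```

You raise the dial, watch the new service hold, and only delete the old path once it’s carrying nothing.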

About halfway through, the interviewer interrupted:

“What about the database?”

I said, “Wait—the database is still on the same server as the monolith?”

He said yes.

So I responded with what I still believe is the correct architectural call:

“Then the database needs to be separated from the monolith first—moved into its own standalone database instance. That’s a separate project, and it has to happen before you even think about building the microservice.”

This is textbook sequencing.
Moving the database at the same time as splitting the service isn’t just risky—it’s how you take production down in one shot.

I thought that covered me.

It didn’t.

They rejected me, and the feedback, which I honestly wish they hadn’t given, was that my “system design skills were lacking.”

At the time it really got to me. It made me question myself more than I should have. But looking back, it’s pretty clear: if they couldn’t see I was making the right call, then the interview wasn’t measuring the thing I’m actually good at.

That’s the danger of chasing consistency:
they weren’t evaluating judgment—they were evaluating whether I followed their expected script.

When you flatten architecture into a checklist, you stop measuring thinking and start measuring compliance.


4. Study-guide architecture culture

We now live in a world where system design interviews are so templated, so predictable, so stripped of real context, that there are entire courses teaching people how to “pass” them.

Dozens of courses.

When your profession has spawned an entire cottage industry around “how to pass your system design interview,” you should be worried, because it means the test has become more important than the skill.

And here’s the problem with study-guide culture:

If a system design interview can be passed by memorizing patterns, you’re not hiring builders.
You’re hiring memorizers.

And anyone who has ever taken a high school exam knows exactly how this works:

You cram.
You take the test.
You get the score.
And you forget everything by next Tuesday.

Because you never learned the material.
You learned the performance of the material.

Companies are accidentally optimizing for the performers.

  • Reciting “CQRS” even when the problem doesn’t need commands or queries.
  • Dropping the phrase “eventual consistency” like it’s a spell that unlocks bonus points.
  • Drawing the same five-box diagram every YouTube tutorial draws—load balancer, API gateway, service layer, database, cache.
  • Saying “we’ll shard by user ID” because that’s what every blog post says, even when the dataset doesn’t have users.
  • Proposing Kafka for no reason other than they know Kafka exists.
  • Adding Redis “for caching” without being able to explain what would actually be cached or why.
  • Suggesting a message queue because it’s in their flashcards, not because the system needs one.
  • Talking about “horizontal scalability” long before the MVP has even one real user.

None of that tells you whether they can actually design anything.

It tells you whether they know the script.

And when your hiring process selects for people who know the script…
don’t be surprised when you end up with a team that can’t deliver.


The hard truth nobody wants to say

A good system design interview requires the interviewer to actually be able to design complex systems themselves.

Not just talk about them.
Not just diagram them.
Actually build them.

And that’s not always easy to find internally.

So instead of raising the bar for interviewers, companies lowered the bar for the format—reducing it to something almost anyone can administer.

And instead of questioning whether the process is producing bad hires, they’ve decided the real problem must be AI.


So what should a real system design interview look like?

1. A conversation, not a performance

Two engineers talking through a real problem.

2. Prompts with multiple valid solutions

URL shortener.
Notification system.
Feature flags system.

Anything where the “right” answer is a reasoning process, not a specific design.

3. Allow the candidate to say “don’t do that”

Sometimes the best architecture decision is choosing not to build something.
I’d like to think that somewhere, an interviewer actually wants a candidate to say this—so they can ask, “Why?”

And the “why” behind not doing the thing tells you more about a senior engineer than any diagram ever will.

It reveals judgment.
It reveals pragmatism.
It reveals whether they understand the cost of complexity, the tradeoffs, the impact on the team, the product, the infrastructure, and the future.

There’s no memorized pattern for that.
There’s no cheat sheet.
It’s pure reasoning.

And ironically, it’s the one answer most system design interview formats don’t even make room for.


4. Explore real experience

But skip those awful “What’s the hardest system you’ve ever built?” questions.
Good engineers don’t keep a highlight reel of war stories—we solve problems, ship the fix, and move on.

A better prompt is:

“What’s the last real thing you built—work, side project, whatever?”

Then have an actual conversation about how it could scale, what would break, and where the tradeoffs live.
(Which, again, requires an interviewer who knows how to talk about scaling.)

That shows you everything worth measuring.
Not storytelling ability—actual engineering judgment.

5. Use rubrics to assess thinking, not keywords

Rubrics should capture:

  • tradeoff reasoning
  • clarity of mental model
  • awareness of constraints
  • communication

Not whether they said “eventual consistency.”
Which, honestly, I couldn’t even define if you asked me—it was just the first buzzword that came to mind.


The real point

When system design interviews are conversations, both people leave energized.
Candidates feel respected.
Interviewers actually learn something.
Teams get a sense of how someone thinks and what experiences they’ve actually had—not what they memorized.

When they devolve into box-ticking SAT exams?

Nobody wins.

The company hires people who studied the right checklist.
The candidate feels like they were judged on buzzwords.
And we get worse at evaluating what actually matters:
real engineering judgment.

You’re supposed to be hiring for experience, not memorization.
And experience can’t be extracted from someone through a checklist.