Tech keeps adding more surveillance to interviews instead of asking a simpler question:
Why are we defending a broken system in the first place?

People keep saying the solution to cheating is simple.

Bring everyone back into the office.
Interview in person.
Stare at the candidate.
Control the environment.

As if geography magically fixes a broken process.

Or worse, record their screen, record their face, record their audio, and make them sign an affidavit swearing they are not using outside help.

All of this just to police the use of AI, instead of thinking about what the interview is actually supposed to be measuring.

Because here is the uncomfortable truth:

AI can solve your interview question, but that isn’t the point.
You’re not supposed to be measuring the answer—you’re supposed to be measuring the engineer.
Their judgment.
Their thinking.
Their ability to solve real problems with whatever tools are needed.

Everyone is obsessed with stopping people from “cheating” with AI.
The amount of effort being spent on solving this imaginary problem is almost comical.
But almost no one is willing to admit the real issue:

The hiring process is outdated and evaluates the wrong things.

And honestly, it is starting to make me angry.

Because the solution is not complicated.

Let people work the way real engineers work.
Use AI.
Use search.
Write messy drafts.
Think in private.
Then discuss their decisions afterward.

We’ve lived through this before.
When I first started interviewing, everything was done on paper.
Then interviews moved to computers, and for years you still were not allowed to use Google.
You couldn’t look up a function.
You couldn’t search documentation.
You were expected to have everything memorized.

Eventually the industry adjusted.
Everyone realized that real engineers use Google constantly, so banning it made no sense.

What is happening with AI today is the exact same thing.
We are treating a normal, everyday engineering tool like a threat instead of embracing what it enables.

And yet companies cling to the old system like it is some sacred ritual.
Timed puzzles.
A stranger watching you debug.
No tools.
No references.
No thinking time.
No realism at all.

And for what?

To stop “cheating”?

The minute you get hired, you will use AI, Google, StackOverflow, the docs—all of it.
The company will encourage it.
They will expect it.
They will rely on it.

But if you use those tools in the interview, suddenly you are dishonest.

What are we doing here?

Let me give you an example.

Years ago, an interviewer kept saying "IC." I had never heard the term before. Every time he said it, I felt like I had missed something obvious. Eventually I asked.
“Individual contributor.”
He said it casually. No big deal.

But in that moment I felt stupid.
Not because I lacked skill, but because interviews reward tiny pieces of insider culture that have nothing to do with the work.

It’s a good example of how interviews often gatekeep on things that don’t actually matter. I don’t want to be qualified, or disqualified, by whether I know an acronym.

And it’s not just jargon that creates these moments. Sometimes the structure of the interview itself works against how real engineers think.

Here is another example.

I had two back-to-back SQL interviews recently. My query was clearly returning the wrong results, so I said so and started debugging. The interviewer stopped me and asked,
“What are the results telling you?”

I said, “That they’re wrong,” with a small, awkward laugh.

He pressed again.
“No, what does your gut tell you is wrong?”

My gut was telling me I didn’t get the job.
But in reality, I was just trying to think.
Trying to solve a problem that probably looked trivial to someone who had seen it a dozen times.
Quietly.
Because real problem-solving is internal, not performative.

That is the whole point.
You cannot articulate real thought on command.
And you shouldn’t have to.

Interviews have become performances.
Stage plays.
Simulations created by people who forgot what real engineering signal looks like.

And AI just made something obvious that we used to be able to pretend wasn’t there.

The ironic part is this:

Everyone keeps blaming AI for breaking interviews.
But AI didn’t break anything.
We’ve known for years these formats were broken.
Fermi problems have never been good signals.
LeetCode has never hired great engineers.
And memorizing git commands has never made anyone a better engineer.

So here is the simple path forward.

Stop banning the tools people actually use.
Stop designing interviews around panic and memorization.
Stop mistaking pressure for signal.

Start evaluating the things that matter:

  • Judgment.
  • Reasoning.
  • Tradeoffs.
  • Quality of thought.
  • How someone approaches a real problem—not a staged one.

Real engineering does not look like a whiteboard puzzle.
It never has.
It never will.

The sooner we stop interviewing for performances,
the sooner we can start hiring actual engineers.