“How do you approach testing?”

I’ve been asked this in every interview I’ve had, and my answer tends to surprise people.

Here’s the honest version, shaped by years of shipping real products, not theory.

I don’t believe in 100% test coverage as a universal rule.

In theory, it’s a great goal. In practice, it’s hard to enforce without creating bad incentives.

When teams mandate it, they often end up optimizing for passing tests, not useful tests. You get coverage numbers that look great, but tests that catch nothing and protect nothing.

That doesn’t mean testing isn’t important.
It means where and when you test matters more than raw percentages.

Where near-100% coverage does make sense

  • Libraries
  • Frameworks
  • Core abstractions

When I evaluate an open-source dependency, I look at:

  1. How active it is
  2. How many unresolved issues it has
  3. What its tests look like

Minimal or nonexistent tests in a foundational library are a major red flag.

Application code is different

Early on, large portions of the product change rapidly by necessity. Heavy test suites at this stage often slow learning, and the tests themselves have to be rewritten every time the product shifts.

Over time, the code that needs tests reveals itself:

  • Edge cases
  • Complex business logic
  • Critical paths that keep resurfacing bugs
  • Code that’s hard to test manually (background jobs, async workflows; see the sketch below)
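
For that last category, here’s a minimal sketch of what an automated check can look like, assuming a hypothetical async welcome-email job. All of the names are illustrative:

```python
# Hypothetical async background job: hard to verify by clicking around,
# easy to verify with a small test that runs it directly.
import asyncio

async def send_welcome_email(user_email, mailer):
    if "@" not in user_email:
        raise ValueError(f"invalid email: {user_email}")
    await mailer.send(to=user_email, subject="Welcome!")

class FakeMailer:
    """Records sends in memory; the only thing faked is the network edge."""
    def __init__(self):
        self.sent = []

    async def send(self, to, subject):
        self.sent.append((to, subject))

def test_welcome_email_reaches_the_new_user():
    mailer = FakeMailer()
    asyncio.run(send_welcome_email("ada@example.com", mailer))
    assert mailer.sent == [("ada@example.com", "Welcome!")]
```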

A general rule I like to follow:

Get it working. Clean it up. Ship it.
If you have to touch it again because of a bug, write a test.
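
To make the second half of that rule concrete, here’s a minimal sketch: a hypothetical pricing helper where an over-100% coupon once pushed the total negative, and the regression test written only after that bug surfaced. The names and numbers are illustrative:

```python
# Hypothetical pricing helper. The clamp to zero is the bug fix; the first
# test exists only because this code shipped a bug once.
def apply_discount(total_cents, percent_off):
    discounted = total_cents - round(total_cents * percent_off / 100)
    return max(discounted, 0)  # the fix: a 110% "VIP" coupon used to go negative

def test_oversized_coupon_never_produces_a_negative_total():
    # Regression test written after the bug, not before.
    assert apply_discount(1999, 110) == 0

def test_ordinary_discount_still_works():
    assert apply_discount(1000, 25) == 750
```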

Critical flows matter most

One more thing I always emphasize:

Revenue-generating flows must be tested on every release.

I’ve seen this more than once: a deploy goes out, signups silently break, and no one notices for days. That’s catastrophic.

Even if you’re eventually able to dig through logs and figure out who tried to sign up, that follow-up email is painfully awkward, and almost no one ever returns.

You’ve permanently lost their trust.
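
In practice, the fix can be a single automated smoke check per revenue flow, run on every release. A minimal sketch, assuming a hypothetical staging environment; the URL, payload shape, and expected status code are all illustrative:

```python
# Hypothetical release smoke test: does the signup flow still work at all?
import uuid

import requests

STAGING = "https://staging.example.com"  # hypothetical environment

def test_signup_flow_is_alive():
    email = f"smoke+{uuid.uuid4().hex[:8]}@example.com"
    resp = requests.post(
        f"{STAGING}/api/signup",
        json={"email": email, "password": "a-long-throwaway-password"},
        timeout=10,
    )
    # If this fails, block the release: nobody can become a customer.
    assert resp.status_code == 201
```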

Manual testing still matters

Early-stage testing doesn’t need to be fancy.

Some of it should be manual:

  • Can someone sign up?
  • Can they sign in?
  • Can they pay?

Simple checklists.
Clear ownership.
Every release.

Testing is a tool for managing risk, not eliminating it.

Bonus: how AI is changing this

AI makes tests much faster to write, especially when the code is well abstracted and built on sound design principles.

But there’s a trap I’ve seen firsthand:

AI reaches for mocks far too often.

Mocks are necessary at the edges—networks, time, third-party services. But overuse them and you can make almost any test pass, even when the underlying logic is wrong.

At that point, the test is only proving that your mocks behave the way you told them to.
Not that the system actually works.
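
A small sketch of the difference, using Python’s unittest.mock; price_order and the numbers are illustrative:

```python
# The first test only restates its own mock; the second exercises real
# pricing logic and fakes only the edge (the payment gateway).
from unittest.mock import MagicMock

def price_order(items, gateway):
    total = sum(qty * unit_cents for qty, unit_cents in items)
    gateway.charge(total)
    return total

def test_proves_nothing():
    pricing = MagicMock()
    pricing.total.return_value = 500          # we invented this number...
    assert pricing.total([(2, 250)]) == 500   # ...and then asserted it back

def test_exercises_real_logic():
    gateway = MagicMock()                     # mocked only at the edge
    total = price_order([(2, 250), (1, 100)], gateway)
    assert total == 600                       # the real arithmetic is checked
    gateway.charge.assert_called_once_with(600)
```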

One more thing that matters just as much:

All AI-generated code must be reviewed. Every line, including tests.

AI isn’t validating correctness—it’s pattern matching. Used lazily, it just helps you ship bugs faster.

If your tests don’t increase confidence in a release, they’re not doing their job.