AI · DecisionOps · Future of Work · Philosophy

"The Problem Is Inevitable" — and That's Why AI Won't Replace You

Nicolas Izcovich · 15 March 2025 · 4 min read
"The Problem Is Inevitable" — and That's Why AI Won't Replace You

There's an old idea from the British Enlightenment that sounds pessimistic at first, but it's actually brimming with optimism:

The problem is inevitable.

Not a problem. The problem.

The idea is simple: progress doesn't eliminate problems. It changes their shape.

This idea quietly powered the Enlightenment. And today, it explains something most AI conversations miss entirely.


Why this matters right now

We're bombarded with hype that AI is going to:

  • Replace analysts
  • Replace developers
  • Replace designers
  • Replace decision-makers

Because it "learns."

But let's get real for a second:

AI only works in worlds where the problem is already defined.

And the real world doesn't work like that.


Supervised learning reveals the limit

Let's talk about supervised learning, because it's the cleanest example.

Supervised learning assumes:

  • A defined objective
  • A known label
  • A stable feedback loop
  • A past that meaningfully represents the future

In other words: the model doesn't discover the problem. It inherits it.

For the tech folks: when we train models, we're constantly tweaking to avoid "overfitting," which is an upfront admission that we're molding the thing to fit the data we already have.
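To make that concrete, here's a minimal sketch. The stack (scikit-learn, numpy) and the toy label rules are my assumptions, not anything from a real system; the point is that the objective, the labels, and the feedback loop are all fixed before training starts, and the moment the world moves, the "solved" problem stops being the real one.

```python
# Minimal sketch: supervised learning inherits the problem definition.
# (scikit-learn / numpy and all data rules here are invented for illustration.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Everything that defines "the problem" is fixed before .fit() is called:
# the features, a label rule someone chose, and an objective baked into the model.
X_past = rng.normal(loc=0.0, size=(1000, 3))
y_past = (X_past[:, 0] > 0).astype(int)  # labels depend on feature 0

model = LogisticRegression().fit(X_past, y_past)
print("accuracy on the problem as framed:", model.score(X_past, y_past))

# Then the world moves: same features, but the rule that matters has changed.
X_future = rng.normal(loc=0.0, size=(1000, 3))
y_future = (X_future[:, 1] > 0).astype(int)  # now it's feature 1 that matters
print("accuracy after the problem moved:", model.score(X_future, y_future))
```

The first score is near-perfect; the second collapses to roughly a coin flip. Nothing inside the model can tell it that the label rule changed — that re-framing has to come from outside.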

And here's that Enlightenment gem staring us in the face:

The moment the model gets good at the task, the real problem has already moved.


The "Inevitable Problem Loop"

This is the framework most AI discussions are missing:

  1. Humans solve a problem
  2. That solution reshapes the environment
  3. New constraints, incentives, and failure modes appear
  4. A new problem emerges — one that was invisible before
  5. Humans must reinterpret reality again

AI fits neatly into step 2.

Humans live in steps 3–5.
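A toy numeric sketch of that loop, with every number invented purely for illustration: an optimizer keeps improving a fixed proxy metric while the environment adapts underneath it, and nothing inside the loop can notice that the proxy has drifted away from the goal.

```python
# Toy illustration of the Inevitable Problem Loop. The proxy metric and
# drift rates are made-up numbers; what matters is the shape of the loop.
proxy = 0.0   # what the optimizer sees and improves
drift = 0.0   # how far the proxy has decoupled from the real goal

for cycle in range(1, 6):
    proxy += 1.0               # steps 1-2: solve as framed, ship the solution
    drift += 0.3 * cycle       # steps 3-4: the reshaped environment pushes back
    true_goal = proxy - drift  # what we actually cared about
    print(f"cycle {cycle}: proxy metric = {proxy:.1f}, real goal = {true_goal:.1f}")

# Step 5 never happens inside this loop: re-framing the problem (redefining
# the proxy) is exactly the move the optimizer cannot make on its own.
```

Run it and the proxy climbs forever while the real goal plateaus and then falls. The loop itself produces no signal saying "your metric is stale"; a human has to interpret that.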


Why this can't be automated away

AI is instrumental. It optimizes within a frame. It does not question:

  • whether the frame is still valid
  • whether the metric still means what it did last year
  • whether the old solution created new problems

These are not compute problems. They're not data problems. They're interpretation problems.

And interpretation is inherently human.


Why AI feels threatening anyway

AI spooks people because it's killer at the stuff that used to pass for deep thinking.

But honestly, the real juice in any job was never in the grunt work. It was in:

  • Framing the right question
  • Knowing when a metric stopped meaning what it used to mean
  • Understanding why something worked — so you could adapt when it didn't
  • Explaining trade-offs to other humans

AI accelerates execution. It amplifies the rate at which new problems appear.


The wall AI keeps hitting

AI doesn't fail because it's weak.

It fails because progress creates novelty. Novelty breaks models. Every time.

This is the epistemological barrier — not a compute problem, not a data problem, but a knowledge problem.


A letter of hope (and responsibility)

If you're a professional reading this — analyst, engineer, PM, founder — here's the good news:

You are not competing with AI.

You are operating upstream from it.

Your job is not to be faster at answers. Your job is to:

  • Decide which questions are worth asking
  • Notice when yesterday's solution became today's problem
  • Navigate trade-offs that no dataset can label yet

AI will make this more important, not less.

The future doesn't belong to people who "out-compute" machines.

It belongs to people who can reinterpret reality when the problem inevitably changes.

And it always will.