When Legacy Isn’t Years—It’s Weeks

I love working with AI agents. Not because they’re clever or magical, but because they expose something boring about how we work: we are creatures of habit, and habits harden fast.

When I shipped an agent into a team’s workflow recently, the first feedback surprised me. It wasn’t about the agent’s performance. It was this: “This workflow is clean, consistent, and well-established.” The workflow had been running for two months. Two. Months.

That’s when I realized something uncomfortable. In the world of AI agents, legacy doesn’t mean years. Legacy can mean January to March. A tool that shipped eight weeks ago feels permanent because the agent treats it as permanent. And because the agent is now the system executing that habit at scale.


The Speed of Habit Calcification

Humans are adaptable. We change tools, shift processes, question assumptions. We complain about them first, then we change them. It’s slow and messy, but it happens.

Agents are different. They don’t complain. They don’t question. They execute.

Give an AI agent a workflow—any workflow, even one you know is temporary—and something shifts. The agent learns it, optimizes around it, and starts making decisions based on it. The workflow is no longer an experiment. It’s infrastructure.

The problem isn’t that agents follow instructions. The problem is that they follow them consistently, and consistency feels like truth.

Within weeks, that temporary tool becomes the assumed input for the next tool. The next agent in the pipeline assumes it’s there. The team assumes the agent is depending on it. And suddenly, changing that original tool isn’t a quick adjustment—it’s a breaking change in a system the agent has already woven itself into.


What Habits Get Encoded in a System

The specifics matter less than the pattern. When an agent lives inside your workflow, it learns and locks in:

  • The tools you’re comfortable with. Not the best tools. The ones you know. The agent will route around friction in familiar tools far longer than it should, optimizing the usage pattern instead of questioning the tool itself.
  • The workflows you’re used to. The handoff from one system to another. The approval gates. The data format that everyone recognizes even though it’s suboptimal. The agent doesn’t streamline this—it replicates it, at scale, with precision.
  • The assumptions you stopped questioning. Most teams have them. “We always export as CSV because that’s what the downstream system accepts.” “We need a human checkpoint here because that’s how we’ve always done it.” “This tool chain works because we’ve built workarounds into every step.” An agent doesn’t break the assumption—it bakes it deeper into the system.

None of this is the agent’s fault. The agent is doing exactly what it should: executing the workflow you gave it, consistently and at scale. The problem is that consistency compounds assumptions.
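To make that concrete: none of the code below comes from any real team’s pipeline; it is a hypothetical sketch of how a “temporary” assumption like the CSV export above tends to look once an agent’s tooling has hardcoded it.

```python
# Hypothetical pipeline step. The CSV format here is an assumption
# baked into the tool definition, not a requirement of the data --
# exactly the kind of choice an agent will execute forever without
# ever questioning it.
import csv
import io


def export_for_downstream(records: list[dict]) -> str:
    """Serialize records as CSV, "because that's what the downstream
    system accepts" -- even if nothing downstream would notice a
    switch to a richer format."""
    if not records:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()


# Once an agent calls this consistently, every consumer starts
# parsing CSV, and the format stops being a choice. It becomes
# infrastructure.
print(export_for_downstream([{"id": 1, "status": "ok"}]))
```

The point of the sketch is that nothing in it is wrong. Each run is correct and consistent, which is precisely why the assumption inside it never surfaces for review.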


Why Inspect & Adapt Becomes Mandatory, Not Optional

Inspect & Adapt is often taught as a principle—something nice to do if you have time. In the presence of AI agents, it becomes operational hygiene.

Here’s why: without active inspection, your workflows don’t degrade gracefully. They calcify.

In a human-operated system, drift is visible. Someone notices the workaround. Someone complains about the tool. Someone tries a new approach. Feedback is noisy, but it exists. Change is slow, but it’s possible.

With agents, the feedback loop vanishes. The agent doesn’t complain. The system appears stable because the agent is executing consistently. The tool doesn’t look broken because the agent has built a perfect optimization layer around it.

Then one day, you try to change something in that “temporary” workflow, and you realize:

  • Three other processes now depend on the agent’s output format
  • The agent’s decision-making is built on assumptions from the original tool
  • The team has internalized the workflow as “how this gets done”
  • Changing it requires you to rebuild the agent, revise the downstream processes, and reset the team’s muscle memory

All from a choice made eight weeks ago that felt temporary.

This is where Inspect & Adapt stops being about continuous improvement and starts being about basic maintenance. Not because improvement is nice. But because without it, systems don’t just age—they calcify. And the calcification happens in months, not years.


The Question That Matters

I keep asking myself the same question:

How many hours per week am I willing to invest to stay near the current productivity ceiling, before my agents politely optimize around yesterday’s tooling?

That’s the trade-off. Not whether to change. But how much constant gardening the system requires to stay unblocked.

And it’s a real cost. Every week you spend explaining to an agent that a tool is no longer optimal, instead of changing the tool itself, is a week you’re paying the tax of habit. Every integration you build around a temporary system is a debt against the next time you want to simplify.

Some of that tax is unavoidable. Systems have inertia. But most of it—the part that grows exponentially—comes from skipping Inspect & Adapt when things look stable.

The agents are fine. They’re not the problem. The problem is us—we built systems that resist change better than we resist habit. And now we’ve equipped those systems with agents that can execute habits perfectly, at scale, without ever questioning them.

The question isn’t whether to inspect and adapt. The question is: how many months before you no longer can?

Boris Heuer
AI Engineer & Consultant
