
Stop chasing AI drift. It’s time to shift from "human in the loop" to "human in control."

  • Writer: Jonathan Gordon
  • Feb 3
  • 4 min read

Updated: Mar 10

[Image: the word "DRIFT" in white on a black background, blowing away left to right like smoke]

Takeaways

  • Most teams don't have a code quality problem — they have a drift problem.

  • AI accelerates drift because it has no access to a system's history, reasoning, or intent.

  • The answer isn't better inspection after drift happens — it's governance that prevents it structurally.

  • "Human in the loop" means chasing drift; "human in control" means expressing intent through constraints that keep AI within bounds.


When we create a document and print it, we expect that what comes out of the printer will match what we intended. And that usually happens. In the world of software, however, design (what you want) and code (what you get) live in different worlds created by different individuals who are evolving the product at different speeds.

The gap between what was designed and what gets built grows quietly, invisibly — until suddenly, it’s a problem.

Teams using AI to design and build need to control problems where they originate, not simply solve them downstream after they've happened.

We need "human in control" instead of just "human in the loop."

How we build and evolve AI tools in this moment is critical. Can we empower humans with better AI tools that ensure better outcomes?


The drift problem

Design systems define how interfaces should look and behave. Code implements those interfaces. When the code no longer reflects the design — that’s drift: the gap between what was designed and what gets built.

Drift accumulates gradually across hundreds of small decisions:

  • A designer updates button padding in the design system.

  • The code doesn’t update.

  • A developer adds a variant for an edge case.

  • The design system doesn’t reflect it.

Multiply this across components and team members over time. By the time someone says, “That’s not what we designed,” or “This doesn’t match the library,” finding the source of the gap can be like archaeology — tracing changes, reconciling competing truths, and deciding which is authoritative.


The looming drift crisis

AI generates working code in minutes — sometimes seconds. Amazing. But working code and correct code aren’t the same. Functionality doesn’t guarantee alignment with the original design intent.

When AI makes implementation decisions independently, it doesn’t have access to the system’s evolution, architectural decisions, or reasoning behind specific patterns. Misinterpretations happen. Mistakes get made. AI rapidly accelerates the volume of drift that occurs. Humans become drift chasers — catching divergence only after it happens.

Inspection is expensive because drift is subtle. You’re looking for inconsistencies in output that looks mostly right.

Chasing drift once it happens is the wrong approach entirely. The answer isn’t better inspection — it’s tools that prevent drift from forming in the first place.

Better governance tools deliver better outcomes

Better tools don’t just make us more productive; they change what can go wrong. The evolution of developer tooling offers lessons here.

Version control gave us governance over parallel work. Multiple developers could work simultaneously because the system maintained coherence. Drift between a developer’s understanding and the code’s reality surfaced immediately instead of accumulating silently.

Type systems gave us governance over code contracts. Systems enforced alignment between what functions promised and delivered. Drift between interface and implementation became detectable at compile time.

Linters gave us governance over style. Systems maintained conventions automatically. Drift between team standards and actual code became continuously corrected.
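The common pattern in all three — continuous structural enforcement rather than periodic inspection — can be sketched as a minimal drift check that runs the way a linter does, failing the moment design and code disagree. Everything below (the token names, the `check_drift` helper) is hypothetical, for illustration only:

```python
# Sketch of a linter-style drift check: compare design-system tokens
# against the values the code actually uses. All names are hypothetical.

def check_drift(design_tokens: dict, code_values: dict) -> list[str]:
    """Return a human-readable report of every mismatch."""
    report = []
    for name, expected in design_tokens.items():
        actual = code_values.get(name)
        if actual is None:
            report.append(f"{name}: defined in design system, missing in code")
        elif actual != expected:
            report.append(f"{name}: design says {expected!r}, code says {actual!r}")
    return report

design = {"button.padding": "12px", "button.radius": "4px"}
code = {"button.padding": "8px", "button.radius": "4px"}

for issue in check_drift(design, code):
    print(issue)
```

Run continuously in CI, a check like this turns drift from something discovered in an audit into something that cannot merge in the first place.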

Governance through continuous structural enforcement, not periodic inspection, forms the foundation of better software outcomes.

Governing drift in design systems

Design systems and codebases should represent the same component architecture. They are, in effect, two views of the same truth, but they live in separate tools. Keeping them synchronized requires manual effort: exporting tokens, translating changes, propagating updates, and reconciling divergence are the norm.

Each manual step is an opportunity for drift. Each context switch breaks synchronization, and each translation allows the intent and implementation to diverge.

Tools that govern coherence maintain alignment as design and code evolve. When a designer updates the design system, changes should flow to code without translation, and when a developer refactors code, the design system should reflect the change without documentation lag.
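One way changes can flow to code "without translation" is generation from a single token source: the stylesheet is regenerated from the design system rather than updated by hand. A minimal sketch — the flat token structure and CSS output are assumptions, not any particular tool's format:

```python
def tokens_to_css(tokens: dict, selector: str = ":root") -> str:
    """Emit CSS custom properties from a flat design-token dict,
    so a design-system update regenerates the stylesheet instead
    of being translated manually (a manual step where drift forms)."""
    lines = [f"{selector} {{"]
    for name, value in sorted(tokens.items()):
        # e.g. "button.padding" becomes "--button-padding"
        lines.append(f"  --{name.replace('.', '-')}: {value};")
    lines.append("}")
    return "\n".join(lines)

print(tokens_to_css({"button.padding": "12px", "color.primary": "#0055ff"}))
```

When the designer changes `button.padding`, the next generation picks it up; there is no second copy of the value to fall out of sync.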

The work doesn’t change. Designers still design; developers still code.

Better tooling governs coherence continuously rather than requiring periodic reconciliation.

Why this matters now

Drift compounds: as systems grow, as teams scale, and as time passes, the gaps widen.

Current approaches treat drift as inevitable:

  • Regular audits to find what drifted

  • Documentation reminding people to stay synchronized

  • Code review to catch drift before merge

  • Inspection of AI output to verify it matches specs

These are downstream responses to drift that already happened.

Better inspection isn’t the opportunity. Governance is.

The shift

Human-in-the-loop means continually running after something that’s already dispersing—and review volume scales with AI speed. The cognitive load isn’t “Did this work?” It’s “Did AI interpret intent correctly?” That’s harder than expressing intent directly. By the time you’re reviewing output, you’re debugging AI’s interpretation rather than validating your own direction.

Human-in-control means maintaining coherence at scale: expressing intent through constraints that keep AI execution within bounds. When constraints change, effects propagate automatically. Cognitive energy goes to direction, not verification. You’re governing the properties that matter — coherence becomes structural, not something you chase.
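Expressing intent as constraints rather than reviewing output afterwards might look like the following sketch: a human-defined set of allowed values that any AI-generated component spec must satisfy before it is accepted. The schema and component shape are hypothetical:

```python
# Hypothetical constraint gate: AI-generated output is accepted only
# if it stays within the bounds a human expressed up front.

CONSTRAINTS = {
    "variant": {"primary", "secondary", "ghost"},
    "size": {"sm", "md", "lg"},
}

def within_bounds(component: dict) -> bool:
    """True only if every constrained property uses an allowed value."""
    return all(
        component.get(prop) in allowed
        for prop, allowed in CONSTRAINTS.items()
    )

ai_output = {"variant": "primary", "size": "xl"}  # "xl" was never designed
print(within_bounds(ai_output))  # False: rejected before it becomes drift
```

The human's effort goes into maintaining `CONSTRAINTS` — direction — while rejection of out-of-bounds output is automatic, not something caught in review.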

Drift is inevitable in systems where tools don’t govern coherence.

AI can generate code. No doubt about it. What we choose to build around that powerful capability determines whether humans maintain coherence or chase after it.

--------------------------

Jonathan Gordon is the founder/CEO of ReWeaver AI. He has worked as a user-focused software designer leading design and engineering teams at Google, Microsoft, Oracle, Facebook, SAP, and others.
