We're building fast with AI in the wrong direction: Why AI development without users fails at scale

  • Writer: Jonathan Gordon
  • Feb 3
  • 4 min read

Updated: Mar 10

Convex And Concave, MC Escher, 1955

Takeaways

  • AI lets you build the wrong thing at industrial scale — speed without user understanding is just faster failure.

  • The invisible work of observing real users can't be prompted, automated, or skipped.

  • We need infrastructure that encodes user needs to guide AI — the same way compilers and linters guide code.

  • The choice isn't speed vs. quality; it's speed with foundational guidance vs. speed without it.


I started my career as a software developer. I built what I was sure was amazing software that people would love. I was so proud of it that I went to watch users actually use it.

What I found was humbling: they couldn't even see the feature I'd spent months on. It was hidden behind sticky notes plastered across their screens — workarounds for constant workflow interruptions that nobody had told me about. Because I had never asked.


That moment shaped everything that came after. The lesson was simple and brutal: before trying to build better software, understand the actual pain first.

I've spent the decades since lobbying to keep users "in the room" where software gets made. Now, as the industry races toward AI-first autonomous development, I see the same blind spot on a much larger scale. We're not asking: are we building the right thing?

The gap between design thinking (what is needed) and software building (what you get) has always been the hardest problem in our industry. AI doesn't close that gap. It widens it.

AI promises software built 24/7. Design variations generated overnight. Ship faster. Iterate endlessly. Let the algorithms optimize everything.

What this actually means: we can now build the wrong thing at industrial scale. Without real users in mind, the gap between what people need and what they get doesn't shrink. It compounds.

 

The problem that created the discipline

In the early days of software, engineers built what made technical sense: clean architectures, elegant algorithms, systems that worked beautifully from the inside. Users couldn't figure out how to use them.

The fix wasn't better engineering or faster coding. It was stopping to understand what users actually needed before writing a single line.

We learned to observe real work environments. We discovered that expert assumptions about "obvious" workflows were routinely wrong. Understanding what users need required careful observation, not better requirements documents. Practitioners needed to feel the pain.

Software that ignored real user needs failed, no matter how technically impressive it was.

Now we have AI agents generating variations at scale. Multiple design options produced while you sleep sounds productive. But if those variations aren't grounded in actual user context — how people really work, what frustrates them, what they're actually trying to accomplish — you've just created multiple polished solutions to the wrong problem.

Speed without direction isn't progress. It's motion. And if you're building the wrong thing, building it faster just means you fail faster.

 

The invisible work that makes software actually work

When you use a product that "just works," you don't see what went into it:

  • The designer who noticed users consistently misinterpreting a label.

  • The research session that revealed a workflow assumption was backwards.

  • The usability test that caught a critical error state before launch.

  • The contextual observation that uncovered a need that the requirements never mentioned.


There's no commit message for "Understood the actual problem." But without this work, you end up with software that's technically sound but practically unusable.

AI agents don't do this invisible work. They optimize what you tell them to optimize. They execute the requirements you give them. They can't tell you when your understanding of the problem is fundamentally wrong.

An agent that perfectly executes a misunderstood requirement is worse than slow manual development that catches the misunderstanding early. You've automated building the wrong thing — at scale, with confidence.

Understanding users requires human judgment: what matters versus what doesn't, real friction versus surface complaints, and what patterns mean in context. It requires recognizing when your own assumptions are wrong—a kind of humility that doesn't compile.

You can't prompt your way out of not understanding your users.

Features still need to solve real problems, not imagined ones. Interfaces still need to match how people actually think — not how we assume they do. You can ship software with zero bugs and still fail to solve the real problem. You can have beautiful code that implements a beautiful misunderstanding.

Faster tools don't eliminate the need to understand what you're building and why. They just make it more tempting to skip that step.

 

The path forward

None of this means we shouldn't use AI tools — the speed is real, the capability is real. But speed is only valuable when pointed in the right direction.

The solution isn't to slow AI down. It's to build the infrastructure that keeps it aimed at the right outcomes.

Think about how we build software today. We don't just write code and hope it works. We have compilers that enforce syntax rules, linters that catch common mistakes, type systems that prevent entire classes of errors, and testing frameworks that validate behavior. These systems don't slow developers down; they guide them toward correct implementations.
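To make the analogy concrete, here is a minimal sketch of what "a linter for user alignment" could look like: user needs, learned through observation, encoded as executable checks that a proposed design must pass before AI-generated variations move forward. Everything here — `UserNeed`, `lint_spec`, the example constraints — is hypothetical and illustrative, not a description of any real tool.

```python
# A sketch of user-need constraints as executable checks.
# All names and constraints here are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class UserNeed:
    """One observed user need, encoded as a checkable constraint."""
    description: str
    check: Callable[[dict], bool]  # True if a proposed spec satisfies the need

def lint_spec(spec: dict, needs: list[UserNeed]) -> list[str]:
    """Like a linter, but for user alignment: return every violated need."""
    return [n.description for n in needs if not n.check(spec)]

# Example constraints, derived (hypothetically) from field observation:
needs = [
    UserNeed("checkout must take at most 3 steps",
             lambda s: s.get("checkout_steps", 99) <= 3),
    UserNeed("error messages must suggest a fix",
             lambda s: s.get("errors_have_fix_hint", False)),
]

# An AI-generated design variation, expressed as a spec:
proposed = {"checkout_steps": 5, "errors_have_fix_hint": True}

print(lint_spec(proposed, needs))
# -> ['checkout must take at most 3 steps']
```

The point isn't the code; it's that the constraint came from watching real users, and once encoded, it checks every generated variation automatically — the same way a type system checks every build.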

We need the same kind of infrastructure for user alignment. Not prompts that ask AI to "consider usability." Not checklists that agents follow mechanically. Foundational systems that encode what we know about user needs, enforce alignment between intent and outcome, and catch drift between design decisions and user reality before it compounds.

This infrastructure sits between human understanding and AI execution — ensuring that when AI generates at scale, it's guided by actual user needs, not just aesthetic optimization.

It's governance, not gatekeeping.

Conclusion

The methodologies we developed over forty years — careful observation, validation with real users, healthy skepticism about our own assumptions — were not process overhead. They were how we learned to build what actually works.

We can move faster. We have better tools. We can automate more. But we can't automate understanding.

What we can do is translate human understanding into constraints that guide AI execution — encoding user needs in ways that shape autonomous behavior, making user-centered development the path of least resistance rather than an extra step.

The choice isn't between speed and quality. It's between speed with foundational guidance and speed without it.

One builds the future. The other just builds faster — and breaks things.

----------------

Jonathan Gordon is the founder/CEO of ReWeaver AI. He has worked as a user-focused software designer leading design and engineering teams at Google, Microsoft, Oracle, Facebook, SAP, and others. 

 
 
 
