For decades, we’ve repeated a warning in software development:
The later you make a change, the more expensive it becomes.
Many of us learned this from Barry Boehm’s famous cost-of-change curve. Based on research from the 1970s and 1980s, it showed a steep rise: changes were cheap early in a project but dramatically more expensive later, especially after release.
In that era, the curve made perfect sense.
Software was expensive to build. Tools were primitive. Testing was manual. Integration was painful. A late requirement change could mean months of redesign, recoding, and retesting.
Late change really was disastrous.
But here’s the issue: We’ve treated that curve as timeless.
It wasn’t a law of physics. It was a snapshot of how software worked at the time.
And the economics of software have changed.
The Curve Has Been Flattening for Decades
Long before agile became mainstream, the cost of change was already falling.
• IDEs made coding and debugging dramatically faster.
• Automated tests and CI reduced the fear of breaking things.
• Modular architectures reduced ripple effects across systems.
Each of these flattened the curve a little.
Then agile came along.
Agile didn’t magically make change cheap. It recognized that change was becoming affordable—and built a way of working that leveraged it.
Kent Beck captured this perfectly with the subtitle of his book, Extreme Programming Explained: “Embrace Change.”
That wasn’t motivational fluff. It was an economic claim: if we can keep the cost of change low, we can learn continuously.
Short iterations. Customer collaboration. Refactoring. Backlogs instead of fixed master plans.
All of these reduced the penalty for being wrong.
AI Is Flattening the Curve Again
Now we’re seeing another shift.
AI-assisted development tools are reducing the time required for humans to write and revise code.
That may sound simple. It isn’t.
When coding gets faster, experimentation gets cheaper. Trying an idea is cheaper. Revising an idea is cheaper. Throwing away an approach and trying another is cheaper.
AI is making code feel less like construction and more like revision.
And when that happens, the bottleneck shifts.
The cost of change becomes less about writing code and more about waiting for feedback.
In other words, the primary cost of change is increasingly feedback delay rather than development effort.
What Hasn’t Gotten Cheaper
Understanding users is still hard.
Product discovery is still hard.
Organizational decision-making is still hard.
The hard part hasn’t disappeared—it’s moved.
For decades, we acted as if coding was the hardest part of software development. That’s no longer true. Today, the hardest part is often figuring out what to build and learning whether it works.
But once we have feedback, acting on it is cheaper than ever.
That’s the key economic shift.
What This Means
Many organizations still operate with approval processes designed for a world where late change was catastrophic.
In many companies, the slowest part of delivery isn’t coding.
It’s waiting for permission.
If the cost of change has fallen—and it has—then the biggest risk isn’t changing too late.
The biggest risk is learning too late.
Requirements don’t need to be perfect. They need to be revisable.
Locking them down early may reduce change—but it increases the risk of building the wrong thing.
So yes, do discovery. Think carefully. Understand users.
But don’t over-invest in trying to get everything right up front.
Modern software development rewards adaptability more than accuracy.
If you still fear change, you may be managing like it’s 2005.
Stop trying to perfect requirements.
Instead, perfect your feedback loop.