You’re a CTO in a leadership meeting. Someone asks about your AI coding strategy. You say something measured about “exploring the space” and “evaluating tools.” You sound responsible. You sound prudent.
You’re also falling behind.
I’ve been building and leading engineering teams for years. I’ve seen hype cycles come and go. I was skeptical about AI coding tools - dismissed them as glorified autocomplete. Useful for boilerplate, maybe. Not for real systems.
Then I actually tried agentic coding. Not chat. Not Copilot autocomplete. Not Cursor tab-tab-tab. Agents. Agents running wild inside my project, planning their approach, executing, iterating.
I was wrong about this one. And if you’re a CTO who hasn’t personally spent serious time with Claude Code or similar tools, you’re making strategic decisions with incomplete information.
Here’s what changed my mind.
Chat and autocomplete are step-by-step. You prompt, it responds, you copy-paste. The human is still doing the work.
Agents are different. They work autonomously, iterate, and come back with results. You define the goal and provide context. They plan and execute.
It’s the difference between micromanaging and delegating.
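In rough pseudocode, that difference looks something like this - a minimal sketch, assuming two hypothetical helpers (call_model, run_tools), not any particular vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    output: str   # what happened: test output, diffs, errors
    done: bool    # did the agent decide the goal is met?

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns the model's next planned action."""
    raise NotImplementedError  # wire up whichever provider you use

def run_tools(action: str) -> StepResult:
    """Stand-in for doing the action: edit files, run tests, read the output."""
    raise NotImplementedError

# Chat/autocomplete: one prompt, one response, the human does the rest.
def chat(prompt: str) -> str:
    return call_model(prompt)

# Agent: goal and context go in, then a loop of plan -> act -> check
# that runs without a human in the middle of every step.
def agent(goal: str, context: str, max_steps: int = 20) -> str:
    history = f"Goal: {goal}\nContext: {context}"
    for _ in range(max_steps):
        action = call_model(history)   # the agent plans its next step
        result = run_tools(action)     # ...and executes it itself
        history += f"\n> {action}\n{result.output}"
        if result.done:
            break                      # it comes back only when it's finished
    return history
```

The point isn’t the code. It’s where the human sits: outside the loop, not inside every turn of it.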
One developer with three or four agents feels like a small team. That’s not marketing copy. That’s what it actually feels like when you use it.
There’s something almost magical about having teammates who are genuinely eager to help. No motivation problems. No calendar conflicts. No “that’s not my job.” Just: what do you want to build? At ~2am~ 11pm when you should be sleeping, you find yourself thinking “just one more iteration.”
It’s been a long time since coding felt like this.
Chris Loy nailed the framing:
Large language models are a steam engine moment for software. They collapse the cost of a class of work previously dependent on scarce human labor. That unlocks extraordinary acceleration in output.
But here’s what the hype cycle misses. Jason Gorman’s counterpoint is equally true: the hard part of programming was never expressing what we want in code. The hard part is turning human thinking, with all its woolliness, ambiguity, and contradictions, into computational thinking that’s logically precise. Knowing exactly what to ask for. That was true when programmers punched cards. It was true when they typed COBOL. It’s true when they prompt language models.
The steam engine doesn’t eliminate the need for engineers who understand thermodynamics. It changes what they spend their time on.
Now think about what this means for your org.
Your bottleneck is about to move. When coding is no longer the constraint, pressure transfers to product management, design, and QA. They become the critical path. Are they ready for that? Is your org structure? Your roadmap?
Here’s an uncomfortable question: how many of your engineers can explain why they’re building a feature, not just how? A whole generation learned to execute tickets without engaging with the bigger picture. When AI handles the “how,” the “why” becomes everything.
Your hiring profile needs to evolve. You’re no longer just hiring people who can write code. You’re hiring people who can direct AI, manage context, and critically evaluate generated output.
Think about what this means for interviews. Candidates should be solving problems with AI tools - that’s how they’ll actually work. But you also need to see them think without the crutch. Can they reason about architecture? Do they understand what they’re asking for? Or are they just prompt-and-pray?
The engineers who thrive will be the ones who can have a critical conversation with a codebase. Who can finally apply all those “boring” practices we’ve been meaning to implement: better instrumentation, consistent code rules, proper documentation. Agents make this stuff cheap. The question is whether your people know what good looks like.
Your architecture decisions carry different weight now. Modularity matters more than ever - not just for human teams, but because agents work better with clear boundaries. Conway’s Law still applies, but now your “teams” include AI. Design for parallel work. Keep interfaces stable. Make it possible to delegate chunks of the system to agents without them colliding.
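To make that concrete, here’s a tiny illustrative sketch - the names (PaymentGateway, checkout) are invented for the example. The point is the shape: a small, stable contract you can delegate work behind, so one agent can churn on the implementation while another works on the caller.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """The boundary both sides code against. Keep this stable; let the
    implementation behind it change as much as it needs to."""

    def charge(self, customer_id: str, amount_cents: int) -> str:
        """Charge a customer and return a transaction id."""
        ...

# One agent (or person) can be pointed at the implementation...
class DefaultGateway:
    def charge(self, customer_id: str, amount_cents: int) -> str:
        raise NotImplementedError  # the delegated work happens here

# ...while another works on the caller, and neither collides with the other.
def checkout(gateway: PaymentGateway, customer_id: str, total_cents: int) -> str:
    return gateway.charge(customer_id, total_cents)
```

Nothing new here - it’s the same modularity advice we’ve always given, except now the “team member” on the other side of the interface might be an agent.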
And here’s something nobody talks about: agents might actually solve the tooling complexity problem we’ve created for ourselves. Who on your team truly understands git? Really understands it? What about your CI pipeline? Your deployment config? We’ve accumulated layers of accidental complexity that humans struggle to hold in their heads. Agents don’t struggle the same way. They can work with the mess we’ve made.
There’s a finding from DORA’s 2024 report that should give you pause. Every 25% increase in GenAI adoption correlates with 7% worse stability and 1.5% slower throughput.
Read that again. More AI adoption, worse outcomes.
But here’s the thing: that’s measuring teams who adopted AI without changing their practices. They bolted AI onto existing workflows. They didn’t adapt their feedback loops, their review processes, their architecture.
Economist Paul David studied exactly this pattern with electrification. The lightbulb was invented in 1879. Forty years later, electric motors still accounted for less than half of factory power. The productivity explosion everyone expected? It didn’t show up until the 1920s, and when it did, it accounted for half of all manufacturing productivity growth in that decade.
What took so long? Factory owners just swapped their steam engines for electric dynamos and kept everything else the same. Same building layouts. Same workflows. Same management. They overlaid new technology on old processes.
The real gains came when a new generation of managers redesigned everything around what electricity actually enabled - factory floors, workflows, organizational structures. Not better steam power. A different way of working.
The DORA numbers might be measuring our “bolt-on” phase. The gains come when teams reorganize around the technology, not when they just adopt it.
So why don’t teams reorganize?
Now let me tell you what nobody in leadership wants to hear. There are natural obstacles to this reorganization. If AI lets smaller teams do more, some managers will have fewer people to manage. That means less organizational weight. Less power. Less budget. Don’t underestimate how much this will slow adoption in ways that have nothing to do with technology.
I keep thinking about Escoffier’s brigade de cuisine - the kitchen system that revolutionized how restaurants operate. Before Escoffier, kitchens were chaos. After: clear roles, clear handoffs, parallel work coordinated through hierarchy and protocol.
We need the equivalent for AI-augmented engineering. Not everyone doing everything with AI. Specialized roles emerging: Agent Experts who build and maintain domain-specific agents. Apex Builders who harden vibe-coded prototypes into production systems. Fleet Supervisors who oversee multiple AI systems working in parallel.
Your org chart is going to look different in two years. The question is whether you design it intentionally or let it emerge chaotically.
Here’s what I’m telling my peers.
Stop evaluating. Start experimenting. AI is fundamentally experiential - you cannot think your way through it. The CTOs who are reading blog posts (ahem) and attending panels while waiting for “the right moment” are the ones who’ll be caught flat-footed.
Yes, we’re in an early phase. Yes, there are errors. Yes, the tools are rough. But the fundamental shift in approach is real. Waiting for polish means missing the learning curve.
Your best engineers should be spending real time with these tools. Not on toy projects. On actual work. Let them struggle with context management and feedback loops. That’s where the intuition builds.
Create space for failure. The DORA numbers are real - there will be stumbles. But the alternative is letting your competitors figure it out first.
There’s a quote that’s been stuck in my head: “AI won’t replace people, but people who use AI will replace people who don’t.”
I’d adapt it for CTOs: Companies won’t be replaced by AI. They’ll be replaced by companies whose teams figured out how to use it.
Roadmaps might give way to experiments. Bolder ideas might get built by smaller teams. The engineers who’ve been dreaming about projects “too big to attempt” might suddenly attempt them.
If you wait until the technology stabilizes, you’re already behind. The stabilization you’re waiting for isn’t coming. This is the new normal: fast iteration, constant change, perpetual adaptation.
I don’t have a five-year roadmap for this. Nobody does. But I know the engineering leaders who are hands-on right now are building intuitions that will compound.
The ones who are “monitoring the space” are going to wake up one day and realize the space moved without them.
The flywheel is spinning. Your call.