#005: AI Is Shrinking Dev Teams—Here's Why That Matters for Your Career
“AI won’t replace software engineers, but an engineer using AI will replace one who doesn’t.”
Hi friends, this is Edo with the 5th issue of the Full-Stack AI Engineer Newsletter.
TLDR: AI coding tools are enabling smaller teams to ship what larger teams used to build, but the industry hasn’t figured out how to price this new reality. Your market value increasingly depends on becoming a force multiplier rather than a headcount unit—and the communication patterns for AI-augmented teams are being invented right now, which means you have a window to shape them.
The Uncomfortable Math
Here’s something I’ve been thinking about lately. Companies like Generali are rolling out GitHub Copilot across their development teams and reporting accelerated delivery speeds. Microsoft has documented over a thousand customer transformation stories involving AI-powered development workflows.
The uncomfortable implication? A team of four engineers with strong AI-augmented workflows might now deliver what a team of eight did two years ago.
I’m not saying this to be alarmist. I’m saying it because if you’re not thinking about what this means for your career, someone else is thinking about what it means for your headcount.
The Maturity Gap Nobody Talks About
Here’s where it gets interesting. McKinsey research shows that almost every company is investing in AI right now. But only about 1% believe they’ve reached maturity in their AI adoption.
Think about what that means. Everyone’s buying the tools. Almost nobody knows how to use them well.
This creates a strange situation. Companies are seeing productivity gains from AI assistants, but they haven’t figured out how those gains should translate into compensation, team structure, or hiring decisions. The playbook is being written in real-time.
I’ve seen this play out in conversations with engineering managers. They know their teams are shipping faster with Copilot and Claude. But they don’t know if that means they should hire fewer people, pay existing people more, or just expect more output for the same salary.
Most are defaulting to “expect more output.” Which brings us to the bifurcation problem.
The Two-Track Market
There’s a phrase that keeps circulating in engineering forums and it captures something real: “AI won’t replace software engineers, but an engineer using AI will replace one who doesn’t.”
I used to think this was hyperbole. Now I’m not so sure.
What I’m seeing is a market splitting into two tracks. On one track, you have engineers who’ve integrated AI tools deeply into their workflows—not just autocomplete, but using LLMs for architecture decisions, debugging, documentation, code review, and test generation. These engineers are becoming genuine force multipliers.
On the other track, you have engineers who either resist AI tools or use them superficially. They’re still productive. They’re still valuable. But the gap is widening.
The tricky part is that companies are still figuring out how to identify which track you’re on. Your resume doesn’t show it. Your years of experience don’t show it. And most interview processes haven’t adapted to evaluate AI-augmented productivity.
The Communication Paradox
Smaller teams should mean less communication overhead, right? Fewer standups, fewer sync meetings, fewer Slack threads about who’s working on what.
In practice, I’m seeing something more complicated.
Research on AI-based development and hybrid human-AI teams is still in its infancy. We don’t have established patterns for how AI-augmented teams should communicate. Should you share your prompts with teammates? How do you code review something that was 60% AI-generated? What does “ownership” mean when Claude wrote the first draft?
These questions don’t have standard answers yet. The teams figuring them out are inventing best practices in real-time.
I’ve been experimenting with sharing my AI conversation logs in pull request descriptions. It’s awkward. It’s verbose. But it gives reviewers context they wouldn’t otherwise have. Is this the right approach? I genuinely don’t know. But I’d rather be experimenting than waiting for someone else to figure it out.
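To make that concrete, here's roughly the block I've been adding to PR descriptions. The field names are my own invention, not any standard, and the details below are made up for illustration:

```
AI context
- Tool: Claude (web UI); full conversation attached to the PR as a text file
- What it drafted: first pass at the handler and its tests
- What I changed by hand: error handling, naming, dropped an unneeded dependency
- Prompts worth reusing: the one asking it to list failure modes before writing any code
```

The point isn't the exact format. It's that the reviewer can see where the machine's judgment ends and mine begins.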
The Outsourcing Economics Shift
Here’s something that doesn’t get discussed enough. AI-enhanced applications are being built without hiring large engineering teams. This is fundamentally changing the economics of software development.
A startup that would have needed to outsource to a team of twelve can now build their MVP with three engineers and aggressive AI tooling. An enterprise that would have hired a consulting firm for a six-month project can now staff it internally with a smaller, AI-augmented team.
This isn’t theoretical. It’s happening now, and it’s reshaping what “senior engineer” means in the market.
The engineers who thrive in this environment aren’t necessarily the ones who write the most code. They’re the ones who can architect systems, evaluate AI-generated solutions critically, and ship reliable software with smaller teams. They’re the ones who understand that AI tools are powerful but need human judgment for production-grade work.
What This Means For Your Market Value
Let me be direct about something. If your value proposition is “I can write a lot of code,” you’re competing against tools that can write code faster than you.
If your value proposition is “I can build reliable systems, make good architectural decisions, and ship software that works in production”—and you can do that while leveraging AI tools effectively—you’re in a different category entirely.
The market hasn’t fully priced this distinction yet. But it will.
The engineers I know who are positioning themselves well are doing a few things. They’re getting genuinely good at AI-augmented workflows, not just using autocomplete but understanding how to prompt effectively, how to evaluate AI output critically, and how to integrate AI into their entire development process.
They’re also developing skills that AI tools are bad at: system design, debugging complex production issues, understanding business context, and making tradeoffs under uncertainty.
The Actionable Takeaway
Here’s what I’d suggest. Spend the next month deliberately experimenting with AI tools in parts of your workflow where you haven’t used them before. Not just code generation—try using Claude or GPT for architecture reviews, debugging sessions, documentation, or test case generation.
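If you want a concrete starting point for the test-generation piece, here's a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and file name are placeholders, and the same idea works with Claude or whatever tooling your team already has:

```python
# Minimal sketch: ask an LLM to draft unit tests for code you paste in.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in
# your environment. Model name, prompts, and file name are placeholders.
from openai import OpenAI

client = OpenAI()

def draft_tests(source_code: str, framework: str = "pytest") -> str:
    """Return LLM-drafted tests for the given source. Always review before committing."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: use whichever model your team has access to
        messages=[
            {
                "role": "system",
                "content": f"You write {framework} unit tests. Cover edge cases and "
                           "failure paths, not just the happy path.",
            },
            {"role": "user", "content": f"Write tests for this code:\n\n{source_code}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("my_module.py") as f:  # hypothetical file; point it at something real
        print(draft_tests(f.read()))
```

The script itself isn't the point. The point is building a tight loop where you generate, read critically, and discard AI output until you know where it's reliable and where it isn't.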
Document what works and what doesn’t. Share your findings with your team. Start building the muscle memory for AI-augmented development before it becomes table stakes.
The window for being “early” to this shift is closing. But it’s not closed yet.
The engineers who figure out how to be force multipliers—and can demonstrate that value—will have leverage in a market that's still figuring out how to price this new reality. The ones who wait for the playbook to be written will be following someone else's rules.
I know which position I’d rather be in.