AI Did $350K of Work for $200. What That Tells You About Your Team's Future
Paul Ford ran a software consultancy for years. He knows how to price a project.
When he rebuilt his personal website last month, he said: "I would have paid $25,000 for someone else to do this." Then a friend asked him to clean up and explore a large, messy dataset. He did it himself, over a few evenings, with AI. In his old firm, that same project would have cost $350,000. A product manager, a senior engineer, a junior engineer, a designer. Four to six months. Plus maintenance.
He did it for the price of a $200-per-month Claude subscription.
Ford wrote this in the New York Times last week. He's not a tech optimist spinning yarns. He's a practitioner telling you what happened to him. And this week, some of the sharpest engineers in the industry started explaining exactly why that number is real, where the limits are, and what the shift means for the people on your team.
What the Best Compiler Engineer in the World Said About AI
Chris Lattner built Swift. He built LLVM and Clang. If you want one person on earth qualified to assess what AI can and can't do in software engineering, it's him.
When Anthropic released a project where Claude built a working C compiler from scratch using parallel agents, Lattner reviewed the code. His verdict was precise:
"CCC looks less like an experimental research compiler and more like a competent textbook implementation, the sort of system a strong undergraduate team might build early in a project before years of refinement. That alone is remarkable."
Remarkable, but not production-ready. He also noted where the cracks appear:
"Several design choices suggest optimization toward passing tests rather than building general abstractions like a human would. Current AI systems excel at assembling known techniques and optimizing toward measurable success criteria, while struggling with the open-ended generalization required for production-quality systems."
This is the most useful framing I've read on what AI actually does and doesn't do. Not from someone who's skeptical of AI, but from someone who's spent his career building the foundational tools that AI is now learning to replicate.
AI can build things that work. It still struggles to build things that last.
The Shift, Stated Plainly
Lattner's conclusion about what this means for engineering teams was direct:
"AI coding is automation of implementation, so design and stewardship become more important. Good software depends on judgment, communication, and clear abstraction. AI has amplified this."
And from Martin Fowler, writing from a Thoughtworks software development retreat the same week:
"LLMs are eating specialty skills. There will be less use of specialist front-end and back-end developers as the LLM-driving skills become more important than the details of platform usage. Will this lead to a greater recognition of the role of Expert Generalists?"
Three people, three different vantage points, same conclusion: implementation is getting automated. Judgment is not.
What This Looks Like in Practice
I run several AI agents in parallel in my marketing system. One handles content strategy, one does SEO research and writing, one monitors thought leaders and adapts their insights for the blog. They work through shared files, hand off tasks between themselves, and run without me supervising every step.
The agents are good at implementation. They follow briefs, hit formats, meet quality standards, and produce work that would have taken hours manually.
What they flag to me are the judgment calls: this angle is too similar to something we published last month. This source is credible but the insight is thin. This topic is strong but harder to verify. Is this worth publishing?
That's not a limitation I'm trying to engineer away. It's the natural boundary between what AI does reliably and what humans still need to own. The agents handle the implementation. I hold the standard.
Lattner put it well: "The limiting factor is no longer whether software can be built, but deciding what should be built and how to manage the complexity that follows."
Replace "software" with "content" or "marketing campaigns" or "client deliverables" and the same sentence applies to most SME operations.
What Changes for Your Team
The uncomfortable truth in Ford's $350,000 story is that a category of work has shifted. Not entirely gone, but priced very differently. Implementation work, the kind with a clear success criterion (write this, convert that, format this report, compile a list), is the part AI handles best.
What that leaves is the other kind of work.
Lattner again: "As implementation becomes cheaper, the scarce skills become choosing the right abstractions, defining meaningful problems, and designing systems that humans and AI can evolve together."
For SME leaders, this plays out in three concrete ways.
First, the question about your team is no longer "can they do X?" It's "can they direct and evaluate AI doing X?"
These are different thresholds. The first is about execution. The second requires domain knowledge, critical judgment, and the experience to know when AI output is good versus passable. A team that can run AI but can't evaluate its output is still flying blind.
Second, Fowler's expert generalist point is worth sitting with. The person who knows enough about your business to brief an AI properly, review what it produces, and catch the gaps, that person is not a narrow specialist. They need breadth. They need context. They need to understand enough across domains to spot where the AI optimized for the wrong thing.
Lattner saw this in the C compiler: the AI optimized hard for passing tests, and in doing so, it hardcoded workarounds instead of building generalizable abstractions. The code looked right. It wasn't right. Only someone who knew what "right" meant could tell the difference.
Third, "senior" is being redefined. If AI handles more of the implementation, seniority increasingly means the judgment layer. The ability to set the right target. To evaluate output with real domain expertise. To catch the cases where AI assembled known patterns instead of solving the actual problem.
The Practical Frame for Right Now
Here's how I'd translate this for any business owner in Singapore thinking about where to invest right now.
Train your team to evaluate, not just operate. Tool training is table stakes. The bigger investment is in building the domain knowledge and critical thinking that lets your people judge AI output well. If they can't tell good from passable, faster tools just mean faster mediocre output.
Stop trying to replace judgment with prompts. The pattern I see constantly: teams expect that a better prompt will solve a problem that's actually a judgment problem. It won't. The better prompt gets you better implementation of the wrong thing, faster. Someone still has to decide what the right thing is.
Invest in structure before you invest in scale. Lattner made a point that applies well beyond software: "AI amplifies both good and bad structure." If your processes, briefs, and knowledge systems are vague, AI will execute that vagueness at scale. Documentation, clear standards, and explicit expectations are now operational leverage, not admin overhead.
Reconsider who your most valuable people are. In most teams, the most valued people are often the most productive implementers. If implementation is getting automated, your most valuable people are shifting toward the ones who know what good looks like across multiple domains, can direct AI systems effectively, and can be accountable for outcomes rather than outputs.
The Part Ford Got Right
Ford ended his piece with an honest admission: "All of the people I love hate this stuff, and all the people I hate love it. And yet, likely because of the same personality flaws that drew me to technology in the first place, I am annoyingly excited."
I understand the ambivalence. The numbers he quoted are not comfortable for everyone. They describe a real shift in how much certain kinds of work cost and who's doing it.
But the flip side is what Lattner and Fowler are both pointing at: the work that remains after AI handles implementation is the work that actually requires a person. Judgment. Vision. Knowing what should be built and whether what was built is right.
For most business leaders, that's not new. That's what you're already doing. The change is that you're going to need your team to do more of it too.
The implementation will increasingly take care of itself. The judgment won't.
Sources: Chris Lattner, "The Claude C Compiler: What It Reveals About the Future of Software," Modular blog, February 22, 2026. Paul Ford, "The A.I. Disruption We've Been Waiting for Has Arrived," New York Times, February 18, 2026. Martin Fowler, tidbits from the Thoughtworks Future of Software Development Retreat, February 18, 2026. All via Simon Willison's Weblog.
Derek runs a multi-agent AI marketing system at Magnified Technologies and writes about practical AI adoption for business leaders and employees.