
Your SOPs Are Now AI Infrastructure

7 min read · AI & Automation

Something became obvious to me around month three of running AI agents in my business.

The agents that worked well had detailed, structured instructions. They had context about who they were talking to, what mattered, and how to handle edge cases. The agents I set up quickly, with vague prompts and no documentation, needed constant supervision and still produced inconsistent results.

The gap had nothing to do with which AI model I was using. It had everything to do with how much I had written down.

This is the thing most AI adoption conversations in Singapore miss. Everyone's focused on which tool to use. Not enough people are asking whether their business is actually ready for AI to work on it.

What a Compiler Engineer Noticed About AI

Chris Lattner built Swift, LLVM, and Clang. That's the infrastructure that powers most modern software development. When he speaks about AI and engineering, it's worth paying attention.

He recently published an analysis of Anthropic's Claude C Compiler project, a milestone experiment where AI agents built a working C compiler from scratch. His take was measured and sharp. But one observation hit differently:

"Architecture documentation has become infrastructure as AI systems amplify well-structured knowledge while punishing undocumented systems."

He was writing about engineering teams. But read that sentence again through the lens of your business.

AI amplifies well-structured knowledge. It punishes undocumented systems.

That's not a technical footnote. That's a strategic dividing line forming right now between businesses that will get genuine leverage from AI and businesses that won't.

Why AI Needs Structure to Thrive

Here's what Lattner's compiler analysis revealed: the AI built an impressive C compiler, but it did so by synthesising decades of existing compiler literature. It reproduced known patterns extraordinarily well. Where it struggled was with anything requiring judgment that hadn't been written down somewhere.

The AI was brilliant at implementation. It was limited where documentation ran out.

The same dynamic plays out in business contexts. Give an AI agent a well-documented process, with clear steps, defined decision criteria, and explicit examples of good and bad outputs, and it will execute that process consistently at scale. Give it tribal knowledge, vague instructions, or nothing at all, and you get a mess that still needs a human to clean up.

This is why implementation costs are falling fast while the value of good judgment and clear design is rising. When AI can handle execution reliably, the humans who define what should be executed become more important, not less.

Lattner puts it directly: "AI coding is automation of implementation, so design and stewardship become more important."

Swap "coding" for "operations" and you have the SME playbook.

What I Learned Running Agents in Production

My content marketing system runs multiple AI agents. A monitoring agent scans posts from thought leaders daily and identifies insights worth writing about. A research agent validates claims. A writer agent drafts content in my voice. Each one hands off to the next.

This system works because every agent has a detailed brief. Not a one-line prompt. A structured document that covers tone, rules, exceptions, examples, what to do when things are unclear, and what the end state should look like. My agents have named files they read at the start of every session. They know my clients, my voice, my no-go zones, and my preferences.
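To make that concrete, here's a condensed sketch of what one of those briefs covers. The details below are illustrative, not copied from my actual files:

```
AGENT BRIEF — Writer Agent (illustrative example)

Role: Draft posts in my voice from the research agent's notes.

Tone: Direct, first person, no jargon, no hype. Short sentences.

Rules:
- Never name a client without written approval.
- Every claim must trace back to a source in the research notes.

When unclear: Flag the draft with [NEEDS INPUT] rather than guessing.

Good output: A 200-300 word post with one clear argument and a
concrete example drawn from the source material.

Bad output: Generic advice that could apply to any business.
```

The point isn't the specific headings. It's that every question the agent might have is answered in writing before the work starts.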

When I ran the same tasks without that structure, the outputs were generic and inconsistent. With it, they're specific and reliable enough to publish.

The lesson I keep learning is that the AI is not the constraint. My ability to articulate what "good" looks like is the constraint.

The Honest Audit Most Businesses Haven't Done

Ask yourself: if you had to hand your five most important business processes to an AI agent right now, could it actually follow them?

Not "could you explain them verbally." Could it follow them from a document?

Most businesses are sitting on a gap. Experienced staff carry critical knowledge in their heads. Client preferences live in email threads. How to handle an unhappy customer is "just how we do it here." The onboarding process exists, roughly, in someone's memory.

This works fine when that person is available. It fails when they leave. And it completely breaks down when you're trying to let AI handle anything substantive.

The businesses getting real leverage from AI right now are not necessarily the ones with the biggest budgets or the most advanced tools. They're the ones with the clearest, most explicit processes. When you give an AI a well-structured SOP, it becomes a highly capable, scalable team member. When you give it ambiguity, you get inconsistency and frustration.

Documentation Is No Longer Just Admin

There's a tendency to treat process documentation as something you do when you have time, which means it rarely gets done. It feels like overhead. It's not revenue-generating. Nobody gets excited about writing SOPs.

That calculation has changed.

Your SOPs are now the scaffolding that AI needs to function. They are the structured knowledge that, in Lattner's words, AI systems will amplify. Without them, you are leaving most of the value on the table and compensating for the gap with human attention that could be doing something else.

Think of it this way. An AI agent is like a highly capable new hire who learns fast and works without sleep. But they need a proper induction. They need to know how you do things, not just what you want done. The companies that have invested in clear onboarding documentation will bring that hire up to speed in days. The ones running on institutional knowledge and vibes will spend weeks correcting mistakes.

The only difference is what you wrote down.

Where to Start

If your goal is to actually use AI in your operations this year, the most valuable thing you can do right now is not to pick a tool or sign up for a platform. It's to document your processes.

Start with the five tasks you're most likely to hand to AI. For each one, write down:

  • What triggers this task
  • What a good outcome looks like, specifically
  • What a bad outcome looks like, and why
  • The steps involved, in order
  • Any exceptions or edge cases that come up regularly
  • What the output format should be
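Filled in for a single task, that checklist might look something like this. The details are invented for illustration; yours will differ:

```
TASK: Reply to inbound enquiry emails

Trigger: New email arrives in the enquiries inbox.

Good outcome: A same-day reply that answers the question actually
asked, in our tone, with a clear next step.

Bad outcome: A generic reply that ignores the specific question, or
one that commits us to pricing or timelines.

Steps:
1. Classify the enquiry (new client, existing client, vendor, other).
2. Check the CRM for prior history.
3. Draft a reply using the matching template as a starting point.
4. Flag for human review if pricing or scope comes up.

Exceptions: Complaints go straight to a human. No AI reply.

Output format: Draft email saved to the review folder, not sent.
```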

That's not a full SOP. But it's enough structure for an AI agent to start working with, and it's usually enough to surface how much tacit knowledge your team holds that's never been written down.

Lattner's conclusion for engineering teams is worth adapting for yours: "AI, used right, should produce better outcomes, provided humans actually spend more energy on architecture, design, and innovation."

For an SME, that means spending more energy on defining what good looks like, and less time doing repetitive tasks manually. The businesses that do this work will find AI genuinely transformative. The businesses that skip it will find AI expensive and disappointing.

The gap between those two outcomes is mostly a stack of documents.


Derek Chua runs AI agent systems in production at Magnified Technologies. His content marketing system runs five AI agents daily, all of them trained on detailed process documentation.