
Your AI Is Building Things You Don't Understand. That's a Problem.


I run a multi-agent marketing system. At any given time, there are AI agents writing blog posts, scanning thought leaders, researching keywords, and queuing content.

It works remarkably well. And if I'm being honest, I don't fully understand every part of how it works.

That should worry me. And it does, a little.

Simon Willison, one of the most thoughtful practitioners writing about AI today, gave this problem a name in his Agentic Engineering Patterns series: cognitive debt.

The concept is simple but important. When AI builds or manages things you don't fully understand, you're not just gaining speed. You're taking on a debt. And like financial debt, it's fine in small amounts. Left unchecked, it becomes a real problem.

What Cognitive Debt Actually Means for Your Business

In software development, "technical debt" is the mess that accumulates when you build quickly without thinking about maintainability. Cognitive debt is similar, but it's about understanding rather than code quality.

Willison describes it this way: when the core of your system becomes a black box you don't fully understand, you can no longer confidently reason about it. Planning new things becomes harder. Diagnosing problems becomes guesswork. And eventually, you slow down.

This isn't just a developer problem. It's a business problem.

Think about your own AI adoption. Maybe you've set up an AI-powered customer service workflow. Or an AI that handles your social media scheduling. Or, like me, a system where agents are doing your content research and writing.

How much of that do you actually understand?

Can you explain, step by step, how the AI decides what to post? What happens when it gets something wrong? If something breaks quietly in the background, would you even notice?

If the answer is "not really," you've got cognitive debt.

Why It Gets Worse Over Time

Cognitive debt compounds.

When AI builds a system you don't fully understand, any new additions inherit the same opacity. You're building on a shaky foundation. The system gets more capable on the surface, but your ability to understand, control, and correct it diminishes.

I've seen this in my own work. My content agents evolved from a simple monitor to a multi-step pipeline: scan sources, assess quality, draft articles, score them, auto-publish if above threshold, queue if below. At each step, I added functionality without always pausing to deeply understand the previous layer.
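To make the shape of that pipeline concrete, here is a minimal sketch. To be clear, this is not my actual system: the function names, the toy quality heuristic, and the 0.8 threshold are all hypothetical, chosen only to illustrate the scan → draft → score → publish-or-queue flow.

```python
# Hypothetical sketch of a content pipeline with an auto-publish threshold.
# All names, scores, and logic here are illustrative, not a real system.

PUBLISH_THRESHOLD = 0.8  # assumed cutoff; tune per your risk tolerance


def assess_quality(draft: str) -> float:
    """Stand-in for an AI scoring step; returns a 0.0-1.0 quality score."""
    # Toy heuristic so the sketch runs: longer drafts score higher.
    return min(len(draft) / 1000, 1.0)


def run_pipeline(sources: list[str]) -> dict:
    """Scan sources, draft, score, then auto-publish or queue for review."""
    published, queued = [], []
    for source in sources:
        draft = f"Article based on: {source}"  # stand-in for the drafting agent
        score = assess_quality(draft)
        if score >= PUBLISH_THRESHOLD:
            published.append(draft)   # auto-publish if above threshold
        else:
            queued.append(draft)      # queue for human review if below
    return {"published": published, "queued": queued}
```

The point of writing even a toy version like this is that each branch is visible: you can see exactly where a silent failure could hide (a scoring function that drifts, a threshold that was right six months ago).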

The result is a system that mostly works. But "mostly works" and "I can confidently reason about this and fix anything that goes wrong" are not the same thing.

The Fix: Don't Just Build, Understand

Willison's solution is elegant: use AI to help you understand what AI built.

His specific technique involves building interactive explanations, animated walkthroughs, and visual demonstrations of how a system works. When his AI built a word cloud tool using algorithms he didn't understand, he asked the AI to create an animated explanation of how the algorithm works. Not static documentation, but an interactive demonstration he could explore until the concept clicked.

For technical systems, this is powerful. But the principle extends to any AI-built process.

If AI is running part of your business operations, make it explain itself. Ask it to walk you through the logic. Ask it what could go wrong and how you'd know. Ask it what decisions it's making that you haven't explicitly approved.

This isn't about distrusting AI. It's about maintaining the understanding you need to stay in control.

What This Looks Like in Practice

Here's how I think about managing cognitive debt:

Define your "I need to understand this" threshold before deploying. For low-stakes, reversible actions (drafting social posts, for example), a black box is fine. For anything that touches customers, finances, or public-facing communications, you need enough understanding to confidently diagnose problems.

Build in "explain to me" checkpoints. When an AI agent does something unexpected, don't just fix the output. Understand the process. Ask the AI to walk you through its reasoning. Document it.

Review your AI systems the way you'd review an employee's work. Not every day, not every task. But regularly enough that you could explain to someone else how the system works and what its failure modes are.

Write down the cognitive debt you're carrying. If there are parts of your AI setup you don't fully understand, note it. "I know the customer routing logic works, but I couldn't explain exactly how" is important institutional knowledge. It marks where you need to invest in understanding before something goes wrong.
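One lightweight way to keep that ledger is as a small structured list rather than scattered notes. A hypothetical sketch, assuming fields you might track (the entry shown is an invented example, not a real system of mine):

```python
# Hypothetical cognitive-debt ledger. Each entry records a part of the
# system you rely on but could not fully explain. Fields are illustrative.
from dataclasses import dataclass


@dataclass
class DebtEntry:
    component: str      # which part of the AI system
    what_i_know: str    # observed behavior you trust
    what_i_dont: str    # the gap in your understanding
    risk: str           # "low" / "medium" / "high" if it fails silently


ledger = [
    DebtEntry(
        component="customer routing logic",
        what_i_know="routes tickets correctly in spot checks",
        what_i_dont="how it decides edge cases between two queues",
        risk="high",
    ),
]

# Review high-risk entries first when deciding where to invest in understanding.
to_investigate = [e for e in ledger if e.risk == "high"]
```

Even in a spreadsheet rather than code, the discipline is the same: the gap has to be written in your own words, which is itself a test of whether you understand the component at all.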

The Human-in-the-Loop Isn't Just About Oversight

There's a popular framing of "human in the loop" as a safety mechanism. The human reviews outputs before they go live. That's valuable.

But cognitive debt points to a deeper kind of human involvement: not just reviewing outputs, but maintaining real understanding of the system itself.

A system you understand is one you can improve. A system you don't understand is one you're dependent on. And when something goes wrong, you're far better positioned to fix something you understand than to debug a black box.

This is the real meaning of "AI + humans > AI alone." Not just that humans check AI outputs. But that humans maintain genuine comprehension of what the AI is doing and why, so the overall system stays intelligent and correctable, not just fast.

A Simple Audit

Pick one AI system or process you're running right now. Answer these questions:

  1. Can you describe, in your own words, the logic it follows?
  2. What are the three most likely ways it could fail silently?
  3. If it produced wrong output today, how would you know?
  4. Could you rebuild or replace it if you needed to?

If you can't answer all four with confidence, you've got cognitive debt. Not a crisis. But worth addressing before it compounds.

The goal isn't to understand every technical detail. But you do need to understand your system well enough to stay in control of it.

That's the deal with AI. It can build fast. Your job is to make sure you understand what it's building.


I run an AI-powered content and marketing system for businesses across Asia. If you're deploying AI agents and want to think through the governance side, I'm happy to compare notes.