
The AI Playbook Nobody Tells You to Build


Simon Willison has over a thousand GitHub repositories. He runs two blogs. He has a dedicated site just for single-page tools he's built with AI help.

Most people look at that and think: "Wow, he's prolific."

What they're missing is the strategy behind it.

In a piece he published this week, Simon laid out one of the most practically useful ideas I've read on working with AI agents: hoard things you know how to do.

The concept sounds obvious. It isn't.

What "hoarding" actually means

Simon's argument is that a big part of being effective with AI agents is having a library of working examples to feed them.

Not documentation. Not notes that say "this might work." Actual working code, actual working prompts, actual solved problems that you can hand to an AI and say: "combine these two things."

He gives a concrete example. In early 2024, he wanted a browser-based tool to OCR scanned PDF documents. He had two pieces already in his library: a JavaScript snippet that rendered PDF pages to images, and a snippet that ran OCR on an image using Tesseract. Neither did what he needed on its own.

He fed both to Claude with a description of what he wanted. The tool worked flawlessly. Total time: a few minutes.

The AI did the combination work. Simon did the work of knowing what to combine.
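Simon's exact prompt isn't in the piece, but the pattern is easy to sketch. A minimal, hypothetical version: a helper that assembles a request from a goal plus the known-good examples you want combined. The snippet names and the `combine_request` function below are illustrative stand-ins, not Simon's actual code.

```python
def combine_request(goal: str, examples: dict[str, str]) -> str:
    """Assemble a prompt from a goal plus working examples to combine."""
    parts = [f"Build this: {goal}", "",
             "Here are working examples to combine:"]
    for name, code in examples.items():
        parts += [f"--- {name} ---", code]
    return "\n".join(parts)

# Hypothetical stand-ins for the two snippets in Simon's library.
examples = {
    "render-pdf-page.js": "// renders a PDF page to a canvas with pdf.js ...",
    "ocr-image.js": "// runs Tesseract.js OCR on an image ...",
}

prompt = combine_request(
    "a browser tool that OCRs every page of an uploaded PDF", examples
)
# `prompt` then goes to whichever model you use. The model does the
# combination work; you did the work of knowing what to combine.
```

The point of the structure is that the examples carry most of the information: the goal can stay short because the working code already demonstrates the hard parts.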

The part that applies to everyone

Here's where it gets interesting for non-developers.

Simon is a programmer, so his library is full of code. But the underlying principle has nothing to do with code. It's about building a personal knowledge base of "things that work."

Think about how this applies to a marketing team using AI.

You spend an hour getting the right prompt to research a competitor. It produces genuinely useful output. You close the tab. Six weeks later, you need to do the same thing. You spend another hour getting back to the same place.

That's not a productivity problem. That's a library problem.

The people who compound their AI advantage fastest are not the ones who use the most powerful models. They're the ones who document every working workflow, every prompt that produces good output, every template that saves time. Each win goes somewhere it can be retrieved and built on.

This is exactly how our multi-agent marketing system works

I've been running a multi-agent content system for a few months now. A CMO agent, an SEO agent, a monitoring agent, a publishing pipeline. The whole thing.

The most painful early lesson was precisely this: when something worked, I often didn't write it down. A prompt that produced great article structure. A research workflow that surfaced the right insights. A content brief format that made the writing agent actually produce usable drafts.

When those agents restarted with a fresh session, the working knowledge was gone. I'd rediscover it, then lose it again.

The fix was boring and obvious: every working prompt goes into a file. Every successful workflow gets documented as a template. Every agent gets a set of examples showing it what "good output" looks like. The agents don't just have instructions. They have a library.
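A library like this doesn't need to be clever. As an illustration only (not my actual setup), here is a sketch where each win is a markdown file with a tag line, plus a loader that pulls matching examples back out for an agent's prompt. The folder name, tag format, and both helper functions are hypothetical:

```python
from pathlib import Path

LIBRARY = Path("prompt-library")  # hypothetical folder of saved wins

def save_example(name: str, tags: list[str], body: str) -> Path:
    """Store a working prompt (plus notes) as a tagged markdown file."""
    LIBRARY.mkdir(parents=True, exist_ok=True)
    path = LIBRARY / f"{name}.md"
    path.write_text(f"tags: {', '.join(tags)}\n\n{body}\n")
    return path

def load_examples(tag: str) -> list[str]:
    """Collect every saved example whose tag line mentions `tag`."""
    hits = []
    for f in sorted(LIBRARY.glob("*.md")):
        header, _, body = f.read_text().partition("\n\n")
        tags = [t.strip() for t in header.removeprefix("tags:").split(",")]
        if tag in tags:
            hits.append(body.strip())
    return hits

# Each time something works, it goes in; each new task starts by pulling
# relevant wins back out and handing them to the agent as examples.
save_example("competitor-research", ["research", "marketing"],
             "Prompt: Research {company}'s positioning...\nNotes: ask for sources.")
save_example("article-structure", ["writing"],
             "Prompt: Outline a 1,500-word article on {topic}...")

context = "\n---\n".join(load_examples("research"))
```

The retrieval step is the part that matters: the agent starts every session with examples of what "good" looks like, instead of a blank slate.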

The difference in output quality was significant. Not because the models changed. Because the inputs got richer.

The compounding effect

Simon makes another point worth sitting with. His library of working examples doesn't just help him do the same things faster. It helps him spot opportunities.

When you've solved enough problems, you start to recognise when a new problem is actually just a combination of two old ones. You see solutions before other people do because you have more patterns to draw from.

This is true whether you're a developer, a marketer, a finance analyst, or a business owner. The person who has documented 50 working AI workflows has a fundamentally different relationship with AI than someone who starts fresh every time.

It compounds: every week invested in your library makes every future task a little easier.

What to actually do

Start small. Pick one category of work where you regularly use AI and create a simple document or folder for it.

Every time you get genuinely good output from a prompt, write the prompt down. Note what made it work. If you produce a template or a workflow that saves time, save it somewhere you'll actually find it again.

You don't need a fancy system. Simon uses blog posts, GitHub repos, and a tools site. A Notion page and a folder in Google Drive would do the same job.

The goal is not to have perfect documentation. The goal is that next time you need to do something, you're not starting from zero.

Over time, that library becomes one of the most valuable things in your business. Not because it tells you about AI. Because it captures the intersection of AI capability and your specific expertise, your specific context, your specific problems.

That combination is genuinely hard to replicate. And unlike the AI models themselves, which keep changing, your library just keeps growing.


Inspired by Simon Willison's "Hoard things you know how to do" from his Agentic Engineering Patterns guide, published February 26, 2026.