The Composition Principle: Building Agent Skills That Compound
Why granular, composable skills beat monolithic system prompts—and how JIT context unlocks the creative power of LLMs.
I've been building with AI agents for a while now, and I kept running into the same wall. My system prompts grew into sprawling documents—thousands of tokens of conventions, guidelines, and preferences—all loaded into every conversation whether relevant or not.
It felt wasteful. The model was filtering through database migration conventions when I just wanted it to write a utility function. Attention diluted across context that didn't matter for the task at hand.
Then Claude Code introduced skills, and something clicked.
A skill is just a markdown file. It describes when it should activate and includes the context relevant to that situation. The concept is simple, but the shift it represents—from system prompts to just-in-time context—changed how I think about customizing AI behavior entirely.
Instead of front-loading everything an agent might need, you provide exactly what it needs, when it needs it. Ask Claude to write a migration, and it loads the migration skill. Not the API conventions. Not the deployment checklist. Just migration context, right now.
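To make that concrete, here is roughly what such a skill might look like: a minimal sketch assuming Claude Code's SKILL.md convention of YAML frontmatter plus a markdown body. The name, description wording, and rules below are illustrative, not any real project's conventions.

```markdown
---
name: db-migrations
description: Use when writing, reviewing, or rolling back database migrations.
---

# Database migrations

- Every migration is reversible: write the down step alongside the up step.
- Schema changes and data backfills ship as separate migrations.
- Prefer additive changes; anything destructive needs an explicit rollback plan.
```

The frontmatter description is what tells the agent when to pull the file in; the body is the context it gets once it does.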
This is how human experts work. A senior engineer doesn't hold every reference doc in memory. They know when to pull up the relevant one. Skills give agents that same capability.
The insight that took me longer to grasp was that the real power isn't just in timing. It's in granularity.
My first instinct was to build comprehensive skills. A Backend Development skill covering everything—data models, API design, testing, deployment. One skill, complete coverage, maximum efficiency. This turned out to be wrong.
When you write a monolithic skill, you're constraining the model before it even starts working. Every detail you prescribe is a path you've closed off. You're paying for a creative engine and using it as a lookup table.
The models I work with are generative systems. They excel at exploring solution spaces, synthesizing ideas, making connections I didn't anticipate. When I stuff a skill full of prescriptive instructions, I'm fighting that capability instead of leveraging it.
Granular skills compose. A skill for commit message format. A skill for error handling philosophy. A skill for testing patterns. Each one focused on a single concern. Each one small enough to combine with others.
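Side by side, the split looks something like this. Each file is a few lines, and the description in its frontmatter is what keeps it from loading when it doesn't apply. Paths and contents are invented for illustration, with each file abbreviated to its frontmatter and one line of body.

```markdown
<!-- .claude/skills/commit-messages/SKILL.md -->
---
name: commit-messages
description: Use when writing or amending git commit messages.
---
One-line imperative subject under 72 characters; the body explains why, not what.

<!-- .claude/skills/error-handling/SKILL.md -->
---
name: error-handling
description: Use when deciding how code should fail, recover, or report errors.
---
Fail loudly at boundaries, recover deliberately inside them; never swallow an error silently.

<!-- .claude/skills/testing-patterns/SKILL.md -->
---
name: testing-patterns
description: Use when writing or restructuring tests.
---
Test behavior through public interfaces; one intent per test, setup noise in helpers.
```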
When the agent loads three small skills together, something interesting happens. Capabilities emerge that I never explicitly described. The workflow arises from composition, not prescription. I didn't tell the agent how to submit code—I told it what good commits look like, what good reviews look like, what good code looks like. The rest followed.
This changed how I write skills. I stopped prescribing solutions and started describing constraints. Instead of telling the model exactly which HTTP status codes to return for different error types, I describe what distinguishes good error responses from bad ones and trust it to figure out the implementation.
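The difference shows up directly in how a skill's body reads. Here is the same error-response guidance written both ways; both versions are invented for illustration.

```markdown
<!-- Prescriptive: the skill as a lookup table -->
- Return 400 for validation failures, 404 for missing resources, 409 for conflicts.
- Wrap every error as { "error": { "code": "...", "message": "..." } }.

<!-- Descriptive: the skill as constraints -->
- A good error response tells the caller whether to retry, fix the request, or escalate.
- The status code reflects the caller's situation, not our internal failure mode.
- Messages are safe to show end users; diagnostic detail belongs in logs, not payloads.
```

The first version the model can only obey; the second it has to interpret, which is exactly the judgment I want it exercising.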
The distinction sounds subtle but isn't. One approach treats the agent as a template engine. The other treats it as a thinking partner that happens to need context about my specific preferences.
Now I build skills with three qualities in mind. Tight scope: each skill addresses one concern. Dense signal: every sentence should shift the agent's behavior, with no filler. Room to breathe: constraints instead of instructions, values instead of steps.
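In practice, a skill that passes all three checks reads less like documentation and more like a short statement of values. A sketch, with hypothetical content, for code review:

```markdown
---
name: code-review
description: Use when reviewing a pull request or drafting review feedback.
---

# Code review

- Correctness and clarity first; style nits only when nothing bigger is at stake.
- Every blocking comment names the risk it prevents.
- Prefer questions to directives when the author likely has context you don't.
```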
The result is an agent that exercises judgment instead of following instructions. It makes decisions I'd agree with—not because I told it exactly what to do, but because I told it what matters. And by keeping skills granular and composable, I preserve the creative exploration that makes working with LLMs valuable in the first place.
System prompts configure agents. Skills equip them. Understanding that distinction changed how I build.