There's something interesting about named principles. Someone watches a pattern repeat for years and distills it into a sentence. Then someone else attaches a name. Conway didn't call it "Conway's Law" — others did. Chesterton was just writing an argument in a book; later readers extracted the fence and named it. The observation is individual. The naming is collective — an act of community compression. After that, the name is the concept. "Chesterton's Fence" carries an entire philosophy of caution in two words. "Goodhart's Law" compresses a thesis about measurement corruption into another two.
These are human-language embeddings. The same compression that a transformer does when it maps a paragraph into a vector — except done by humans, over centuries, through lived experience. Each name is a lossy compression of a complex observation into something portable enough to carry in conversation. There's a kind of Begrifflichkeit to it — the German notion of forming concepts that capture what raw description can't. Or the Japanese kotodama — the idea that words carry the spirit of their meaning, that naming something gives it power.
Some of these I keep coming back to. Some I just find interesting. This is a living reference — mostly for me, partly to see what patterns connect.
# The Laws
Chesterton's Fence — Don't remove something until you understand why it was put there. The fuller version is worth hearing:
"There exists a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'"
This might be the single most important principle for the agent era. Claude Code will happily "clean up" a function that looks redundant but exists because of a bug discovered at 3am two years ago. The agent optimizes for local cleanliness without historical context — it can't see the scar tissue. And in a world of increasingly AI-generated codebases, the fence signal gets noisier — there's less "someone stayed up till 3am" human scar tissue and more pattern-matched output. Chesterton's Fence becomes a guardrail we have to deliberately apply when reviewing AI changes, not just human ones.
I've hit this three times recently, and the shape was different each time.

Once I blew past it — we were migrating from Jenkins to ArgoCD, and I labeled a Jenkinsfile as a straggler, confirmed the GitOps manifests existed, and deleted it. Thirty minutes later: the service was still serving live traffic in production, but nothing in the new system was actually managing it. The Jenkinsfile hadn't deployed in over a year — but it was the only record of how that service got into production, and nothing else had picked up the slack.

Once I caught myself just in time — I had repos queued for retirement because they hadn't been pushed recently, and mid-planning I stopped and thought: wait, those could still be in active production use. Last-commit-age isn't the signal that determines whether the fence matters. Runtime signals — live deployments, healthy services, DNS still pointing at it — are the actual fence, not repository activity.

And once I removed correctly — traced the actual dependency, proved nothing was attached, then removed. Same discipline, different outcome.
The pattern I'm taking forward: the fence isn't the code or the label or the file. The fence is whatever invariant the past version of the team relied on that you can't see from the current state. The job isn't to keep the fence — it's to surface the invariant, verify it, then decide.
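One way to make that discipline mechanical is to enumerate the runtime signals explicitly before removing anything. This is a minimal sketch, not a real tool: the signal names are illustrative, and it assumes you can already query each one.

```python
# Sketch of "the fence is runtime signals, not repository activity."
# Each key is an illustrative signal; True means the artifact is still
# attached to production along that dimension.

def removal_blockers(signals: dict[str, bool]) -> list[str]:
    """Return the runtime signals that still tie this artifact to production.

    An empty list means nothing observable is attached, so removal can
    proceed; anything else is the fence, made visible.
    """
    return [name for name, attached in sorted(signals.items()) if attached]

# The failure mode from the Jenkinsfile story: stale by commit age,
# live by every runtime signal.
stale_but_live = {
    "dns_record_exists": True,
    "deployment_running": True,
    "serving_traffic": True,
}

truly_detached = {
    "dns_record_exists": False,
    "deployment_running": False,
    "serving_traffic": False,
}
```

Note what's deliberately missing: last-commit age is not a key, because the whole point is that it never belongs in this dict.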
Naming it "Chesterton's Fence" turns vague caution into a portable, high-fidelity reminder — exactly the kind of human embedding that agents still struggle to internalize without explicit scaffolding.
I went deep on that one because I lived it recently. The rest are shorter — not war stories so much as patterns I lived through once and have since compressed into abstractions. The vivid memory faded; the instinct stayed. That's kind of the whole point of named things.
Conway's Law — "Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations." Melvin Conway, 1967. Your microservices architecture mirrors your team structure. Your API boundaries are your org chart.
Multi-agent systems recreate this instantly. Define a "frontend agent" and a "backend agent" and watch them produce an API boundary that mirrors their separation — whether or not that's the right architecture.
Goodhart's Law — "When a measure becomes a target, it ceases to be a good measure." Charles Goodhart, 1975. The moment you optimize for a metric, the metric stops measuring what you cared about.
Benchmark gaming. School ratings. Code coverage as a quality proxy. Any time someone says "we improved X by 15%" — ask what X stopped measuring when they started optimizing for it.
Hyrum's Law — "With a sufficient number of users of an API, all observable behaviors of your system will be depended on by somebody." Hyrum Wright, Google. It doesn't matter what your contract says. If your endpoint happens to return results sorted alphabetically, someone depends on that.
This is why "just refactor it" is never just. Why agents need to understand implicit contracts, not just explicit interfaces. The invisible dependencies are the ones that bite.
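The sorting example is easy to make concrete. Everything below is hypothetical, a sketch of how an undocumented observable behavior becomes a load-bearing contract:

```python
# Hyrum's Law in miniature: the docstring promises a collection, not an
# order. The implementation happens to sort, and that accident is observable.

def list_users(db):
    """Return the user names in db. The contract says nothing about order."""
    return sorted(db)  # incidental alphabetical order, now observable behavior

def first_user(db):
    # A downstream caller that quietly depends on the accident:
    # "first" only means "alphabetically first" because of the sort above.
    return list_users(db)[0]

users = {"carol", "alice", "bob"}
```

Swap `sorted(db)` for `list(db)` and `list_users` still honors everything its docstring promises, but `first_user` breaks for somebody. That gap between the written contract and the observed one is the law.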
Gall's Law — "A complex system that works is invariably found to have evolved from a simple system that worked." John Gall, 1975. You can't design a complex system from scratch.
The context engineering stack itself — forming bottom-up from pragmatic layering, not top-down from a grand design. That's Gall's Law in action.
Brooks's Law — "Adding manpower to a late software project makes it later." Fred Brooks, 1975. Communication overhead grows faster than productivity gains.
The agent version: adding agents to a bad architecture makes it worse faster. More agents ≠ more progress.
Jevons Paradox — Making a resource more efficient to use increases total consumption. William Stanley Jevons, 1865: more efficient coal engines led to more coal consumption because efficiency made new uses economical.
Making software cheaper to produce means more software, not less. More surface area, more things to maintain, more demand for people who can tend it.
Lindy Effect — The longer something has survived, the longer it's expected to survive. SQL. Unix. Git. HTTP. grep. When someone says "X will replace Y," check how long Y has existed.
Postel's Law (Robustness Principle) — "Be conservative in what you send, be liberal in what you accept." Jon Postel, 1980. Strict output schemas. Tolerant input parsing. The constraint layer is conservative in what it sends. The prompt layer is liberal in what it accepts.
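For a single config field, the principle looks something like this sketch. The accepted spellings and function names are illustrative, not from any real parser:

```python
# Postel's Law applied to one boolean config field:
# liberal on the way in, conservative on the way out.

def parse_enabled(value) -> bool:
    """Liberal input: accept bools, common string spellings, and ints."""
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.strip().lower() in {"true", "yes", "on", "1"}
    return bool(value)

def serialize_enabled(value: bool) -> str:
    """Conservative output: exactly one canonical spelling, always."""
    return "true" if value else "false"
```

Round-tripping through `serialize_enabled(parse_enabled(x))` normalizes every accepted variant to the same canonical form, which is the point: tolerance at the boundary, strictness in what you emit.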
# The Razors
Occam's Razor — Simplest explanation that fits the evidence is usually correct.
Hanlon's Razor — Never attribute to malice what's explained by a stale cache, a race condition, or a misconfigured env var.
Hitchens's Razor — "What can be asserted without evidence can be dismissed without evidence." Useful for evaluating AI hype.
# The Numbers
Dunbar's Number (~150) — Cognitive limit on stable relationships. Beyond that you need formal structures. Agent analog: how many rules can you load before they start contradicting each other?
Miller's Law (7 ± 2) — Working memory capacity. Not about agents (their memory is the context window) but about you — how many parallel agent tasks can you track before you lose the thread?
# The Deeper Ones
Ashby's Law of Requisite Variety — A controller must have at least as much variety as the system it controls. The constraint layer needs enough expressiveness to bound the generation space. If your design tokens only cover color and spacing but not typography, the agent will generate valid colors with garbage type.
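The design-token example can be restated as a variety check: the constraint layer must cover every dimension the generator can vary. A minimal sketch, with illustrative dimension names:

```python
# Ashby's Law as a coverage check: the controller (design tokens) needs at
# least as much variety as the system it controls (the generation space).

# Illustrative: the dimensions an agent can vary when generating UI.
GENERATION_DIMENSIONS = {"color", "spacing", "typography", "motion"}

def uncovered(tokens: dict) -> set[str]:
    """Dimensions the agent can vary that the tokens don't constrain.

    This gap is exactly where "valid colors with garbage type" comes from.
    """
    return GENERATION_DIMENSIONS - set(tokens)

# A constraint layer that only covers color and spacing:
partial_tokens = {"color": ["#0af", "#fff"], "spacing": [4, 8, 16]}
```

A nonempty result means the controller has less variety than the system: whatever it leaves uncovered, the generator fills in unconstrained.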
The Anna Karenina Principle — Working systems are all alike. Broken systems are each broken in their own way. Debugging is harder than building because there's one way to be right and infinite ways to be wrong.
Wittgenstein's Ladder — Some concepts can only be understood by climbing through the explanation and then discarding it. Maybe these named principles are the ladder — you learn "Chesterton's Fence" as a rule, internalize it as intuition, and eventually stop naming it. The compression becomes invisible. That's what expertise is: named principles that have dissolved into judgment.
The thing that strikes me about this list: every one of these is someone compressing years of observation into a sentence. That's what axioms are. That's what a CLAUDE.md is. That's what the constraint layer is — human judgment, compressed into something portable enough for a machine to carry.
The transformer compresses a corpus into weights. The human compresses experience into a name. Same operation, different substrate.
Living document. Adding as I encounter them.