The Second Law of Software
Simplicity is prerequisite for reliability.
— Edsger W. Dijkstra, EWD498, How Do We Tell Truths That Might Hurt? (1975)
Many years ago, when I was still a junior, I opened a codebase I had written six months earlier. I remembered it as clean. It was not. High cyclomatic complexity, long methods, god classes, a helper that used to do one thing now did four, and a config file that explained three different decisions in three different ways. Not one of those commits had been a bad call at the time. And yet the whole thing had drifted.
This chapter is about that drift. The claim is strong: it is not an accident of that particular codebase, or that team, or that year. It is a regularity — a law — and naming it as a law is the first honest step toward fighting it.
I call it the second law because the name comes from two lineages a century and an industry apart: thermodynamics’ second law (the physical one) and Lehman’s second law of software evolution (the empirical one). Same phenomenon, two vocabularies.
1.1 The Claim
The law in one sentence: In any system that keeps changing, complexity tends to increase unless you spend energy against it.
That is the whole thesis of the book, and every chapter after this one is about a piece of it.
The law is older than software. Thermodynamics named its version of it in the nineteenth century: the entropy of a closed system does not spontaneously decrease. Manny Lehman named the software version in 1980, in a paper called Programs, Life Cycles, and Laws of Software Evolution:
As a program is evolved, its complexity increases unless work is done to maintain or reduce it.
Lehman was not reaching for a metaphor. He was reporting on decades of empirical study of how real production systems behave as they are modified over time. Working systems get more complex. They do not get simpler. Not on their own.
What does it mean to call this a “law”? Not an axiom, and not a prediction in the physicist’s sense. A law here is a regularity so consistent, across so many teams and systems and decades, that building a practice around it outperforms denying it. You do not need the law to be exception-free to take it seriously. You need it to be true often enough that behaving as if it is true beats behaving as if it isn’t. Lehman’s data was good enough. Your own career is probably good enough too, if you’re honest with yourself about the trajectory of every codebase you’ve worked on for more than two years.
We are not the first to apply this framing. Hunt and Thomas used “software entropy” as a chapter-sized metaphor in The Pragmatic Programmer to motivate their broken-windows argument. The Advanced Principles of Team Development book closes with a chapter titled Systems Entropy. This book picks up what they left as a summary and organizes a whole thesis around it. The framing is not new. Treating it as the spine is.
1.2 Why Entropy, Not Gravity
I looked at other metaphors before settling on entropy, and I think the choice matters enough to show my work.
Gravity is attractive. It is constant, it is unavoidable, and every engineer already has an intuition for it. The trouble is that gravity is symmetric and static. It pulls, but it does not grow. Entropy is the opposite: it is a statement about direction. Left alone, a system’s disorder increases. That directional asymmetry is the thing software actually does, and it is what the rest of this book has to explain. Gravity as a metaphor belongs in a narrower role, which we will come back to in Chapter 2 when we talk about dead code.
A sharper pushback: isn’t this just Fred Brooks’ essential vs. accidental complexity restated? It is not, and the distinction is worth spelling out. Brooks’ 1986 essay No Silver Bullet gave us the categories:
The complexity of software is an essential property, not an accidental one.
That sentence named the two kinds of complexity. It did not describe a dynamic. Brooks was answering the question where does complexity come from? The second law answers a different question: what does complexity do over time? Brooks’ categories are a taxonomy. The law is a gradient. You need both.
Rich Hickey’s simple vs. easy distinction, from his 2011 talk Simple Made Easy, is closer to what this book is doing, and it is worth holding carefully. Hickey’s point was that simple is about the thing (“one fold, one braid, not interleaved”) while easy is about the person (“near at hand, familiar”). The two are often confused, and the confusion is expensive. The second law complements that distinction by asking a further question: why does easy beat simple so reliably in real projects? The answer, which the rest of this chapter builds toward, is that the cost gradient in any working system points at easy. Simple does not lose because engineers don’t value it. Simple loses because nobody pays for it by default.
1.3 The Asymmetry at the Heart of the Law
The second law is not a mystery. It is an accounting fact about how cost is distributed across the life of a system.
Adding is local, cheap, and legible. A new feature lives in a pull request, ships in a deploy, and earns a line in the changelog. Whoever wrote it can point at it. The work is visible, the credit is traceable, and the cost appears to end at merge time.
Removing is global, scary, and invisible. A deletion has to prove a negative: that nothing, anywhere, quietly depends on what you are about to remove. That proof is slow, the risk is concentrated on the person who does the work, and the reward is “the system does the same thing, with less of it.” Nobody has ever been promoted for making the codebase smaller. I wish this were a joke.1
The cost gradient at every scale, for every rational actor, points toward _more_. That is the asymmetry. It is not a character flaw. It is not a generational problem. It is the default pressure on the system, and it applies to you just as much as to the person whose code you are currently judging.

[Figure 1.1: cost-gradient diagram. TO COME: the image file `images/fig-1-1-cost-gradient.png` does not exist yet; see `drafting/ch-01-figure-1-spec.md` for the design spec.]
Kent Beck makes the economic mechanism explicit in Tidy First? (Ch. 30). He calls it Constantine’s Equivalence:
cost(software) ≈ cost(change) ≈ cost(big changes) ≈ coupling
Read the chain left to right. The cost of software is the cost of changing it. The cost of changes is dominated by the big ones. And big changes are expensive because they cross coupling. Coupling is what turns a local addition into a global cost later. Every time you add without paying the coupling cost properly, you are borrowing against future work — and unlike financial debt, the interest rate is hidden until the bill arrives.
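As a minimal sketch of that mechanism (the functions below are invented for illustration, not taken from any real codebase or from Beck's book): two functions share the shape of one record, so an addition that looks local to one of them forces a matching edit in the other. That forced second edit is the coupling cost the chain above is pricing.

```python
# Hypothetical example: coupling turns a local addition into a global cost.
# Imagine make_order and order_total living in two different modules that
# share nothing but the shape of the order record.

def make_order(item: str, quantity: int) -> dict:
    # A later, perfectly local-looking commit adds a discount field here...
    return {"item": item, "quantity": quantity, "discount": 0.0}

def order_total(order: dict, unit_price: float) -> float:
    # ...and this function, in the other module, must change in the same
    # deploy, or every total in the system is silently wrong. The coupling
    # is the implicit contract on the dict's shape.
    return order["quantity"] * unit_price * (1.0 - order["discount"])

total = order_total(make_order("widget", 3), 10.0)
print(total)  # 30.0
```

The addition shipped as one small diff; the cost showed up as a second, mandatory diff somewhere else. Multiply that by every consumer of the record and you have read the equivalence left to right.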
Martin Fowler said it at the practitioner level decades earlier, in the first edition of Refactoring:
Without refactoring, the design of the program will decay. Loss of the structure of code has a cumulative effect. The harder it is to see the design in the code, the harder it is to preserve it, and the more rapidly it decays.
Cumulative is the load-bearing word. The decay is not a one-time event you clean up once — it is compound interest on unpaid disorder. We’ll come back to the mechanics of that compounding in Chapter 15. For now, the point is narrow: the law is not a cultural complaint about engineers. It is an economic fact about how addition and subtraction are priced.
1.4 The Physics Vocabulary
The rest of the book uses five physics words, and uses them precisely. I want to introduce them here, once, so we share a vocabulary by the time we leave this chapter. Each one gets a paragraph. After that, I promise not to mix analogies — one per paragraph, from now on, and never as decoration.
Entropy is the central law. It is the tendency of an evolving system’s disorder to increase unless you spend energy to maintain or reduce it. When this book talks about “the second law,” this is what it means. Entropy does not describe a single bad commit. It describes a direction across thousands of reasonable ones.
Gravity is what dead mass does to the living parts of a system. An unused module still shapes the changes in its environment — reviewers skim past it, new features quietly work around its shape, and nobody wants to be the person who deletes it and finds out what breaks. Gravity is not pressure toward more. It is the cost of carrying what is already there. We’ll come back to it in Chapter 2 (as one of the seven sources of disorder) and in Chapter 15 (where unpaid gravity compounds).
Inertia is the cost of redirecting a drifting system. Once a codebase has drifted in a direction for a year, you are not paying to make a change — you are paying to change and to overcome the drift you’re changing away from. Inertia is why the second year of a rewrite costs more than the first, not less.
Friction is the per-change cost added by unpaid entropy. Every small disorder — an unclear name, a wrong boundary, a duplicated rule — adds a tax to every future change that touches it. Friction is invisible on any single commit and obvious across a quarter.
Half-life is how long a given simplification holds before the system drifts enough that the same work needs to be done again. A renamed function stays well-named until the domain around it shifts. A clean boundary stays clean until the next three features quietly lean on it. Half-life is the reason simplification is a practice, not a project — which is the whole argument of Chapter 16.
That’s the vocabulary. If a later chapter reaches for a sixth physics word, you should suspect that chapter of skipping the argument — including this one.
1.5 The Law in the Age of AI
A fair objection at this point is: the world changed in late 2022, and Lehman’s law predates ChatGPT by more than forty years. Does it still hold?
It does, and the honest way to defend that claim is to say exactly how AI changes the picture, rather than pretending nothing is different.
The law is not about how code gets written. It is about what happens to code over time in a system that keeps changing. That mechanism is indifferent to who — or what — made the change. An LLM-authored pull request is still a change the system has to carry forward. The accounting from section 1.3 doesn’t care about the author’s biology.
What AI does do is press on the law in three specific places, and each one deserves its own chapter rather than a quick aside here:
- Comprehension gap. AI widens the distance between how much code exists in a system and how much any human has read, named, or pushed back on. That gap is entropy, measured in the currency that matters. Chapter 3 takes this up.
- Shifted energy budget. Writing code used to be the expensive part; it no longer is. But reading, reviewing, and naming have not gotten cheaper, so the energy budget for simplification has to move to match. Chapter 4 makes that case.
- Deletion problem. When a human writes a module, somebody knows what it was for. When generated code accumulates faster than it is pruned, nobody does. Deletion gets harder exactly when it should be easier. Chapter 9 takes this one.
There is no standalone AI chapter in this book, and the choice is deliberate. AI is a stressor on the law, not a new law. Chapters that timestamp current tooling age badly while laws age well. More often than not, when a book reaches for a tool-of-the-year chapter, it is borrowing relevance it hasn’t earned. This one won’t.
1.6 Worked Example: Thirteen Reasonable Commits
Let me make the compounding concrete, because the argument so far is abstract enough to feel like a critic’s complaint.
I once worked on a method that did one of the most important jobs in the system — registering a person for a training program, roughly. It started its life with two parameters. Over the years, more engineers than I can count — myself included — added to it, one reasonable requirement at a time. By the time I stepped back and really read it, it had thirteen. I counted twice.
None of the thirteen had been a mistake. Each one arrived with a new requirement: a license plate number for parking, food allergies and preferences, language preference, the secretary’s phone number. Every addition had a reviewer who signed off, a ticket that justified it, and a deploy that made it real. The commit history read as a sequence of reasonable decisions because that is exactly what it was.
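The drift is easy to sketch. What follows is a hypothetical reconstruction: the real method is not shown here, and every name below is invented to illustrate the shape of the growth, nothing more.

```python
import inspect

# Hypothetical reconstruction; all names are invented for illustration.

# Year one: the method does one job, with two parameters.
def register(person_id: str, program_id: str) -> dict:
    return {"person": person_id, "program": program_id}

# Years later: the same job, thirteen parameters, each one added by a
# reasonable, reviewed, ticketed commit.
def register_v13(person_id, program_id, start_date, license_plate,
                 food_allergies, food_preference, language,
                 secretary_phone, needs_parking, invoice_address,
                 cost_center, waitlist_ok, notify_manager):
    ...

# "I counted twice." So can you:
param_count = len(inspect.signature(register_v13).parameters)
print(param_count)  # 13
```

Notice that no single step in that history looks wrong. The only place the problem is visible is in the signature as a whole, which is exactly the view no individual commit ever presents.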
What the commit history did not show was the compound cost. Reading the method required holding more than ten concepts at once. Changing one of them could break invariants nobody had written down. The method became the part of the system everyone worked around — which made every change that had to touch it harder, not easier.
The visible cost was thirteen parameters. The invisible cost was that the cost of change, which section 1.3 named as the thing that actually matters, had multiplied. Every one of the thirteen additions followed the same pattern: local, legible, cheap to ship. The cost was distributed across every future reader, every new teammate, every incident responder.
That is what the second law looks like when you open a file. Not a catastrophe — a sequence of more than a dozen commits, each one reasonable in isolation, that quietly traded one unit of today’s work for many units of tomorrow’s. Somebody, eventually, had to pay that bill. Part II is about that payment.
Law in the Wild: Lehman’s Laws of Software Evolution. Lehman didn’t just name one law. Starting in 1974 and refined through 1980 and 1996, he and Belady catalogued eight laws of software evolution, derived from longitudinal studies of real systems (originally IBM’s OS/360, later dozens of others). The one this book organizes itself around is the second — Increasing Complexity — but two others are worth flagging here, because you’ll see them under different names later in the book:
- Law I: Continuing Change. A system that is used must continue to be changed, or it becomes progressively less useful. Freezing is not an option that survives contact with production.
- Law II: Increasing Complexity. The one above. Used here as the thesis.
- Law VII: Declining Quality. Unless rigorously maintained and adapted, systems will show declining quality over time. The cousin of Law II — essentially Law II reported from the user’s side of the screen.
Appendix B has the full list, together with the information-theory grounding (Shannon, Kolmogorov) that keeps the framing honest rather than decorative. If Lehman’s laws are new to you, the 1980 paper is short, readable, and freely available online. It reads better than most things published last week.
1.7 Where We Go From Here
The rest of Part I earns the frame.
Chapter 2 names the seven specific sources that the law feeds on: unclear boundaries, premature generalization, coupling by convenience, fear of deletion, copy-paste drift (with a careful cut between knowledge duplication and accidental duplication), organizational shape, and project-thinking itself. If the second law is the direction, the seven sources are the fuel.
Chapter 3 explains why the law is invisible even to experienced teams: why addition reads as progress, why senior engineers over-engineer more, why organizations reward features and not reductions, and why the reasonable-at-the-time trap hides the compound cost until it is too large to ignore.
Chapter 4 bridges to Part II by naming what “energy” means when we say energy spent against entropy. Spoiler: it is not heroics. It is attention, reading, review, naming, and willingness to delete — scarce, compounding, and political.
By the end of Part I, the goal is that _software drifts toward complexity unless you spend energy against it_ reads not as a slogan but as a regularity you recognize in every codebase you’ve ever worked on. Once that lands, the rest of the book is about where to spend the energy, how to spend it well, and what happens when you don’t.
In a nutshell: the law is real, the cost gradient points the wrong way by default, and the only thing that beats the gradient is deliberate work. The name for that work is simplification. Let’s go figure out what it looks like.