Writing more code does not make a system better. Poorly designed systems become harder to change with every line added. A senior engineer who spends a week on architecture decisions before writing a single line is not procrastinating; they are preventing months of rework. The quality of a system’s design determines how expensive every future change will be.
Analysis Briefing
- Topic: System design, architecture decisions, and engineering leverage
- Analyst: Mike D (@MrComputerScience)
- Context: A back-and-forth with Claude Sonnet 4.6 that went deeper than expected
- Source: Pithy Cyborg | AI News Made Simple
- Key Question: Why does a junior engineer writing twice as many lines of code often produce less value than a senior engineer writing half as many?
How Bad Design Compounds Over Time
A system is a collection of components with boundaries and contracts between them. When those boundaries are poorly defined, components become entangled. Changing one component requires understanding and modifying others. The cost of change grows with the surface area of entanglement.
This is technical debt in its most concrete form. An early decision to store business logic directly in database stored procedures makes it impossible to switch databases later without rewriting the application. An early decision to hardcode configuration values in application code makes every environment change a code change and a deploy. An early decision to skip a message queue and call services synchronously makes the entire system fragile: one slow service stalls every caller in the chain.
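The configuration example is the easiest to see in code. A minimal sketch (all names and defaults here are illustrative, not from the source): the same build runs in every environment, and only environment variables differ.

```python
import os

# Hardcoded configuration makes every environment change a code change:
# DB_HOST = "prod-db.internal"   # switching hosts now requires a redeploy

# Externalized configuration: read the environment at startup instead of
# baking values into the code. Variable names are hypothetical.
DB_HOST = os.environ.get("DB_HOST", "localhost")
DB_PORT = int(os.environ.get("DB_PORT", "5432"))

def connection_string() -> str:
    # Changing environments now means changing variables, not redeploying.
    return f"postgresql://{DB_HOST}:{DB_PORT}/app"
```

The payoff is that promoting a build from staging to production touches zero lines of code.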
None of these decisions are obviously wrong in week one when the system is small. They become painfully expensive in month twelve when the team is large, the codebase is 200,000 lines, and every change requires touching ten files to accomplish what should be a one-line update.
The Design Decisions That Matter Most Early
Boundaries between components. Where one service ends and another begins determines who owns what, how changes propagate, and what can be scaled or replaced independently. Drawing those boundaries correctly requires understanding which parts of the system will change together and which will change independently. Microservices that are too fine-grained create distributed monolith problems: services that must be deployed together because their contracts are too tightly coupled.
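One way to make a boundary concrete is a narrow, explicit contract that callers depend on instead of the implementation. A hedged sketch, using Python's structural `Protocol` and entirely hypothetical names:

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """The only surface ordering code may depend on. Everything behind it
    (vendor SDK, retries, credentials) can change without touching callers."""
    def charge(self, order_id: str, amount_cents: int) -> bool: ...

class FakeGateway:
    """An in-memory stand-in that satisfies the contract structurally."""
    def __init__(self) -> None:
        self.charges: list[tuple[str, int]] = []

    def charge(self, order_id: str, amount_cents: int) -> bool:
        self.charges.append((order_id, amount_cents))
        return True

def place_order(gateway: PaymentGateway, order_id: str, amount_cents: int) -> str:
    # The caller knows only the contract, not the implementation behind it.
    return "confirmed" if gateway.charge(order_id, amount_cents) else "failed"
```

Because the boundary is the `Protocol`, the gateway can be swapped or scaled independently, which is exactly what a well-drawn service boundary buys you.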
Data ownership. Which component is the authoritative source for each piece of data? A system where multiple services can write to the same database table without coordination will eventually produce inconsistencies that are expensive to detect and nearly impossible to fix retroactively. Defining data ownership upfront prevents the entire class of “who changed this and when” debugging sessions.
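The single-writer idea can be sketched in a few lines (hypothetical services and fields; in a real system the owner would sit behind an API, not a shared object):

```python
class CustomerService:
    """Authoritative owner of customer data: the only code path that writes."""
    def __init__(self) -> None:
        self._customers: dict[str, dict] = {}

    def update_email(self, customer_id: str, email: str) -> None:
        record = self._customers.setdefault(customer_id, {})
        record["email"] = email  # one write path: "who changed this" is auditable

    def get(self, customer_id: str) -> dict:
        # Hand out copies, never references to the owned records.
        return dict(self._customers.get(customer_id, {}))

class BillingService:
    """A consumer: reads through the owner, never writes the data itself."""
    def __init__(self, customers: CustomerService) -> None:
        self._customers = customers

    def invoice_email(self, customer_id: str) -> str:
        return self._customers.get(customer_id).get("email", "")
```

With every mutation funneled through one service, the "multiple writers to one table" inconsistency class cannot arise.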
Failure modes. What happens when a downstream service is slow or unavailable? A system designed with this question answered upfront (circuit breakers, timeouts, fallback behavior, graceful degradation) handles failures predictably. A system designed without it handles failures randomly, and random failure behavior in production is the most expensive kind.
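A minimal circuit-breaker sketch shows what "designed with this question answered" looks like in code. This is an illustrative toy, not a production implementation (real systems typically reach for a hardened library):

```python
import time

class CircuitBreaker:
    """After `max_failures` consecutive errors the circuit opens, and calls
    fail fast to a fallback until `reset_after` seconds have passed."""
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()      # fail fast: don't hit the sick service
            self.opened_at = None      # half-open: allow one real attempt
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()          # graceful degradation, not an error page
        self.failures = 0
        return result
```

The key property: once the circuit opens, a slow downstream service stops consuming your threads and your users get a predictable fallback instead of a timeout.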
What Senior Engineers Do Instead of Writing More Code
Senior engineers write less code per feature than junior engineers because they spend time on decisions that reduce total code. They extract shared patterns into abstractions that eliminate duplication. They identify requirements that sound complex but map to a much simpler design. They push back on features that add complexity without proportional value.
The question a senior engineer asks before implementing anything is: “what is the simplest design that correctly solves this problem?” Not the cleverest, not the most general, not the most impressive. The simplest. Complexity is a debt paid on every future change to the system.
Code review from a senior engineer is often about design, not syntax. “Why does this service need to know about the existence of this other service?” and “what happens to this queue if the consumer is down for two hours?” are design questions that catch expensive mistakes before they become permanent.
The accumulated judgment from seeing multiple systems fail at scale is what makes system design experience valuable. Distributed systems failures follow recognizable patterns: the database that was not indexed for the query pattern that emerged at scale, the synchronous integration that became a cascading failure, the shared mutable state that produced race conditions under load. Recognizing those patterns before they happen is worth more than fast typing.
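The shared-mutable-state pattern is small enough to sketch. A read-modify-write like `counter[0] += 1` is three steps (load, add, store), so two concurrent writers can read the same value and lose an update; a lock makes the step atomic. Names here are illustrative:

```python
import threading

counter_lock = threading.Lock()

def unsafe_increment(counter: list, n: int) -> None:
    # Lost-update race: two threads can load the same value before either
    # stores, so increments silently disappear under load.
    for _ in range(n):
        counter[0] += 1

def safe_increment(counter: list, n: int) -> None:
    for _ in range(n):
        with counter_lock:   # the critical section makes read-modify-write atomic
            counter[0] += 1
```

The race in `unsafe_increment` may not show up in a quick test, which is precisely why this class of bug is usually found in production rather than review.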
What This Means For You
- Write an architecture decision record (ADR) for any significant design choice, capturing what you decided, what alternatives you considered, and why you chose what you chose, because the context behind decisions is always lost within months and re-litigating decided questions wastes more time than documenting them upfront.
- Explicitly define data ownership for every entity in your system before writing service code, because disputes over which service is authoritative for a given piece of data are among the most expensive coordination failures in distributed systems and they are entirely preventable.
- Design for the failure case first, not last, because a feature that works perfectly when everything is healthy but produces incorrect behavior when a downstream service is slow will always be found by users at the worst possible time.
- Measure lines of code deleted as a positive metric, because a senior engineer who ships a refactor that removes 5,000 lines of duplicated logic while preserving behavior has created enormous leverage for every future change to that code.
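For the ADR recommendation above, a minimal skeleton is enough to capture decision, alternatives, and rationale. The title and details below are hypothetical, shown only to illustrate the shape:

```markdown
# ADR-007: Use a message queue between ingestion and processing

## Status
Accepted

## Context
Synchronous calls from ingestion to processing made ingestion fail
whenever processing was slow.

## Decision
Ingestion publishes events to a queue; processing consumes them
asynchronously.

## Alternatives considered
- Synchronous calls with retries: simpler, but still couples availability.
- A database table as a queue: no new infrastructure, but polling adds
  latency and contention.

## Consequences
Ingestion stays available when processing is down. We accept eventual
consistency and must monitor queue depth.
```

A few paragraphs like this, committed next to the code, is what keeps a decided question from being re-litigated six months later.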
Enjoyed this deep dive? Join my inner circle:
- Pithy Cyborg | AI News Made Simple → AI news made simple without hype.
