Customs and Practices
Continuous improvement in software is driven by awareness. When we notice what we are doing as we repeat familiar work, we sometimes spot a better way. The value of a “process” document is not to prescribe every small action. It is to retain and transmit hard-won knowledge that helps people do the work well.
That means the useful parts of a process are the small, practical “treasures” that are worth sharing widely. A clause belongs in a coding standard or style guide when it helps teams avoid recurring mistakes, communicate consistently, or preserve clarity across time. Rules that are overly specific, brittle across platforms, or routinely violated in real builds do not retain knowledge. They create noise.
In mature manufacturing, continuous improvement is led by the people closest to the work. Software should be the same. Improvements should be discovered at the codeface and pushed upward, not imposed as static doctrine from above. When process is treated as untouchable, teams stop learning. When process is treated as coercion, teams stop telling the truth.
Over the past few decades, software tools, languages, and models have evolved dramatically. Yet many coding standards have barely evolved at all. Some of the early debates that produced genuinely useful conventions were frozen into ritual and then copied from one document to another. The result is often a set of inherited rules that are disconnected from modern development environments and modern failure modes.
A modern engineer working in a rich environment gains little from sanctimonious reminders about ancient taboos and superficial formatting rituals. What they need are conventions and practices that support today’s realities: collaboration at scale, rapid iteration, safe change, and long-lived systems.
The Codeface Leads
Process should be a high-visibility channel for preserving practical wisdom. The point is not exhaustive prescription. The point is to capture the techniques that help a team build and maintain systems reliably.
When people closest to the code are allowed to lead improvement, the process becomes a living artifact. When they are not, process easily becomes a mechanism of fear: a way to police compliance and defend past decisions, rather than a way to learn.
One symptom of a failing process culture is that stylistic debate becomes either frozen dogma or pointless “religious war.” In a healthy culture, the team revisits style and convention when new languages, libraries, and architectural patterns create new tradeoffs. The discussion is serious because it affects real systems.
Who Stole My Vole?
This section is about complexity. We will start with a thought experiment involving an imaginary Martian ecology.
On Mars there are rocks. There are also two lifeforms: Martians, who eat voles, and voles, who eat the rocks they hide behind. Martians spend their time watching for voles darting between rocks. Because rocks stretch to the horizon in every direction, Martians evolved four large eyes on stalks, each pointing a different way.
Martian evolution, in this slow environment, progressed almost entirely in the direction of vole spotting. Each eye is backed by a large “visual cortex.” These sub-brains are cross-connected so the Martian can compensate for difficult conditions. Martians do so much processing up front that they do not have a single, human-like focus of attention; instead, they attend to how input interacts across their four parallel “attentions.”
When a Martian spots a vole, it must sneak up while keeping a rock between itself and the vole. That requires intelligence. Soon after developing intelligence, Martians invented literature. It is scratched on rocks with smaller rocks. It follows Martian grammar and uses four “voices” that roughly separate emotion, action, speech, and circumstance.
In the Martian canon, a great tragedy might be called Who Stole My Vole? It can be represented as a simple grid:
| Emotion | Action | Speech | Circumstance |
|---|---|---|---|
| Grumpy | Sneak | Horrid Martian | Cold, gloomy |
| Determined | Bash | Die! Die! Die! | Old Martian’s cave |
| Ashamed | Steal vole | | Dead Martian |
Now imagine a Martian programmer reading a program that humans have labored to make linear and readable. The Martian’s brain is already tuned for understanding complex relationships between independent activities. What humans must translate into a line of symbols is already obvious to the Martian. Yet the very act of squeezing meaning into a single linear rendering can make it harder for a differently-structured mind to apprehend.
The point is simple. Complexity is not absolute. It is relative to the mental structures we use to understand a domain. We do not need aliens to see this difference. People vary widely in what they find complex, and what they find obvious. That is part of why “mental maps” matter.
When you learn a domain well, you do not remove complexity from the world. You gain a structure that lets you perceive it as organized. A novice sees clutter and chaos. An expert sees systems, roles, and patterns. The world has not changed. Their understanding has.
In software, this matters because many workplace habits borrow the logic of “bricks” and apply it to “information.” In physical logistics, simplicity is often enforced by standardizing how we describe and move things. In software, the work is the organization of information itself. If you force the representation to be simple regardless of the domain, you often lose the information you need later. That loss is not free.
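An invented miniature of that loss in C: flatten a structured record into “simple” text, and the distinctions you later need are gone.

```c
#include <stdio.h>

/* Structured representation: the domain's distinctions are kept. */
struct address {
    char street[32];
    char city[32];
    char postcode[16];
};

int main(void)
{
    struct address a = { "12 Foundry Lane", "Sheffield", "S1 2AB" };

    /* "Simplified" representation: one flat string. Easier to store,
       but the field boundaries are now implicit and ambiguous. */
    char flat[96];
    snprintf(flat, sizeof flat, "%s %s %s", a.street, a.city, a.postcode);

    printf("structured city: %s\n", a.city);   /* trivially recoverable */
    printf("flat record:     %s\n", flat);     /* which words are the city? */
    return 0;
}
```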
The question is not “How do we remove complexity?” The question is “How do we represent the inherent complexity of the problem in a way that is navigable for the people who must build and maintain the system?”
- Some tools and languages feel “easy” because decades of work have gone into stabilizing their idioms.
- They often cover a constrained class of tasks well.
- They may trade simplicity of use for cost elsewhere, such as performance or expressiveness.
There is no absolute measure of complexity. Complexity analysis can be valuable, but it must not become a moral crusade against the inherent structure of the problem. The goal is to make the structure visible and manageable, not to pretend it is not there.
Reviews and Previews
A coercive process culture treats review as enforcement. Do the ritual, or else. The rule must be policed, which means checking after the fact. The implied model is that workers are untrustworthy and must be caught breaking rules.
Reviews do have a legitimate role. They can catch oversights and fix defects before delivery. But a review cannot turn a bad design into a good one late in the cycle. It can polish. It cannot transform.
Many teams over-invest in late review and under-invest in early shared thinking. This leads to a familiar failure mode: by the time the group sees the work, the structure is already set. If the design is wrong, it is too late to change without major cost. So the group silently colludes to avoid confronting big problems and instead argues about minor stylistic issues. This does not improve quality.
A better approach is to shift effort forward. Use previews early, when options are still open. Agree on a direction, key tradeoffs, and risks before the work hardens. Then use review later for what review does well: catching defects, confirming assumptions, and ensuring the delivered work matches the shared intent.
Code Inspections and Step Checks
Code inspections often exist for a sensible reason: when you have done the job, look at what you have done and check it is OK. But inspections also carry historical baggage.
In earlier eras, code was written on sheets, transcribed onto cards, and compiled at great expense and delay. Teams learned to examine code carefully before the compiler run because a single attempt could cost a week. Today, we have instant compilation, tests, linters, and debuggers. The original economics are gone, but the ritual sometimes remains.
Inspection is costly. It should focus on what machines are bad at and humans are good at: assumptions, intent, boundary cases, and design correctness. It should not devolve into manual syntax checking, which compilers do better, faster, and more reliably.
A practical way to improve the value of inspection is to split it into two phases.
- Individual step checks. The author single-steps logic with a debugger (or equivalent tooling) and verifies each branch and state transition, as sketched after this list. This focuses attention and catches many defects early.
- Group inspection. The group focuses on intent, implicit assumptions, maintainability, and whether the solution matches the design direction agreed earlier.
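As a minimal sketch of the first phase, with assertions standing in for a debugger walk, the C fragment below exercises one case per branch. The function, its values, and the clamping logic are invented for illustration.

```c
#include <assert.h>

/* Hypothetical function under step check: clamp a reading into a range. */
static int clamp(int value, int lo, int hi)
{
    if (value < lo)   /* branch 1: below range */
        return lo;
    if (value > hi)   /* branch 2: above range */
        return hi;
    return value;     /* branch 3: in range */
}

int main(void)
{
    /* One case per branch, mirroring what the author would confirm
       while single-stepping in a debugger. */
    assert(clamp(-5, 0, 10) == 0);   /* drives branch 1 */
    assert(clamp(99, 0, 10) == 10);  /* drives branch 2 */
    assert(clamp(7, 0, 10) == 7);    /* drives branch 3 */
    return 0;
}
```

The harness is incidental; the discipline is that every branch is seen to fire before the group ever meets.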
Done this way, inspections are less likely to collapse into expensive arguments about superficial style. They become a method for sharing understanding and reducing risk.
Coding Standards and Style Guides
Coding standards and style guides are often discussed as if they have one clear purpose. They do not. They are pulled between two motivations.
One motivation is productive: to preserve clarity, reduce friction between people, and retain knowledge across time. The other is disciplinary: to enable policing, blame, and the illusion of control. When these are mixed, standards become confused and sometimes harmful.
If we treat quality as grounded in understanding and control, we can set better goals for standards.
Clarity is not the same as simplicity. A language’s expressive power can improve clarity when used as shared idiom. Dense expression is not inherently worse than verbose expression. Mathematics became easier to communicate when notation improved. Programs can similarly be clearer when they are succinct and structured—provided the team shares the idioms and documents them when needed.
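As a small illustration (invented here, not from the original text), compare two C renderings of the same string copy. Neither is wrong; which reads as clearer depends on whether the maintainer shares the idiom.

```c
#include <stdio.h>

/* Verbose rendering: every step spelled out. */
static void copy_verbose(char *dst, const char *src)
{
    int i = 0;
    while (src[i] != '\0') {
        dst[i] = src[i];
        i = i + 1;
    }
    dst[i] = '\0';
}

/* Dense rendering: a classic C idiom. Opaque to a newcomer,
   instantly recognizable to anyone fluent in the idiom. */
static void copy_dense(char *dst, const char *src)
{
    while ((*dst++ = *src++) != '\0')
        ;
}

int main(void)
{
    char a[32], b[32];
    copy_verbose(a, "shared idiom");
    copy_dense(b, "shared idiom");
    printf("%s / %s\n", a, b);
    return 0;
}
```

A team that documents the dense form once, as shared vocabulary, gets its brevity without the confusion.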
Conventions have a cost. Before adopting a convention, ask what it buys you and what it burdens. Conventions that make code uglier, slow down reading, or force unnatural naming schemes often harm morale and reduce elegance. The goal is not to maximize rule count. The goal is to help the team build a great system.
Prefer guides that teach over rules that constrain. A good guide includes examples, project vocabulary, patterns that work, patterns that fail, and an “example module” that reflects the team’s ideals. This serves better than a list of imperatives that restrict thoughtful design.
Some imperatives are justified. There are tools and functions that are unsafe or historically error-prone. Banning them can be reasonable. But blanket prohibitions can also backfire when they force worse clarity or worse performance.
The right posture is pragmatic. Avoid twisting structure into unnatural shapes purely to satisfy a dogma. Code is communication. The communication must be readable to the expected maintainer, and that often requires balancing elegance, idiom, and explicitness.
Two examples illustrate the point.
Example 1: Tail recursion elimination. In some performance-sensitive cases, a small local jump can remove significant call/return overhead when traversing a structure. This can be a practical optimization when the language/compiler will not do it for you.
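A minimal sketch of the idea in C, assuming a linked-list traversal and a compiler that does not perform the elimination itself:

```c
#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* Recursive rendering: one tail call per element. If the compiler
   does not eliminate it, each element costs a stack frame. */
static int last_value_recursive(const struct node *n)
{
    if (n->next == NULL)
        return n->value;
    return last_value_recursive(n->next);   /* the tail call */
}

/* Hand-eliminated rendering: the tail call becomes a local jump
   (here a loop), reusing one frame for the whole traversal. */
static int last_value_iterative(const struct node *n)
{
    while (n->next != NULL)
        n = n->next;
    return n->value;
}

int main(void)
{
    struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
    return last_value_recursive(&a) == last_value_iterative(&a) ? 0 : 1;
}
```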
Example 2: Restart logic in procedural scripts. When a sequence of steps must restart cleanly after any failure, deeply nested conditionals can obscure intent. A structured alternative may be a short-circuit chain of functions, or (in some low-level scripting contexts) a simple restart label that makes the intent obvious.
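A sketch of the second shape in C, with hypothetical step functions and a bounded retry added so the example terminates. The short-circuit chain stops at the first failing step, and the restart label keeps the recovery path visible instead of burying it in nested conditionals.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical steps; each returns true on success. */
static bool acquire(void)   { return true; }
static bool configure(void) { return true; }
static bool transfer(void)  { return true; }

int main(void)
{
    int attempts = 0;

restart:
    /* Short-circuit chain: evaluation stops at the first failure. */
    if (!(acquire() && configure() && transfer())) {
        if (++attempts < 3) {
            fprintf(stderr, "step failed, restarting sequence\n");
            goto restart;
        }
        fprintf(stderr, "giving up after %d attempts\n", attempts);
        return 1;
    }
    puts("sequence complete");
    return 0;
}
```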
The larger lesson is to treat language features as tools. Use them to preserve clarity and reduce accidental complexity, not to satisfy inherited taboos.
Meaningful Metrics
Metrics are expensive. They can also be valuable. The key is to know why you are collecting numbers and how you will use them.
There are three broad motives for measurement. All can be legitimate, but confusion between them leads to waste and perverse incentives.
Descriptive Science
This is exploratory measurement. You collect data to see what patterns exist, without assuming you already know what matters. In science and engineering, rich descriptive observation often precedes useful theory.
Software has suffered from importing factory-style metrics into a knowledge-intensive activity without adapting them to human and cognitive realities. Descriptive work can reveal surprising factors that affect quality and throughput, including seasonality, interruptions, tooling friction, onboarding patterns, and system coupling.
Experimental Science
This is measurement with a change. You adjust one variable, try to hold other factors steady, and see if outcomes match expectations. In software, this can be difficult because cycle times are long and projects differ. Still, it is possible to look for large effects that overwhelm the noise, or to design smaller experiments within a team’s control.
Cybernetic Technology
This is measurement tied to a control loop. Before you measure, you know what the measurement means and what you will adjust in response. This is the ideal: measurement as part of a feedback system that steadily improves outcomes.
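To make the stance concrete, here is a toy proportional control loop in C. The setpoint, gain, and the “system” being adjusted are all invented; the point is only that the meaning of the measurement, and the response to it, are fixed before anything is measured.

```c
#include <stdio.h>

int main(void)
{
    double setpoint = 20.0;   /* the target, agreed in advance */
    double actual   = 12.0;   /* the measured state of the system */

    for (int tick = 0; tick < 5; tick++) {
        double error = setpoint - actual;   /* measurement with a known meaning */
        actual += 0.5 * error;              /* pre-agreed response to it */
        printf("tick %d: actual = %.2f\n", tick, actual);
    }
    return 0;
}
```

A metrics program without the equivalents of setpoint, error, and response is collecting numbers, not controlling anything.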
In practice, software rarely achieves full cybernetic control. The field is complex and context-dependent. But teams can build partial control loops with strong heuristics, provided they do not pretend that numbers alone confer mastery.
The recurring warning is the same: do not put the cart before the horse. When metrics are collected without a clear model, they collapse into bean-counting. People optimize what is counted rather than what matters. “Bad” statistics become a stick to beat people with instead of a signal to refine the system.
Metrics do not replace responsibility. They support it.
Attitude to Tools
How a designer thinks strongly affects how they relate to tools.
One approach treats tools as machines that do jobs. Used this way, tools are mostly black boxes: you feed in inputs and expect outputs. This can work, but it encourages superficial use. A small amount of time spent understanding compilers, linkers, runtime behavior, and build systems often pays back repeatedly.
Another approach treats tools as mind prosthetics. Tools extend reach and awareness. They support exploration, refactoring, and verification. They are used deliberately, and they are chosen for how well they integrate with other tools and how easily they can be composed.
Be cautious of expensive “do everything” tooling that locks data away, is hard to adapt, and encourages the fantasy that programming can be automated into bureaucracy. Valuable tools exist, but the right response to glossy claims is often: what exactly does this give us, and what could we achieve with smaller, more flexible pieces?
Software Structures are Problem Structures
Software structures tend to mirror problem structures. As designers gain experience, they naturally learn idioms and patterns that make mapping a domain into code easier.
A common mistake is to become ashamed of seeing solutions clearly. Some designers start pretending they cannot see the solution while they speak about it, as if insight itself were suspicious. That posture does not add rigor. It adds confusion.
Skill is an asset. If you can state that solution Y fits problem X and explain why, you help the organization. Rigor matters when it has a purpose, such as maintaining independence from a changing implementation or validating assumptions. But pretending to be less insightful than you are is not a virtue.
Root Cause Analysis
Root cause analysis exists to understand why something went wrong and to reduce the chance it happens again. In healthy engineering cultures, this is normal work. In blame-oriented cultures, it becomes uncomfortable because it challenges the idea that the process is perfect and only individuals fail.
To do root cause analysis well, focus on what happened, not how to translate events into process language. If the story is always told as “someone failed to follow the process,” the conclusion will predictably be blame plus more paperwork. That rarely fixes the underlying system.
Causes can be grouped by how they relate to process:
Unconnected
A factor outside process control. For example, widespread illness disrupts staffing for a period. You cannot “fix the process” to prevent it, but you can plan for resilience and document observations that improve risk management.
Operational
A required action was not performed. Even here, “someone forgot” is not a root cause. Ask why it was missed. Training gaps, unclear responsibility, ambiguous procedures, interruption-heavy roles, or misaligned incentives are common drivers.
Ergonomic
The process makes sense in principle, but its implementation is not viable in real conditions. Interruptions, tool friction, staffing patterns, or workspace constraints can make correct execution unrealistic. Keep the intent, but change the implementation to fit reality.
Procedural
The process itself is wrong. It encodes a flawed assumption and systematically produces incorrect outcomes. In these cases, change the process.
The underlying lesson is to see what is there, not what you are told to see.
Complexity Matching and Incremental Boildown
Interesting systems have operations that increase state complexity and operations that reduce it. Much formal “engineering” attention goes to growth: adding features, layers, abstractions, and artifacts. The opportunities to shrink a system are often taken only when someone chooses to see them.
Obfuscation commonly arrives in pairs: one device that creates complexity and a second device required to undo it. This appears in requirements too, where users request old-system workarounds and the procedural consequences of past limitations. Rebuilding those patterns imports needless complexity into the new system.
Modern object libraries and layered frameworks can hide cost and accumulate invisible currency conversions. The problem may not appear until scale or performance constraints make the hidden cost undeniable. Projects that do not periodically examine class hierarchies, internal representations, and real-world use cases can drift into trouble without warning.
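A tiny invented example of such a conversion pair in C: one layer flattens an integer into text only for the next layer to parse it straight back.

```c
#include <stdio.h>
#include <stdlib.h>

/* Layer A converts the value into a "simple" textual currency... */
static void store(char *buf, size_t n, int value)
{
    snprintf(buf, n, "%d", value);
}

/* ...and layer B immediately converts it back. */
static int load(const char *buf)
{
    return atoi(buf);
}

int main(void)
{
    char wire[16];
    store(wire, sizeof wire, 42);

    /* The two conversions only cancel each other out; each one
       costs cycles and is a place for bugs to live. */
    printf("%d\n", load(wire));
    return 0;
}
```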
Conceptual integrity remains one of the best defenses against bloat. Compare mental models. Look for mismatched currencies. Simplify where simplification is real, not just displacement.
The Infinite Regress of ‘Software Architectures’
There is a failure mode where teams apply a method or framework that does not fit the problem category, then become trapped seeing the problem only through that method’s lens. The result is an architecture that recreates the problem as a layered abstraction over itself.
In practical terms, this looks like generating glue structures that do little more than call into the very system facility the project was supposed to design. The method can be excellent in its intended domain and still be the wrong tool for this job.
When that happens, the team’s energy shifts from building the system to defending the ceremony. Labels become status markers. “Programming” is treated as beneath the work. The actual work, however, does not go away. It returns as stress, delay, and brittle delivery.
There is a useful caution here for everyone. Be wary of “pass the parcel” designs where complexity seems to vanish by magic. Complexity either becomes simpler because you found a deeper view, or it reappears elsewhere because you moved it. If a hard constraint disappears without explanation, assume it will reappear later.
A concrete reminder is atomicity. Some guarantees cannot be conjured purely by rearranging user-level logic. If you need a truly atomic operation, it must come from the platform primitives (hardware or OS). Good layering hides this reality. Bad layering denies it.
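A minimal C11 sketch of the distinction. The naive version can lose updates under concurrency no matter how the surrounding user-level logic is arranged; the safe version gets its guarantee from a hardware primitive that the platform exposes.

```c
#include <stdatomic.h>
#include <stdio.h>

/* NOT atomic: two threads can both read the same old value,
   and one of the increments is silently lost. */
static int naive_counter;

static void naive_increment(void)
{
    naive_counter = naive_counter + 1;   /* read-modify-write race */
}

/* Atomic: C11 exposes the platform primitive directly, and the
   increment compiles down to a hardware atomic operation. */
static atomic_int safe_counter;

static void safe_increment(void)
{
    atomic_fetch_add(&safe_counter, 1);
}

int main(void)
{
    naive_increment();
    safe_increment();
    printf("naive = %d, safe = %d\n", naive_counter, atomic_load(&safe_counter));
    return 0;
}
```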
The Quality Audit
In a healthy view, a process is a protocol for communicating and coordinating work. It offers facilities. It supports learning. It is not a weapon.
In a fear-driven view, an audit becomes an ordeal: staff avoid auditors, managers brief teams to say as little as possible, and auditors treat “non-compliance” as personal failure. The process is assumed perfect, and the individual becomes the patsy for systemic flaws. This damages morale and blocks improvement.
A modern audit can be constructive when it follows a different stance.
- Audit the process, not the person. Assume staff are acting in good faith. Repeated issues usually indicate systemic design problems in the process or its implementation.
- Compare process facilities to business needs. A single global rule can be sensible in one domain and pointless in another. The auditor should evaluate local realities and whether the process supports them.
- Treat auditors as specialist colleagues. Done well, auditors bring broad experience across organizations. When teams can speak openly, auditors can help identify practical improvements rather than enforcing ritual.
Quality work is, at its core, an information management problem: keeping the right artifacts, in the right form, findable when needed, without drowning the team in ceremony. When audits focus on that goal, they support engineering rather than undermining it.
Originally written in the late 1990s and refreshed for publication in 2026. Modern companion pages for each section will expand the examples and update the technical references.