Some Weird Stuff


This page is a modernized edit of an older Reciprocality essay. The title and section structure are preserved, but the wording is refreshed for today. Each section is a pointer to a fuller, modern page that will be developed separately.

Part of The Programmers’ Stone — return to the main guide for the full series and chapter index.

Richard Feynman

For anyone trying to bring a mapper’s strengths into a workplace, Richard Feynman remains a useful model. He had a talent for cutting through status games and “smart-sounding” explanations by insisting on plain facts and simple tests.

One of his best-known lessons is about labels. A child can learn names in multiple languages and still know nothing about the thing itself. The real knowledge comes from watching what the thing does, what changes, what stays the same, and what causes what.

Feynman also showed a rare kind of intellectual honesty. He would say “I don’t know,” then keep digging until the mechanism was clear. That approach is deeply compatible with mapping: reduce the story to what can be checked, then rebuild the explanation from the inside out.

George Spencer-Brown

The Laws of Form is a small book that can trigger large shifts in how you think. It explores how much logic can be generated from a single act of distinction. For programmers, that idea lands immediately. Many “complicated” systems are just a few distinctions repeated, nested, and disguised.
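Spencer-Brown's own calculus is beyond a short sketch, but a familiar analogue from Boolean logic makes the same point. A single primitive operation, here NAND, is enough to generate every other logical connective by repetition and nesting. This illustration is mine, not an example from the book:

```python
# One primitive distinction: NAND ("not both").
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Every other connective is just NAND repeated and nested.
def NOT(a: bool) -> bool:
    return nand(a, a)

def AND(a: bool, b: bool) -> bool:
    return NOT(nand(a, b))

def OR(a: bool, b: bool) -> bool:
    return nand(NOT(a), NOT(b))

# One distinction reproduces the whole truth table.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
    assert NOT(a) == (not a)
```

The "complicated" system of propositional logic collapses into one distinction applied to itself, which is exactly the kind of simplification the essay is describing.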

Spencer-Brown also has a recurring theme that matters far beyond mathematics: progress often looks like making the covert overt. You realize you have been following rules you never named. Once you name them, you gain freedom to reframe the problem.

He also describes a subtlety that mappers recognize. Finding a solution is not always about searching for something hidden. Sometimes the decisive fact is already visible, and the work is to see why it matters.

That perspective is also a clean summary of the mapper/packer barrier. Packers can be highly competent inside familiar procedures, yet struggle to notice the assumptions their procedures depend on. Mappers notice the assumptions first, because that is where leverage lives.

In discovery, the difficulty is often not to find what is hidden, but to recognize the relevance of what is already in full view.

The takeaway is simple. Big gains can come from ultimate simplicity, as long as you choose the right distinction and hold it steady.

Physics Textbook as Cultural Construct

People invite us to see the world in particular ways: users with strong beliefs, managers with frameworks, consultants with slogans, and our own habits. Mapping pushes in the opposite direction. It asks what is really going on, then tries to represent it as simply as possible.

Even in physics, the “order of explanation” can shape the student’s model of reality. Some presentations build understanding around deep invariants early. Others build exam-friendly calculation habits first, and only later attempt to correct the underlying picture.

One way to see this is to compare how different great communicators structure the same material. The ordering matters. If you teach special cases before the moving universe that generates them, you can produce students who can calculate, yet still picture reality as static.

Principia

  • Newton’s Three Laws of Motion
  • Orbits in gravitation (including raising and lowering things)
  • Motion in resistive media
  • Hydrostatics
  • Pendulums
  • Motions through fluids

Red Books

  • Energy
  • Time and distance
  • Gravitation
  • Motion
  • Newton’s Three Laws
  • Raising and lowering things
  • Pendulums
  • Hydrostatics and flow

Advanced Level Physics

  • Newton’s Three Laws
  • Pendulums
  • Hydrostatics
  • Gravitation
  • Energy

The question is not “which list is correct.” The question is what kind of mental model the ordering tends to produce. Mappers care about the model, not just the ability to do sums.

Are Electrons Conscious?

This section is not here to argue for a particular answer. It exists to point out how close programming gets to questions that sound like philosophy until you meet them in practice.

Consciousness research sometimes asks whether awareness can emerge from interactions between units that are not themselves conscious. That invites a “how little is enough?” kind of question. But there is another route into the topic that programmers can actually test: how understanding changes the actor.

When you truly understand a pattern like deadlock or livelock, it becomes part of you. You start seeing it outside computing. You start designing your life to avoid the same failure modes.

A deadlock occurs when two processes each hold a resource and wait forever for the resource the other holds. A livelock occurs when they keep moving, but keep yielding in a way that prevents progress. Once you recognize these patterns, you spot them in meetings, plans, schedules, and relationships.
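The classic fix is just as portable as the pattern: impose a single global order on resource acquisition so a circular wait can never form. A minimal Python sketch (the `transfer` function and lock names are illustrative, not from the original essay):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Hazard: if one thread took lock_a then lock_b while another took
# lock_b then lock_a, each could hold one lock and wait forever for
# the other: a deadlock.
#
# Fix: every thread acquires the locks in the same fixed order, so no
# circular wait is possible.
def transfer(name, results):
    with lock_a:          # always acquired first
        with lock_b:      # always acquired second
            results.append(name)

results = []
threads = [threading.Thread(target=transfer, args=(n, results))
           for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # both threads complete: no deadlock
```

The same move works outside computing: when two parties keep blocking each other, agreeing on a fixed order of commitments often dissolves the standoff.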

Then comes the strange part. If you can take a deep understanding of a domain and embody it in a running system, to what extent is that system a partial copy of your mind at work in the world? Not a metaphorical copy, but an operational one: your compressed judgment made executable.

That is the spirit of the original question. Programming is one of the few crafts where a personal internal model can be externalized into a working mechanism that persists and acts without you.

Teilhard de Chardin and Vernor Vinge

Pierre Teilhard de Chardin framed evolution as a rise in complexity: matter to life, life to consciousness, and then consciousness interacting at scale to create a new layer of organization. Whatever you think of his language, the basic observation is hard to ignore. New layers emerge, and each layer uses the previous one as a platform.

Vernor Vinge, writing in a different register, proposed that technological intelligence could accelerate to a point where the future becomes hard to imagine from the present. Whether one treats this as prediction or provocation, it is useful as a lens: software compresses evolutionary timescales. It makes “new layers” plausible in decades rather than millennia.

You do not need to accept any grand destiny to make this practical. The local point stands. When you work in software you regularly see phase changes: from individuals to teams that “gel,” from manual practice to automation, and from fragmented systems to coherent architectures.

Society of Mind

Marvin Minsky argued that what we call mind may be a coalition of many smaller processes. Even if you do not accept every detail, the model is useful because it highlights an engineering truth: complex behavior can arise from simple agents plus coordination rules.

There is a further parallel with software design. Some systems rely on "hashing"-style shortcuts: reduce the world to a key, trigger an action, and hope collisions are rare. This can work in low-complexity environments. But as complexity rises, collisions become frequent, and the system starts producing arguments, blame, and confusion instead of decisions.
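The collision failure mode can be made concrete. Reduce a situation to a coarse key and two genuinely different situations can land on the same canned response; a model that keeps the distinguishing structure does not. The scenario and names below are invented for illustration:

```python
# Packer-style lookup: reduce the situation to one coarse key.
def coarse_key(event: dict) -> str:
    return "server_down" if not event["responding"] else "ok"

maintenance = {"responding": False, "in_maintenance": True}
crashed     = {"responding": False, "in_maintenance": False}

# Both events hash to the same key, so both trigger the same action,
# even though rebooting a box mid-maintenance is exactly wrong.
assert coarse_key(maintenance) == coarse_key(crashed)  # a collision

# Mapper-style model: keep the relationships, derive the action.
def decide(event: dict) -> str:
    if event["responding"]:
        return "no action"
    if event["in_maintenance"]:
        return "wait for maintenance to finish"
    return "reboot it"

assert decide(maintenance) != decide(crashed)  # the distinction survives
```

The key-based system is cheaper to build, which is why it wins in simple environments; the model-based system is the one that stays correct as the environment grows distinctions the keys never encoded.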

Object models are different. They try to keep structure “natural” by representing entities and their relationships directly. The shape of the model can grow and change, but retrieval stays intuitive because the relationships carry meaning.

This mirrors packing and mapping. Packing prefers keys and triggers. Mapping prefers coherent models. Past a certain complexity, coherent models are harder to build but dramatically cheaper to live with.

If there is a deeper lesson, it is this. The Information Age does not only demand new tools. It demands the use of the parts of the human mind that are good at modeling, not just obeying procedures.

Mapping and Mysticism

Early in this work, two problem-solving styles were contrasted. Packing accumulates “knowledge packets” that dictate appropriate action, without reworking the relationships between those packets. Mapping invests in building an internal model of reality, then uses deep structure to get leverage.

Many traditions that get labeled “mystical” can be re-read as attempts to describe the subjective experience of mapping in cultures that lacked a clean external validation method like “the program runs.” Their language is often allegorical because they are trying to point to internal states, not external doctrines.

Seen this way, the divide is less about religion and more about cognition. Some traditions emphasize self-observation, reduction of preconceptions, and “right action” based on real structure rather than social performance. Those are mapper skills, even when described in old vocabulary.

This perspective also reframes the “weirdo” problem. If a society primarily rewards packing, then anyone who insists on internal coherence and direct perception can look strange. They may be praised as gifted when they succeed, or condemned as deviant when they disrupt routines.

The practical point for Reciprocality is not to turn engineering into spirituality. It is to notice that the same internal moves appear across domains: attention, modeling, dropping false certainty, and acting from structure rather than ceremony.

Mapping and ADHD

This section is best read cautiously today. The original text was written before current research debates and diagnostic practice. The useful modern point is narrower: some environments punish exploratory cognition and then label the resulting friction as a personal defect.

When a child’s mind is naturally exploratory, rigid instruction can produce repeated conflict. The child may look “difficult” not because they cannot focus, but because the task offers no meaningful model to build. In that setting, compliance is rewarded over understanding, and boredom is misread as pathology.

Whatever your view on diagnosis and treatment, the mapping lens can still help. It asks: is the problem an attention deficit, or an environment that fails to offer real cognitive work? If the environment changes, does the behavior change? That is a testable question.

How The Approach Developed

This work emerged from watching quality systems collide with software reality. Formal process could prevent some spectacular failures, but it did not explain what great programmers actually do when they produce clean systems under pressure.

At first, the investigation tried to translate craft into management language. That repeatedly failed. The words were available, but they did not point to the experience. Over time, the focus shifted to describing the internal act: how programmers hold a model, revise it, and compress it into working code.

Looking inward made progress faster. The “artisan” metaphor arrived early because it fits what teams already know: beginners learn by doing real work under guidance, experienced practitioners refine judgment through practice, and mastery is demonstrated by building something whole, not by reciting procedures.

Several influences then helped: the search for rigorous descriptions, the recognition that language itself can hide a cognitive divide, and the recurring observation that some people simply cannot see what others mean by “understanding” because they treat all thinking as naming, scoring, or compliance.

In the end, the approach was not a new ideology. It was a cleaned-up description of a distinction that kept showing up across sites, teams, and failures: packing versus mapping. Once you see that divide, much of the workplace confusion becomes predictable.

Complexity Cosmology

Mappers repeatedly make high investments in understanding before they have proof it will pay off. Strangely, this often works. Why?

One answer is structural. Complex systems are usually built from simpler layers, so drawing the right boundaries often exposes “complexity cancellation” within those boundaries. What looks tangled becomes simple once the correct structure is recognized.

Another answer is directional. The universe seems friendly to the growth of organized complexity. From atoms to chemistry to life to minds, structure appears again and again. Software may be one more expression of that: complexity added deliberately, but still in alignment with a deep tendency of the world.

This is speculation, but it is disciplined speculation. It is offered as a lens for thinking, not a doctrine to believe.

The Prisoners’ Dilemma, Freeware and Trust

The Prisoners’ Dilemma is often presented as a trap where rational players defect because they cannot guarantee the other person will cooperate. Historically it was used to model high-stakes conflict, where first-strike advantage makes mistrust catastrophic.

Yet real humans sometimes avoid the trap, especially when repeated interaction, reputation, and shared understanding are in play. Under a mapping lens, that makes sense. Mappers can often recognize when another person is capable of seeing the full structure of the game and acting on it.
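The repeated-interaction point is easy to demonstrate. With standard payoffs (mutual cooperation 3 each, mutual defection 1 each, unilateral defection 5 against 0), two tit-for-tat players outscore two mutual defectors over any long run. This is a minimal simulation, not a reproduction of any particular tournament:

```python
# Payoff table: PAYOFF[(my_move, their_move)] -> (my_points, their_points).
# C = cooperate, D = defect; standard values T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_hist, their_hist):
    return "D"

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move.
    return their_hist[-1] if their_hist else "C"

def play(strat1, strat2, rounds=100):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h1, h2), strat2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + p1, s2 + p2
        h1.append(m1)
        h2.append(m2)
    return s1, s2

# Sustained cooperation beats sustained mutual defection 3:1.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```

The single-shot game rewards defection; the repeated game, where reputation carries forward, rewards players who can see the whole structure and act on it.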

Software also changes the economics. Copying software is not a zero-sum transfer. Both parties can possess the same artifact without reducing the other’s holdings. That opens a wider space of cooperation: shared standards, open tooling, public reference implementations, and ecosystems where leadership can outperform hoarding.

The practical message is not “everything should be free.” It is that the strategy space is larger than packer instincts assume. Trust, leverage, and shared artifacts can be rational in ways that older scarcity-driven models fail to predict.

Predeterminism

Thomas Kuhn used “paradigm” to describe the background theory a culture treats as reality. One historical paradigm was predeterminism: the belief that lives unfold according to a fixed plan, making effort feel pointless.

When that paradigm declined, people leaned into agency. Progress followed. But a subtler pattern remains: many people act as if outcomes are possible, yet treat understanding as impossible. They work, but they do not model.

The Information Age forces a change. Automation punishes shallow procedure-following and rewards real comprehension. It also gives immediate feedback. When you write code, the system reflects exactly what you told it, not what you meant. That pressure can pull people out of resigned uncertainty and into genuine understanding.

Originally written in the late 1990s and refreshed for publication in 2026. Modern companion pages for each section will expand the examples and update the technical references.