Cognitive Resonance Mapping
Humans as Information Processing Systems
1. Overview: Humans as Information Systems
In Cognitive Resonance Mapping, we treat humans as living information-processing systems embedded in larger living systems.
Our bodies take in signals (sensations, emotions, thoughts, social cues, digital inputs).
Our nervous system filters, amplifies, or blocks those signals.
Our behavior and communication become the outputs that shape families, institutions, and technologies.
When this flow of information is clear and adaptive, we experience:
physiological regulation
flexible thinking
capacity for empathy and collaboration
When it is blocked or distorted, we experience:
chronic stress and trauma patterns, including expectation-mismatch stress (EMS)
rigid beliefs and dominance hierarchies
breakdowns in trust and cooperation
This is not a metaphor; it is how modern neuroscience and systems science describe human functioning.
Polyvagal theory shows how the autonomic nervous system shifts between threat states and social engagement, shaping whether we perceive others as allies or dangers.
Predictive processing / active inference models describe the brain as a system constantly updating its best guesses about the world based on incoming signals and past experience (a minimal numeric sketch follows this list).
Mirror neuron and resonance research suggests that our brains are wired to reflect others’ states, creating powerful feedback loops in groups.
Complex systems science shows how local interactions can produce large-scale patterns such as cooperation, polarization, or collapse.
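To make the predictive-processing thread concrete, here is a minimal, hypothetical sketch: a single belief is nudged toward each incoming signal in proportion to the prediction error. The function name, learning rate, and numbers are illustrative teaching values, not a model taken from the cited literature.

```python
def update_belief(belief: float, observation: float, learning_rate: float = 0.3) -> float:
    """Shift the belief toward the observation, in proportion to the prediction error."""
    prediction_error = observation - belief
    return belief + learning_rate * prediction_error

belief = 0.0                          # prior "best guess" before any signals arrive
for signal in [1.0, 1.2, 0.8, 1.1]:   # incoming sensations
    belief = update_belief(belief, signal)
    print(f"observed {signal:.1f} -> updated belief {belief:.2f}")
```

The learning rate here plays a role loosely analogous to precision weighting: how much a new signal is trusted relative to the existing guess.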
Cognitive Resonance Mapping uses simple diagrams and models to let learners:
See their own cognitive–emotional patterns as states and transitions, not personal defects (see the state-transition sketch at the end of this overview).
Understand how somatic grounding changes the “mode” of their operating system.
Recognize how kindness and trust act as protocols that allow information to flow safely between people.
Explore how AI tools amplify whatever patterns people already carry, whether rigid dominance or resonant collaboration.
The goal is not to tell people what to believe, but to give them modeling languages for upgrading their internal operating system to handle the complexity of an AI-mediated world.
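As a taste of that modeling language, here is a minimal, hypothetical state-transition map in code. The mode and trigger names are illustrative teaching labels, not clinical categories; the point is that a pattern written as states and transitions can be inspected and changed, which a “personal defect” cannot.

```python
# Illustrative cognitive-emotional "modes" and the signals that move between them.
# All labels are hypothetical teaching examples, not a clinical model.
TRANSITIONS = {
    ("shutdown", "somatic_grounding"): "mobilized",
    ("mobilized", "somatic_grounding"): "socially_engaged",
    ("socially_engaged", "perceived_threat"): "mobilized",
    ("mobilized", "perceived_threat"): "shutdown",
}

def step(state: str, signal: str) -> str:
    """Return the next mode; if the signal doesn't move this state, stay put."""
    return TRANSITIONS.get((state, signal), state)

state = "mobilized"
for signal in ["somatic_grounding", "perceived_threat", "somatic_grounding"]:
    state = step(state, signal)
    print(f"{signal} -> now in {state}")
```

Running this shows, for example, that somatic_grounding moves mobilized into socially_engaged: grounding changes the “mode” of the operating system rather than arguing with its contents.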
2. Why Information-Flow Models Matter for AI Alignment
As AI systems become more autonomous and “agentic,” they stop being simple tools and start behaving more like participants in complex human systems. They make decisions, coordinate with other systems, and shape what information humans see and act on.
That means AI alignment is not only about what happens inside a model. It is increasingly about the flows of information between agents:
between humans and AI,
between multiple AI systems,
and between AI and the organizations or communities in which it is embedded.
If those information flows are one-way, opaque, or distorted, even well-intentioned systems can undermine trust, concentrate power, or amplify existing harms. If those flows are transparent, bidirectional, and accountable, AI can support healthier decision-making and more adaptive organizations.
This is where the holarchy model of information flow becomes useful.
Instead of imagining AI living at “the top” or “the bottom” of a hierarchy, a holarchy treats humans, teams, institutions, and AI agents as nested, interacting holons, each simultaneously a part and a whole. Information circulates in three directions (see the code sketch after this list):
within each holon (self-reflection, internal monitoring),
between holons at the same scale (peers, teams, collaborating systems),
and across scales (individual ↔ organization ↔ ecosystem).
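A minimal code sketch, under the assumption that a holon can be modeled as a node that both contains parts and belongs to a parent. The class and method names are hypothetical; the three print statements correspond to the three flow directions listed above.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Holon:
    """A node that is both a whole (it contains parts) and a part (it has a parent)."""
    name: str
    parent: Holon | None = None
    parts: list[Holon] = field(default_factory=list)

    def add(self, child: Holon) -> Holon:
        child.parent = self
        self.parts.append(child)
        return child

    def within(self, note: str) -> None:
        # Flow 1: within a holon (self-reflection, internal monitoring).
        print(f"[{self.name} -> {self.name}] {note}")

    def to_peer(self, peer: Holon, note: str) -> None:
        # Flow 2: between holons at the same scale (peers, teams).
        print(f"[{self.name} -> {peer.name}] {note}")

    def upward(self, note: str) -> None:
        # Flow 3: across scales (part -> whole).
        if self.parent is not None:
            print(f"[{self.name} -> {self.parent.name}] {note}")

org = Holon("organization")
team_a = org.add(Holon("team_a"))
team_b = org.add(Holon("team_b"))
agent = team_a.add(Holon("ai_agent"))

agent.within("logging my own decision trace")
team_a.to_peer(team_b, "sharing what the agent changed")
agent.upward("flagging an action for human review")
```

Notice that the ai_agent holon is neither “at the top” nor “at the bottom”: it monitors itself, shares with peers through its team, and reports upward, the same three flow directions at every scale.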
This pattern mirrors scale-free information flows in natural systems—from cells in a body to species in an ecosystem. The proposal here is not to idealize nature, but to say:
If we are going to integrate powerful agentic AI systems into human life, we need models of information flow that are at least as adaptive, distributed, and self-correcting as the living systems we depend on.
3. Historical Structures, Without Picking Sides
For much of recent history, many organizations—governments, companies, and even universities—have been structured around hierarchical, command-and-control models. These structures are not “wrong” in themselves; they evolved under conditions where:
information traveled slowly,
decisions were concentrated at the top,
and security was often framed in terms of defense against threats, including military conflict and economic competition.
Those same structures are now being asked to absorb:
real-time data streams,
global interdependence,
and increasingly autonomous AI systems that can act faster than human oversight.
This creates a tension that we frame, in trauma-informed terms, as Expectation-Mismatch Stress (EMS):
The old model is optimized for control, secrecy, and clear chains of command.
The emerging reality demands flexibility, transparency, and multi-directional feedback.
We do not need to vilify the past to recognize that these older structures can struggle to manage agentic AI in ways that are trustworthy, sustainable, and fair.
4. Why College Students Are Central to This Transition
Today’s college students are in a unique position:
They will be the ones asked to design, deploy, and maintain agentic AI systems in companies, institutions, and public agencies.
At the same time, they are rightly concerned about how these systems may automate away jobs, reshape professions, and alter the social contract.
An honest AI literacy program must acknowledge both realities:
Agentic AI is coming into their workplaces—often faster than governance frameworks can adapt.
Their own livelihoods, wellbeing, and integrity are part of the alignment problem, not a side issue.
By teaching students to think in terms of information flows and holarchic structures, we invite them into a different kind of design question:
How can we architect AI–human systems so that
information flows are transparent and accountable,
human judgment is supported, not sidelined,
and value creation is shared across levels—individual, organizational, and societal?
This is not about taking a political stance. It is about giving future engineers, designers, artists, and policymakers shared modeling languages to:
see where one-way, dominance-style structures break under complexity,
experiment with nested, scale-free architectures inspired by living systems,
and design AI-integrated organizations that support long-term collaboration instead of short-term extraction.
5. Key Scientific Threads Behind This Overview
Polyvagal Theory & Nervous System Regulation
Porges, S. W. The Polyvagal Theory: Neurophysiological Foundations of Emotions, Attachment, Communication, and Self-Regulation.
Ogden, P., Minton, K., & Pain, C. Trauma and the Body: A Sensorimotor Approach to Psychotherapy.
Predictive Processing / Active Inference (Brain as Inference Machine)
Friston, K. “The free-energy principle: a unified brain theory?” Nature Reviews Neuroscience (2010).
Clark, A. The Experience Machine: How Our Minds Predict and Shape Reality.
Mirror Neurons & Social Resonance
Rizzolatti, G., & Sinigaglia, C. Mirrors in the Brain: How Our Minds Share Actions and Emotions.
Gallese, V. “Embodied simulation: From neurons to phenomenal experience.”
Complex Systems & Multilevel Cooperation
Wilson, D. S. This View of Life: Completing the Darwinian Revolution.
Ostrom, E. Governing the Commons: The Evolution of Institutions for Collective Action.
Toroidal / Field-like Models of Consciousness
Meijer, D. K. F., and collaborators.