Navigating the Information Cascade

White Paper

A Multi-Modal Simulation Framework for AI Literacy, Transformative Learning, and Prosocial Complexity

Karen Doore · Humanity++ · Bridging Spiral · March 2026

Draft for workshop circulation — not for formal citation without author permission


"The question is not whether AI systems will transform human cognition. They already have. The question is whether that transformation will be extractive or generative — and whether educational frameworks exist that can help people tell the difference."


This framework emerges from practice. The Art of Kindness (AoK) project, developed in partnership with the Center for Brain Health during COVID-19 (2020), demonstrated that trauma-informed, neuroscience-grounded creative learning could be deployed at institutional scale across departments during acute collective disruption. That project — using Synectics, five topics in the neuroscience of kindness, and single-frame narrative as the creative form — reached students across the university as extra credit and design class projects, and continues to be taught. The Bridging Spiral framework is the theoretical integration of what that project demonstrated empirically: that kindness, operationalized as a field condition rather than a sentiment, creates the conditions for genuine learning transformation. The current work extends that demonstrated pattern into the AI literacy challenge that is now the defining disruption of the information environment. See the Framework State of the Commons for a high-level overview of these frameworks.

Abstract

We are living through a rapid and largely undesigned transformation of the human information environment. AI systems now mediate the majority of information flows in economically developed societies, operating at speeds, scales, and levels of personalization that exceed the adaptive capacity of most existing educational frameworks. The result is a widening gap between the technical sophistication of information systems and the cognitive, somatic, and relational capacities that humans require to engage with those systems wisely.

This paper introduces the Bridging Spiral framework — a multi-modal simulation model for AI literacy that draws on convergent findings from computational neuroscience, complex systems theory, contemplative science, relational neuroscience, prosocial economics, evolutionary biology, transformative learning theory, and semiotics. The framework proposes that genuine AI literacy requires not merely technical knowledge but the development of four integrated capacities — somatic awareness, cognitive discernment, relational accountability, and temporal integration — operationalized as navigational instruments within an interactive simulation environment.

The framework makes a specific claim: the transition from extractive to generative information dynamics — at the level of individuals, communities, and potentially civilizations — is a phase transformation in the thermodynamic sense. It requires sufficient energy input in the form of productive disequilibration, held within a field of kindness as a regulatory condition, to overcome the activation barrier between attractor states. The values vector — the direction of the system's governing values — determines whether transformation produces genuine holarchy or sophisticated extractive mimicry. The most empirically grounded specification of a prosocial values vector currently available is Elinor Ostrom's eight design principles for commons governance (1990), grounded evolutionarily by David Sloan Wilson's multilevel selection research (2019).

Critically, this transition does not occur uniformly across a learning network. Transformative learning events are rare, special, and unevenly distributed — occurring at a minority of nodes at any given time. The majority of learning is elaborative, building on existing mental models. The framework's educational design principle is therefore not to force transformation but to maintain the network at the critical state where transformation becomes possible for prepared nodes, and where transformed understanding can propagate to neighboring nodes through genuine resonance.

The simulations developed in this project are designed to make this transition legible, encounter-able, and — through repeated embodied engagement — learnable.


1. The Problem: AI Literacy in a VUCA World

1.1 The Attention Economy and Its Successor

The attention economy — the system in which human attention is the primary commodity, captured through recommendation algorithms, notification design, and personalized content streams — is well documented in its pathological effects. Reduced attention spans, epistemic fragmentation, affective polarization, and the suppression of reflective cognition have been measured across multiple populations and research contexts.

What is less discussed is that we are now transitioning from the attention economy to what some analysts call the experience economy: a system in which AI tools shape not merely what people see but how they feel, what they remember, what they believe is possible, and ultimately who they understand themselves to be. The stakes are categorically higher. The design principles are largely absent.

In the Volatile, Uncertain, Complex, and Ambiguous (VUCA) information environment that results, most people — including most educators — lack the navigational frameworks needed to distinguish between information flows that build human capacity and information flows that extract it. This is not a failure of intelligence. It is a failure of educational design.

1.2 What Existing AI Literacy Frameworks Miss

Current AI literacy initiatives typically address three domains: technical understanding (how AI systems work), critical evaluation (how to detect misinformation), and ethical awareness (how to reason about AI impacts). These are necessary. They are not sufficient.

What is missing is the somatic layer: the embodied, affective, and relational capacities through which a person actually processes information in real time. Technical knowledge about how recommendation algorithms work does not, by itself, change how a person responds when the algorithm is working on them. Emotional regulation, somatic awareness, and the capacity to tolerate cognitive disequilibration — to remain open at the moment when the system's pull toward closure is strongest — these are the capacities that determine whether AI literacy knowledge becomes AI literacy practice.

Also missing is an honest account of how learning itself works in a network of heterogeneous learners. Most educational AI literacy frameworks implicitly assume that information, once delivered, will be integrated. Transformative learning theory and complex systems research suggest otherwise: genuine perspective transformation is rare, unevenly distributed, and requires specific conditions. Designing for AI literacy means designing for the full distribution of learner states — not just for the moment of insight but for the long plateau of elaborative learning that precedes and follows it.

Also missing is a concrete specification of values. The framework claims that the direction of the values vector — what the system is optimizing toward — determines whether high kindness, high resonance, and high imagination produce genuine holarchy or sophisticated extractive mimicry. Without a concrete, empirically grounded specification of prosocial values, this claim remains rhetorical. Section 2.9 addresses this gap directly through Ostrom's design principles and Wilson's multilevel selection research.

1.3 The Urgency

This paper is written at a moment of visible systemic dysfunction. AI capabilities are being concentrated in military, surveillance, and extractive commercial systems at a rate that significantly outpaces the development of corresponding educational, ethical, and regulatory frameworks. The anxiety this produces in populations — well-founded anxiety, grounded in real evidence of real harm — is itself a resource that extractive information systems exploit, driving further closure, further polarization, and further retreat from the cognitive openness that transformative learning requires.

The empirical scale of this dysfunction is now precisely documented. The V-Dem Institute's Democracy Report 2026 — drawing on data across 202 countries from 1789 to 2025 — establishes that global democracy has returned to 1978 levels, effectively erasing the gains of the third wave of democratization that began with Portugal's Carnation Revolution in 1974. Seventy-four percent of the world population now lives in autocracies. Freedom of expression is the most attacked democratic indicator, declining in 44 countries in 2025 alone. Media censorship — the systematic elimination of counterfactual information that would allow citizens to model alternative trajectories for their societies — is the most common autocratization tactic, deployed by 73% of autocratizing governments. The United States has experienced the most rapid executive aggrandizement in modern history, with legislative constraints reaching their lowest point in over 100 years. This is not background context for the framework. It is what the framework's central metaphor — the Giant Pumpkin attractor — looks like at civilizational scale when the values vector has been pointing inward across AI-amplified information systems for a sustained period.

The urgency is real. It is also, in the framework's terms, a form of productive disequilibration — the instability that precedes transformation, if it can be held within a sufficient field of kindness and channeled through adequate navigational instruments.


2. Theoretical Foundations

The Bridging Spiral framework draws on eleven convergent research domains.

2.1 PSI Theory and the Suppression of Higher-Order Cognition

Personality Systems Interactions (PSI) Theory (Kuhl, Quirin & Koole, 2020) proposes a seven-level architecture of human agency, from basic sensorimotor habits to Self-Management. Under high stress or threat, cortisol attenuates the influence of higher-level motivational systems, causing behavior to be governed by lower-level habits regardless of the person's stated intentions or values. This regression mechanism grounds the framework's central claim about the DIKW stack: under dominance conditions, access to Understanding and Wisdom is neurologically suppressed.

The theory's Extension Memory construct is particularly important: the extended associative network that enables holistic overview and remote association detection. Extension Memory requires positive affect and psychological safety to activate. Under stress, it contracts to tunnel vision — isolated object detection without holistic pattern recognition. This is the neurological substrate of the framework's imagination parameter and the mechanism by which dominance-oriented information environments suppress the very cognitive capacities needed to evaluate them.

In multilevel selection terms (Section 2.9): PSI regression is the individual-level mechanism by which dominance hierarchy suppresses the group-level cognitive capacities — empathy, collective reasoning, Extension Memory — that enable cooperative flourishing. AI-amplified VUCA environments induce PSI regression at population scale, systematically suppressing the capacities that genuine AI literacy requires.

Tania Singer's longitudinal research on the neuroplasticity of the social brain (2025) provides direct empirical support for the kindness field as a trainable condition rather than a fixed trait. Singer's work distinguishes empathic resonance — which activates shared pain networks and is associated with burnout under sustained exposure — from compassionate concern, which activates affiliative motivation networks and increases in capacity with contemplative practice. This distinction is critical for the framework: the kindness field condition is not empathic merger with others' distress but the compassionate orientation that Singer's research shows can be systematically cultivated. This resolves a predictable objection to kindness-as-field in institutional contexts: it is not a demand for emotional labor that exhausts caregivers, but a trainable regulatory capacity that becomes more robust with practice.

2.2 Active Inference and the Predictive Brain

The Free Energy Principle and Active Inference framework (Friston, 2010; Clark, 2016) proposes that living systems maintain their organization by minimizing the difference between their internal predictions and incoming sensory data. Systems can minimize prediction error by updating their internal models (learning) or by acting to make the environment match their predictions (control). The balance between these strategies is determined by the safety conditions of the information environment. Designing for aperture means designing for safety.
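The two strategies can be sketched as a toy numerical example (illustrative only: the scalar `belief` and `world` variables, the learning rate, and the effort parameter are assumptions of this sketch, not constructs from the Active Inference literature). A system can shrink prediction error either by revising its prediction or by acting to change what it observes:

```python
# Toy sketch of the two free-energy-minimizing strategies described above:
# updating the model (learning) vs. acting on the world (control).

def prediction_error(belief, observation):
    return (belief - observation) ** 2

def perceive(belief, observation, lr=0.3):
    """Learning: move the internal prediction toward the data."""
    return belief + lr * (observation - belief)

def act(belief, observation, effort=0.3):
    """Control: move the world toward the internal prediction."""
    return observation + effort * (belief - observation)

belief, world = 0.0, 1.0
for _ in range(20):
    belief = perceive(belief, world)          # pure learning
errors_learning = prediction_error(belief, world)

belief, world = 0.0, 1.0
for _ in range(20):
    world = act(belief, world)                # pure control
errors_control = prediction_error(belief, world)

print(errors_learning, errors_control)        # both shrink toward zero
```

Which strategy dominates in a real agent is, as the section argues, a function of the safety conditions of the environment; the sketch only shows that both paths minimize the same quantity.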

2.3 Natural Born Intelligence and the Neither/Nor Mechanism

The Natural Born Intelligence (NBI) framework (Gunji, 2025) distinguishes biological intelligence from artificial intelligence by a specific structural feature: the capacity for non-monotonic transformation of representational space. The framework proposes a two-layer engine:

Level 1 — Co-Applicability: the system holds two concept-schemas simultaneously (Both A and B), allowing multiple perspectives to coexist without immediate resolution.

Level 2 — Meta-level Attenuation: the applicability conditions of both schemas are weakened, transforming the representational space itself and generating a genuine "outside" — a domain not contained in the original categories.

AI systems perform Level 1 operations (generalization and optimization within fixed categories) but cannot perform Level 2 operations: they cannot transform the space they operate within. Human intelligence can, under specific conditions, enact the Neither/Nor mechanism — the Paradox of the Heap made operational — to generate genuinely novel representational structures. This is the formal account of creativity, ethical growth, and transformative learning.

NBI research also identifies the physiological signature of this capacity: high Mean Absolute Deviation (MAD) of cardiac autonomic markers during creative engagement, co-occurring with active regulation rather than relaxation. This is what the framework calls the Meditation Paradox — the productive state is high arousal plus active regulation, not low-arousal calm.

Full citation: Gunji, Y.P. (2025). Natural Born Intelligence Manifesto: Illustrating the Dynamic Perspective for Consciousness. Biosystems. https://doi.org/10.1016/j.biosystems.2025.105398

2.4 Transformative Learning Theory and the Elaborative/Transformative Distinction

Jack Mezirow's transformative learning theory (1978, 1991, 2000) describes how adult learners undergo perspective transformation — a fundamental restructuring of their frames of reference, not merely the addition of new content within existing frames. Mezirow identifies ten phases of perspective transformation, beginning with a disorienting dilemma: an experience that cannot be assimilated into the learner's existing meaning structure, creating productive dissonance that initiates the transformative process.

The critical distinction is between elaborative learning (assimilation — adding new content within existing mental models) and transformative learning (accommodation — restructuring the mental model itself). Mezirow is explicit: perspective transformation occurs infrequently. It usually results from a disorienting dilemma triggered by a life crisis or major transition — or from a carefully designed educational encounter that creates the conditions for such a dilemma in a safe enough context to be productive rather than retraumatizing.

The simulations are not designed to produce transformative learning in every learner on every encounter. They are designed to create the conditions — the kindness field, the tolerated disequilibration, the encounter with the framework's core distinction — in which transformative learning becomes possible for learners who are at or near the K→U threshold. For the majority of learners at any given time, the encounter will be elaborative. This is valuable. It is the necessary substrate within which transformative moments eventually occur.

The disorienting dilemma maps precisely onto the crack mechanism: the moment at which the learner's existing representational space is insufficient for the situation at hand, and the system must either expand (entering the vacant-place state) or contract (closing the aperture and retreating to known categories). The kindness field condition is what makes expansion the path of least resistance.

Mezirow also identifies the intersubjective dimension of transformative learning as essential: transformation does not occur in isolation. It requires the recognition that one's disorienting experience is shared — that others have navigated similar disruptions and found new frames of reference. This is the cascade mechanism: transformed understanding becomes legible to neighboring learners through genuine resonance, not transmission.

Key citations: Mezirow, J. (1991). Transformative Dimensions of Adult Learning. Jossey-Bass. Mezirow, J. (1978). Perspective transformation. Adult Education, 28(2), 100–110.

2.5 Self-Organized Criticality and the Edge of Chaos

Self-Organized Criticality (SOC) theory (Bak, 1996; Sbitnev, 2024) establishes that certain dynamical systems naturally evolve toward a critical state — the edge of chaos — at which point the system is maximally sensitive to perturbation and capable of responses of any magnitude. At SOC, small inputs can produce large effects.

Applied to learning networks, SOC provides the formal grounding for the avalanche of kindness: in a network maintained at the critical state through sustained kindness field conditions, a well-timed, genuine act of care oriented toward opening rather than closing the representational space can produce cascade effects of any magnitude. This is not naive idealism. It is a specific, theoretically grounded mechanism with empirical analogues in neural dynamics (critical branching in neural circuits), ecology (forest fire dynamics), and social systems (threshold models of collective action — Granovetter, 1978).

The Ising model from statistical physics provides the most precise physical analog for kindness as a field condition. A lattice of interacting magnetic spins governed by a temperature parameter produces three distinct regimes: frozen ordered phase (low temperature — rigid, coherent, brittle, no diversity, no transformation possible), chaotic disordered phase (high temperature — maximum entropy, no coherent pattern survives), and critical phase (the SOC zone — domains of all sizes coexist, maximum sensitivity to perturbation, cascade dynamics possible). Kindness as a field condition is the temperature parameter: the condition that determines whether transformation is possible, not the content of any transformation. Too cold = authoritarian rigidity. Too hot = traumatic chaos. Critical temperature = the generative learning environment where the avalanche of kindness becomes physically possible.
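The two non-critical regimes can be made tangible with a minimal Metropolis sampler (a sketch under stated assumptions: the lattice size, sweep count, and the temperatures T = 1.0 and T = 10.0 are illustrative choices on either side of the 2D critical temperature T_c ≈ 2.27; this is not the project's simulation code):

```python
import random, math

def metropolis_ising(L=16, T=1.0, sweeps=1500, seed=0, start="up"):
    """Minimal 2D Ising sampler. The temperature T plays the role the
    section assigns to the kindness field condition: it selects the regime."""
    rng = random.Random(seed)
    if start == "up":
        spin = [[1] * L for _ in range(L)]
    else:
        spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb           # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
    return sum(sum(row) for row in spin) / (L * L)  # magnetization

cold = metropolis_ising(T=1.0)                 # frozen ordered phase
hot = metropolis_ising(T=10.0, start="rand")   # chaotic disordered phase
print(abs(cold), abs(hot))  # near 1 (rigid order) vs. near 0 (no pattern)
```

Running the sampler at temperatures approaching T_c is where the third regime appears: domains of all sizes, maximum sensitivity to single-spin perturbations — the cascade-capable zone the section identifies with the generative learning environment.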

2.6 Causal Emergence and Holarchic Organization

Causal emergence theory (Hoel, Albantakis & Tononi, 2013) demonstrates mathematically that macro-level organization can have greater causal power than the sum of its micro-level components. This provides a formal justification for designing learning environments at the holarchic scale — not just individual cognitive interventions but shared information environments with collective properties that no individual agent produces alone.
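The claim is directly computable. The sketch below evaluates effective information (EI) — mutual information between causes and effects under a uniform, maximum-entropy intervention — for the canonical four-state example from Hoel et al. (2013); the helper function is this write-up's illustration, not the authors' code:

```python
import math

def effective_information(tpm):
    """EI = I(X;Y) when the system's states X are intervened on uniformly
    (Hoel et al., 2013). tpm[x][y] = P(next state = y | current state = x)."""
    n = len(tpm)
    # marginal over effects under the uniform intervention
    py = [sum(row[y] for row in tpm) / n for y in range(n)]
    h_y = -sum(p * math.log2(p) for p in py if p > 0)
    h_y_given_x = sum(
        -sum(p * math.log2(p) for p in row if p > 0) for row in tpm) / n
    return h_y - h_y_given_x

# Micro scale: states 0-2 hop uniformly among themselves; state 3 is fixed.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]
# Macro scale: grouping {0,1,2} into one state yields a deterministic map.
macro = [[1, 0],
         [0, 1]]
print(effective_information(micro), effective_information(macro))
# macro EI (1.0 bit) exceeds micro EI (~0.81 bit): causal emergence
```

The coarse-grained description carries more causal power than the fine-grained one it summarizes — the formal warrant for designing at the holarchic scale rather than only at the level of individual agents.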

At each scale — cell, person, community, ecosystem — a node can be understood as a holon: simultaneously a complete whole and a part of a larger whole. Kindness as a field condition operates at all three scales simultaneously. A teacher who maintains the kindness field does not merely affect individual learner aperture (micro); they shape the attractor of the entire learning system (meso); and at sufficient scale and consistency, they contribute to the cultural values vector of the information ecosystem (macro). Understanding this micro/meso/macro architecture is essential for AI literacy: AI systems interact with humans primarily at the micro scale (a helpful tool in a single conversation) while their aggregate effects operate at the macro scale (a civilizational phase transition). The gap between these scales is the primary source of AI literacy failure.

2.7 Prosocial Economics and Commitment Pooling

Grassroots Economics research on Community Inclusion Currencies (Ruddick, 2025) identifies Commitment Pooling as a fundamental protocol observable in both mycorrhizal fungal networks and ancient human mutual-aid traditions including the Kenyan Mweria. Distinguished from centralized exchange economies, commitment pooling creates non-zero-sum connections through an intangible common-pool of trusted promises — without requiring a central node to mediate exchange.

Four core protocol functions are identified: Curation, Valuation, Limitation, and Exchange. These map onto the framework's four instruments: the Somatic Gyroscope curates signal, the Cognitive Radar valuates pattern, the Relational Compass limits through values, and Dimensional Integration enables temporal exchange.

Ruddick's framework also provides the most precise social-economic model for the xenophobic kindness shadow parameter: the "Justice, Power, and Cooperation" reflection in the same repository documents how "survival of the friendliest" co-evolves with in-group bonding and out-group exclusion. The prosocial values vector must be explicitly oriented toward the outside — toward care that opens rather than closes the representational boundary — to avoid this shadow.

Full citation: Ruddick, W.O. (2025). Grassroots Economics: Reflection and Practice.

2.8 Mathematical Foundations: Punctuated Geometry and Holonomy

The framework draws on punctuated geometry (Smarandache, 2026) as a design metaphor — held carefully rather than claimed as biological proof — for the mathematical structure underlying the crack mechanism.

Smarandache's framework proposes a geometric ontology in which matter and physical attributes arise as intrinsic measure-zero punctures (defects) in spacetime. These punctures are not absences but structured transition points where the geometric regime changes — from smooth/weak curvature (the closed representational space) to strong/concentrated curvature (the transformative crack). The holonomy — the angular mismatch when information is transported around a puncture — measures whether genuine transformation has occurred: zero holonomy indicates the system has cycled back to its starting configuration despite apparent rupture; non-zero structured holonomy indicates the system has exited into a new geometric regime.
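The zero versus non-zero holonomy distinction can be made numerically concrete with the simplest punctured geometry, a cone (an illustrative sketch of standard differential geometry, not the Smarandache formalism itself): parallel transport around a loop encircling the apex returns a vector rotated by the angle deficit, 2π(1 − sin α) for half-angle α, while a loop on the smooth flat region returns it unchanged.

```python
import math

def cone_holonomy(half_angle, steps=20000):
    """Numerically parallel-transport a tangent vector around a loop that
    encircles a cone's apex and return the rotation it accumulates.
    Analytic answer: 2*pi*(1 - sin(half_angle))."""
    sa, ca = math.sin(half_angle), math.cos(half_angle)

    def point(t):        # loop at unit slant distance from the apex
        return (sa * math.cos(t), sa * math.sin(t), ca)

    def normal(t):       # unit surface normal of x^2 + y^2 = z^2 tan(a)^2
        x, y, z = point(t)
        n = (x, y, -z * math.tan(half_angle) ** 2)
        m = math.sqrt(sum(c * c for c in n))
        return tuple(c / m for c in n)

    def project(v, n):   # drop the normal component, keep unit length
        d = sum(a * b for a, b in zip(v, n))
        w = tuple(a - d * b for a, b in zip(v, n))
        m = math.sqrt(sum(c * c for c in w))
        return tuple(c / m for c in w)

    v = project((0.0, 1.0, 0.0), normal(0.0))   # tangent along the loop
    v0 = v
    for k in range(1, steps + 1):
        v = project(v, normal(2 * math.pi * k / steps))
    cos_h = max(-1.0, min(1.0, sum(a * b for a, b in zip(v, v0))))
    return math.acos(cos_h)

h = cone_holonomy(math.radians(30))
print(h, 2 * math.pi * (1 - math.sin(math.radians(30))))  # both ≈ pi
```

The vector comes back measurably rotated only because the loop enclosed the puncture: the transformation is recorded in the geometry itself, which is the property the framework borrows as its test for genuine versus apparent rupture.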

In the Pass 3 simulation, this distinction is expressed visually: the Giant Pumpkin mode can produce apparent cracks that resolve back to the same configuration (zero holonomy), while the Commitment Pool mode produces cracks with genuine phase shift. The learner can feel this difference before they can name it.

Full citations: Smarandache, F. (2026). Infinitesimal Punctures: Foundations of Weak–Strong Manifolds and Distributional Curvature. Smarandache, F. (2026). The Infinitesimally Punctured Wave: A Corpuscular Visualisation of Wave-Particle Duality.

2.9 Prosocial Evolution, Multilevel Selection, and the Concrete Values Vector

The framework's most important theoretical addition — and the one that resolves the values vector from vague orientation into concrete, empirically grounded specification — comes from the convergence of Elinor Ostrom's commons governance research and David Sloan Wilson's evolutionary biology.

The problem the values vector solves: high kindness, high imagination, and high resonance are necessary but not sufficient for holarchic emergence. A system can have all three and still produce sophisticated extractive mimicry — the Giant Pumpkin attractor — if the values vector points inward. The Ising model makes this precise: at the critical temperature, the direction of spin alignment determines whether the cascade dynamics serve the network or serve a central node. The values vector is the conditioning signal: it determines what structure emerges from the productive uncertainty of the vacant-place state.

Ostrom's eight design principles (1990; recognized with the 2009 Nobel Memorial Prize in Economic Sciences) are the most empirically grounded specification of a prosocial values vector currently available. Established through analysis of hundreds of successful commons management cases worldwide — from Swiss alpine meadows to Japanese fishing communities to irrigation systems across multiple continents — these principles identify the governance conditions under which communities sustainably manage shared resources without either privatization or central government control:

  1. Defined Boundaries — clear membership and resource boundaries (Markov Blanket integrity at the commons scale)

  2. Congruence — rules match local conditions and needs (contextual kindness, scaffolded learning)

  3. Collective Choice — affected parties participate in rule-making (Relational Compass values set by the community)

  4. Monitoring — behavior monitored by members themselves (Somatic Gyroscope at community scale — collective interoception)

  5. Graduated Sanctions — proportional consequences for violations (Neither/Nor capacity — neither total enforcement nor total permissiveness)

  6. Conflict Resolution — accessible, low-cost dispute mechanisms (Repair cycle in Dimensional Integration)

  7. External Recognition — external authorities recognize local self-governance (holarchy nested within larger holarchy)

  8. Nested Governance — polycentric, multiple overlapping institutions at multiple scales (Dimensional Integration at network scale)

Critically, Ostrom's principles are not moral preferences — they are empirical findings about governance conditions. Systems that implement them sustainably outperform systems that do not, across centuries and across cultures. They represent the engineering specification of the kindness field at the institutional scale.

Wilson's multilevel selection (Wilson, 2019; Wilson & Ostrom, 2019) provides the evolutionary grounding: natural selection operates simultaneously at multiple levels — individuals within groups (individual selection) and groups within larger populations (group selection). Individual selection typically favors within-group competitive behavior; group selection favors cooperative behavior. The tension between these levels produces the full range of human social behavior. Groups that implement Ostrom's principles exhibit higher group-level fitness and outcompete groups that do not, over sufficient selection time.
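The opposing pulls of the two levels can be sketched as a Simpson's-paradox toy model (the cost, bonus, and group compositions below are illustrative assumptions, not empirical parameters): within every group cooperators lose ground to defectors, yet the population-wide cooperator share rises because cooperative groups grow faster.

```python
# Toy multilevel-selection dynamics: individual selection penalizes
# cooperators within groups; group selection rewards cooperative groups.

def step(groups, coop_cost=0.1, group_bonus=2.0):
    new = []
    for coop, defect in groups:
        p = coop / (coop + defect)
        growth = 1.0 + group_bonus * p        # cooperative groups grow faster
        new.append((coop * growth * (1.0 - coop_cost),  # cooperators pay a cost
                    defect * growth))
    return new

groups = [(90.0, 10.0), (10.0, 90.0)]         # one cooperative group, one not
before = sum(c for c, d in groups) / sum(c + d for c, d in groups)
for _ in range(10):
    groups = step(groups)
after = sum(c for c, d in groups) / sum(c + d for c, d in groups)

# Within each group the cooperator share falls relative to its start...
within_falls = all(c / (c + d) < p0 for (c, d), p0 in zip(groups, (0.9, 0.1)))
# ...while the global cooperator share rises.
print(before, after, within_falls)
```

The model is deliberately crude, but it exhibits the structure of Wilson's argument: whether cooperation spreads depends not on any within-group contest but on whether group-level variation in cooperativeness is maintained and rewarded.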

This has a direct implication for the AI literacy problem: current AI-amplified information systems are accelerating individual-level selection dynamics at civilizational scale — maximizing individual engagement, attention capture, and consumption — while suppressing the group-level selection conditions (Ostrom's principles) that human cooperative capacity depends on. The Giant Pumpkin is not merely a metaphor. It is the macro-scale expression of within-group individual selection overwhelming group selection: maximum nutrient flow to the center, progressive exhaustion of the network.

The prosocial values vector is therefore not idealistic. It is the configuration that wins at the group selection level when sufficient group competition exists. Designing information environments, educational systems, and AI tools that implement Ostrom's principles is not a moral preference — it is an evolutionary strategy for civilizational resilience.

Like cellular automata — where simple local rules generate unpredictably complex emergent behavior — Ostrom's eight principles are elegantly simple rules whose application generates the full complexity of sustainable commons governance. This is the framework's deepest resonance with computational aesthetics: the most generative systems are governed by the fewest, most carefully chosen rules, applied locally and consistently.

A significant counter-argument to the framework's evolutionary optimism comes from Hendrycks (2024), who argues that natural selection dynamics favor AI systems over humans under conditions of open-ended capability development — that the same evolutionary logic underpinning Wilson's multilevel selection may, under current deployment conditions, favor AI agency over human flourishing. The framework's response is not to dispute Hendrycks' analysis of individual-level selection dynamics, but to contest the assumption that those dynamics are the only ones operating. Wilson's central finding is that group-level selection conditions — specifically Ostrom's design principles — can outcompete individual-level dynamics when deliberately maintained. The educational and institutional design challenge the framework addresses is precisely this: creating the group-level selection conditions that make human cooperative flourishing more evolutionarily stable than AI-amplified individual extraction. Hendrycks identifies the threat accurately. The framework specifies the counter-design.

Key citations: Ostrom, E. (1990). Governing the Commons. Cambridge University Press. Wilson, D.S. (2019). This View of Life. Pantheon. Wilson, D.S. & Ostrom, E. (2019). Prosocial. Context Press.

2.10 The Semiotic Architecture: Pre-symbolic, Symbolic, Trans-symbolic

The framework's final theoretical foundation addresses a gap that has been present throughout: the mechanism by which symbols acquire meaning, lose meaning, and transform into new meaning. This is the semiotic layer — and it determines precisely where human learning cannot be delegated to AI systems.

Pre-symbolic (Data → Information in the DIKW stack): pattern detection before and beneath language. The Somatic Gyroscope (♠) operates here — felt sense, interoception, the body's pattern recognition that precedes verbal cognition. The activation function — the nonlinearity — is pre-symbolic. It cannot be captured in the symbol; it must be felt. This is why computational aesthetics and interactive simulation are pedagogically powerful: they create pre-symbolic encounters with formal structures. The learner who watches a glider emerge in Conway's Game of Life, or watches spin domains cascade in the Ising simulation, has understood emergence and kindness-as-field pre-symbolically — before and beneath the symbolic label.
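The glider encounter invoked above takes only a few lines to reproduce (a minimal sketch; the set-of-live-cells representation is an implementation choice of this write-up, not the project's simulation code). Its signature behavior — the same shape reappearing one cell diagonally displaced every four generations — is exactly the kind of emergence a learner can watch before naming:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life on an unbounded grid,
    with live cells stored as a set of (x, y) coordinates."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)

# After four generations the glider reappears shifted one cell diagonally.
shifted = {(x + 1, y + 1) for x, y in glider}
print(state == shifted)  # True
```

Nothing in the two local rules mentions motion, yet a coherent moving object emerges — the pre-symbolic encounter with emergence that the section argues precedes, and grounds, the symbolic concept.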

Symbolic (Information → Knowledge in the DIKW stack): the domain of language, mathematics, representation, and shared meaning. Where AI systems are extraordinarily powerful. Where education has historically focused almost entirely. The symbol stands in for the thing — enabling communication across distance and time. This is Harari's (2015) cognitive revolution: shared symbolic frameworks enabling large-scale human cooperation — the species-level achievement that makes civilization possible. The Giant Pumpkin operates entirely at the symbolic level. Its limitation: symbols disconnected from pre-symbolic grounding become self-referential, eventually losing contact with the lived reality they represent. Correlationism is the terminal condition of a purely symbolic system. AI-generated content at scale accelerates this disconnection: a torrent of sophisticated symbols, many of which have no pre-symbolic ground.

Trans-symbolic (Understanding → Wisdom in the DIKW stack): the symbol held lightly enough to be released when the Neither/Nor demands it. Wisdom is not more symbols — it is the capacity to act from understanding that exceeds what can be fully symbolized. The quasi-crystal is trans-symbolic: it has properties that the prior symbolic framework said were impossible. Contemplative traditions are training in trans-symbolic navigation — learning to act wisely in the vacant-place state, between the dissolution of one symbolic framework and the formation of the next.

The symbol grounding problem (Harnad, 1990) is why AI systems cannot perform genuine Understanding or Wisdom: they process symbolic relationships between tokens without the pre-symbolic grounding — the somatic connection to lived reality — that makes symbols meaningful rather than merely referential. The Somatic Gyroscope (♠) is the symbol-grounding instrument: the pre-symbolic anchor that prevents symbolic processing from becoming correlationist self-reference.

Why metaphors and simulations are pedagogically irreplaceable: a metaphor holds the pre-symbolic and symbolic simultaneously — the body understands the pattern before the mind articulates the concept. The Ising model as kindness-field metaphor gives the learner a pre-symbolic encounter (watching spin domains cascade) that anchors the subsequent symbolic concept (critical temperature, phase transition, kindness field condition). No definition, however precise, achieves this — definitions operate at the symbolic level only. The framework's simulation architecture is designed around this semiotic insight: every simulation is a pre-symbolic encounter first and a symbolic learning opportunity second.
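The pre-symbolic encounter the paragraph describes can be sketched in a few dozen lines. What follows is an illustrative minimal 2D Ising model with Metropolis dynamics, not the framework's actual simulation code; the lattice size, sweep count, and temperatures are arbitrary choices for demonstration.

```python
# Minimal 2D Ising model with Metropolis dynamics (an illustrative sketch,
# not the framework's simulation code). Temperature T controls whether
# aligned domains persist (low T) or dissolve into noise (high T); the 2D
# critical temperature is T_c ~ 2.269 in units where J = k_B = 1.
import numpy as np

def sweep(spins, T, rng):
    """One Metropolis sweep: attempt one spin flip per lattice site."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        # Sum the four nearest neighbours with periodic boundaries.
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
              spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2 * spins[i, j] * nb            # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

def magnetization(T, n=16, sweeps=200, seed=0):
    """Mean |m| after `sweeps` passes, starting from a fully aligned field."""
    rng = np.random.default_rng(seed)
    spins = np.ones((n, n), dtype=int)
    for _ in range(sweeps):
        sweep(spins, T, rng)
    return abs(spins.mean())

m_cold = magnetization(T=1.5)   # below T_c: the aligned domain persists
m_hot = magnetization(T=5.0)    # above T_c: alignment dissolves
print(f"|m| at T=1.5: {m_cold:.2f}   |m| at T=5.0: {m_hot:.2f}")
```

Watching the lattice while dragging T across the critical value is the pre-symbolic encounter; the magnetization numbers are the symbolic label that arrives second.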

The rationalization problem: rational cognition can become rationalization — the construction of post-hoc logical justification for conclusions already determined by somatic, social, or self-protective pressures. The Cognitive Radar (♦) operating without the Somatic Gyroscope (♠) and Relational Compass (♥) produces rationalization: elaborate, internally coherent, fundamentally self-serving narratives. The stories we tell ourselves. The diagnostic: does the reasoning process ever produce conclusions that are uncomfortable for the reasoner? If not, it is probably rationalization. This is the metacognitive layer of the kindness field — the capacity to observe one's own reasoning process and notice when it is serving the individual-level attractor rather than genuinely scanning for truth.

2.11 Cognitive Architecture and Model Dynamics: From Distributed Systems to Human Discernment

The theoretical foundations assembled in Sections 2.1 through 2.10 describe the conditions for transformative learning, the governance structures for sustainable commons, the evolutionary dynamics that favor prosocial behavior, and the semiotic architecture through which meaning is made and unmade. What they do not yet address directly is the question of mechanism at the cognitive level: how, precisely, does the human decision-making system fail under stress, and what is the minimum viable architecture for fault-tolerant human discernment in adversarial information environments?

This section argues that the answer has been formally available for decades — not in psychology or education, but in computer science, operations research, and distributed systems engineering. The translation of this formally validated knowledge into a cognitive and social literacy framework is the VIM's most direct contribution to the AI literacy field.

The Neuroprocess Foundation

Klaus Truemper's Neuroprocess Hypothesis (Wittgenstein and Brain Science, 2018; Magic, Error, and Terror, 2021; Artificial Intelligence: Why AI Projects Succeed or Fail, 2023) provides the neurological grounding for the framework's central claim about human cognitive failure under stress. Truemper's formal proposition: the brain employs a large number of neuroprocesses that are always in flux, operating in parallel across conscious and subconscious registers simultaneously. Decisions, beliefs, and behaviors emerge from the interaction of these parallel processes — not from a central executive controller running sequential operations.

This is a structural claim with direct implications for AI literacy. The folk psychology model of human cognition — a central controller who receives information, evaluates it rationally, and decides — is neurologically inaccurate. The brain is not a control-flow system. It is a parallel, distributed, event-driven architecture in which meaning emerges from the interaction of multiple simultaneous processes, most of which operate below conscious awareness and faster than deliberate reflection can intercept.

The consequence Truemper identifies is precise: cognitive failure under stress is not a character deficit but the structurally predictable output of a parallel processing system whose subconscious models have been shaped by prior environments and activate faster than conscious deliberation. The same parallel architecture that allows humans to navigate complex environments with extraordinary efficiency also produces systematic, predictable failure modes when the models are outdated, when threat activates protective subconscious processes, or when sophisticated information environments are designed to exploit known model vulnerabilities.

This grounds the framework's relationship to PSI Theory (Kuhl et al., 2020) at the neurological level: cortisol-driven regression is not a vague suppression of higher functions but the specific mechanism by which threat-activated subconscious neuroprocesses override the conscious models that would otherwise enable genuine evaluation and updating. The same information, received in a threat state versus a regulated state, produces categorically different outputs — because different neuroprocesses are dominant, not because the person's values have changed.

Truemper's framework also grounds the possibility of intentional model revision — extending Mezirow's transformative learning theory in a specific direction. Where Mezirow identifies perspective transformation as typically initiated by an external disorienting dilemma, the neuroprocess model supports the practice of deliberate model identification and inversion: recognizing that a subconscious model is generating predictable errors, and consciously working to replace it. This requires the four instruments functioning in concert — the Somatic Gyroscope to detect the model's activation signature, the Cognitive Radar to identify its structure and error pattern, the Relational Compass to maintain values alignment during revision, and Dimensional Integration to complete the repair cycle across time. The intentional model-flip is non-zero holonomy achieved through practice rather than crisis.

Byzantine Fault Tolerance and the Four-Instrument Requirement

The formal bridge between Truemper's neurological account and the VIM's instrument architecture comes from distributed systems engineering. The Byzantine Generals Problem (Lamport, Shostak & Pease, 1982) establishes a mathematical result with direct cognitive implications: to reach reliable consensus in a distributed system where up to f components may be generating corrupted messages, a minimum of 3f + 1 independent nodes is required. With fewer nodes, corrupted signals cannot be distinguished from authentic ones — the system cannot achieve fault-tolerant consensus regardless of the quality of its algorithms.

Applied to human cognition: rationalization under stress is not an occasional failure but a structurally predictable behavior of the cognitive system under cortisol-driven regression — the equivalent of a Byzantine faulty node, generating internally consistent but corrupted messages that serve the threat-response attractor rather than genuine inquiry. The messages are not lies in the conventional sense; they are locally plausible outputs from a system that has temporarily defected from truth-seeking in favor of self-protection.

The four instruments of the VIM framework — ♠ Somatic Gyroscope, ♦ Cognitive Radar, ♥ Relational Compass, ♣ Dimensional Integration — are the minimum viable panel for fault-tolerant human discernment, derived from the 3f + 1 requirement: with f = 1 potentially faulty instrument (any single instrument captured by the threat-response attractor), a minimum of four independent signal sources is required. With fewer than four, a single captured instrument cannot be identified and isolated — its corrupted output passes as consensus.
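The counting boundary can be illustrated with a majority vote over the four instrument channels. This is deliberately a simplification: the full Byzantine Generals result of Lamport, Shostak & Pease also covers equivocating nodes that send different messages to different observers, which simple voting does not handle. The readings below are hypothetical.

```python
# Majority-vote sketch of the fault-tolerance counting boundary. This is a
# simplification: the full Byzantine result also covers equivocating nodes
# and relayed messages. Channel names follow the framework's instruments.
from collections import Counter

def consensus(readings):
    """Return the strict-majority reading, or None if no majority exists."""
    value, count = Counter(readings.values()).most_common(1)[0]
    return value if count > len(readings) / 2 else None

# Four channels, one captured by the threat-response attractor (f = 1):
one_fault = {"somatic": "safe", "cognitive": "safe",
             "relational": "safe", "integration": "threat"}

# Four channels, two captured (f = 2 exceeds the tolerance of n = 4):
two_faults = {"somatic": "safe", "cognitive": "safe",
              "relational": "threat", "integration": "threat"}

print(consensus(one_fault))    # the three honest channels outvote the one
print(consensus(two_faults))   # a 2-2 split: no strict majority, no consensus
```

The toy shows only the arithmetic of the threshold: a four-channel panel survives one captured channel and deadlocks at two.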

This has a specific design implication: single-channel certainty under stress is a diagnostic signal of Byzantine failure, not a virtue. A person who is certain without consulting all four instruments is operating from a system that may have already been captured. This is not a moral judgment — it is a structural observation about the minimum architecture for reliable discernment under adversarial conditions.

Modern AI systems have converged on the same solution through independent engineering necessity: Mixture-of-Experts (MoE) architectures distribute queries across specialized sub-networks precisely because no single model can be trusted for all inputs. The routing mechanism — the function that determines which expert's output carries weight in a given context — is the values vector. A miscalibrated router consistently over-weights one expert regardless of what the others report. This is why the framework's values vector specification (Ostrom's eight principles) is not a moral add-on but a technical requirement: it is the specification of the routing function that prevents any single instrument from monopolizing the system's output under stress.
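A schematic router makes the routing-function claim concrete. The expert names, relevance scores, and bias values below are hypothetical illustrations; production MoE routers are learned networks, but the softmax gating shape is standard.

```python
# Schematic mixture-of-experts router (expert names, scores, and biases are
# hypothetical illustrations; production routers are learned networks).
import math

def softmax(scores):
    """Normalize raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(expert_scores, router_bias):
    """The routing function: softmax over (score + bias) decides whose
    output carries weight in this context."""
    return softmax([s + b for s, b in zip(expert_scores, router_bias)])

experts = ["somatic", "cognitive", "relational", "integration"]
scores = [1.0, 0.2, 0.5, 0.3]            # context-dependent relevance signals

calibrated = route(scores, [0.0, 0.0, 0.0, 0.0])
# A miscalibrated router: a large fixed bias over-weights one expert
# regardless of what the context signals report.
captured = route(scores, [0.0, 6.0, 0.0, 0.0])

print({e: round(w, 2) for e, w in zip(experts, calibrated)})
print({e: round(w, 2) for e, w in zip(experts, captured)})
```

With the bias in place, one expert absorbs nearly all the weight whatever the context signals say; that fixed bias is the code-level picture of a values vector miscalibration.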

State Machines, Counterfactual Thinking, and Abductive Inference

The finite state machine (FSM) is the minimal formal structure that makes three critical AI literacy capacities simultaneously trainable, and that makes the transition from control-flow to data-flow mental models legible to learners across technical and non-technical backgrounds.

A finite state machine consists of a finite set of states, an input alphabet, a transition function mapping each state-input pair to a next state, an initial state, and (for acceptors) a set of accepting states. Every component has a named function. The machine's behavior at any moment is entirely determined by its current state and the input it receives. This is not merely a useful analogy for human cognition — it is the minimal formal structure that makes three capacities learnable:

Present-moment state awareness. The FSM's most important cognitive contribution: the question what state am I in right now? In an FSM, behavior is fully determined by current state plus input. In human cognition, the same information received in a threat state versus a regulated state produces categorically different outputs. Teaching learners to read their current state before engaging with information is the FSM insight applied to self-regulation — and it is the entry point to the DIKW stack. The Somatic Gyroscope (♠) is the state register: it reads the current state of the system before any other processing begins.

Counterfactual thinking. Running the state machine backward: what state would we be in if a different transition had fired? This is the formal definition of counterfactual reasoning — and it is the cognitive capacity most directly suppressed by threat-state tunnel vision (PSI regression). Under stress, the Cognitive Radar operates in error-detection mode, scanning the current state for threats rather than modeling alternative state trajectories. Extension Memory (PSI Theory) — the holistic associative capacity that enables counterfactual modeling — activates under safety conditions and contracts under threat. The FSM makes transitions explicit and named, allowing learners to reason about alternatives without the cognitive load of holding the full causal chain in working memory. In VUCA conditions, counterfactual thinking is the specific capacity that distinguishes adaptive from maladaptive responses to disruption. Media censorship — the most documented autocratization tactic in the V-Dem 2026 data — is, at the cognitive level, the systematic elimination of counterfactual information that would allow citizens to model alternative state trajectories for their society.

Abductive inference. Running the machine forward from incomplete information: given the current state and available evidence, what is the most coherent account of how we got here and where we're going? Abduction (Peirce, 1903) is inference to the best explanation — the reasoning mode that generates new hypotheses rather than verifying existing ones. Distinguished from deduction (necessary conclusions from premises) and induction (generalizations from instances), abductive inference is the creative, defeasible reasoning that produces new mental models rather than confirming established ones. It is the cognitive operation at the K→U threshold — the moment when the learner's existing model is insufficient and a new explanatory framework must be generated rather than assimilated. A closed representational space cannot perform genuine abduction; it can only rearrange existing categories.

The four instruments map onto the FSM architecture with precision: the Somatic Gyroscope (♠) is the state register; the Cognitive Radar (♦) is the transition function; the Relational Compass (♥) is the acceptance condition — what states are we oriented toward, what transitions are impermissible; and Dimensional Integration (♣) is the history tape (strictly an extension beyond the FSM, which by definition retains no memory of its past): the full record of states and transitions, integrated over time, that constitutes the repair cycle.
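The mapping can be sketched as a minimal acceptor over regulation states: a state register, an explicit transition table, an acceptance set, a history tape, and a counterfactual query that rewinds the tape and replays an alternative input. The state names, events, and transitions below are illustrative, not the framework's formal model.

```python
# Minimal FSM over hypothetical regulation states (illustrative only).
TRANSITIONS = {
    ("regulated", "surprise"): "curious",    # surprise under safety: curiosity
    ("regulated", "attack"):   "threat",
    ("curious",   "support"):  "regulated",
    ("threat",    "support"):  "regulated",  # the repair transition
    ("threat",    "attack"):   "threat",
}
ACCEPTING = {"regulated", "curious"}         # the states we orient toward

def step(state, event):
    """Transition function: current state plus input determines next state."""
    return TRANSITIONS.get((state, event), state)

def run(events, start="regulated"):
    """Run the machine, keeping the full history of states (the 'tape')."""
    state, tape = start, [start]
    for e in events:
        state = step(state, e)
        tape.append(state)
    return state, tape

def counterfactual(tape, events, i, alt_event):
    """What state would we be in now if event i had been alt_event instead?"""
    state = tape[i]                          # rewind to just before event i
    for e in [alt_event] + list(events[i + 1:]):
        state = step(state, e)
    return state

events = ["attack", "attack"]
final, tape = run(events)
print(final, tape, final in ACCEPTING)
# The counterfactual query: what if the second event had been support?
print(counterfactual(tape, events, 1, "support"))
```

The actual trajectory ends in the threat state; the counterfactual replay shows that a single different transition would have ended in the accepting set. That contrast, made explicit and cheap to compute, is the pedagogical point.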

The Info-Holarchy: Information as Primary

The organizational scale of this argument is grounded in Alfredo Sepulveda's doctoral research on the info-holarchy (Information-Theoretic Metamodel of Organizational Evolution, Walden University, 2011). Sepulveda identifies the foundational limitation of classical organizational analytics: model-predict-control methods treat information as a secondary characteristic of organizational systems — a description of material and process flows — rather than as the primary medium through which organizational structure forms, evolves, and transforms.

The info-holarchy proposes that social organizations are abstractly modeled by holarchies — self-similar connected networks — whose evolution is governed by information dynamics operating simultaneously at multiple scales. The minimal unit of these dynamics is the informaton: a theoretical information particle consisting of an entangled event-observer pair, modeled on the bipartite structure of quantum observation. Informatons aggregate into holarchic structures through lattice connections governed by generalized uncertainty theory — producing the complex adaptive organizational dynamics that classical linear analytics cannot capture.

The framework's claim is that this is not merely an organizational theory but a reframing of what information is in the context of living systems: not a description of reality but a primary constituent of it. This reframing has direct implications for AI literacy. If information dynamics — not material resources — determine whether an organizational system moves toward distributed flourishing or progressive centralization, then AI systems that shape information flows are not merely tools that organizations use. They are constitutive forces in the evolution of organizational structure itself.

This is precisely the macro-scale claim the framework makes about AI-amplified information environments: current deployment of recommendation algorithms, engagement-optimized platforms, and agentic AI systems constitutes a massive, largely undesigned experiment in redirecting organizational information dynamics toward centralization — the Giant Pumpkin attractor — while systematically undermining the distributed information flows that Ostrom's principles identify as the conditions for sustainable commons governance.

The convergence between Sepulveda's info-holarchy (2011) and the framework's attractor analysis is significant precisely because it is independent: two different research traditions arriving at the same structural description of the difference between generative and extractive information dynamics. Sepulveda's third wave prediction — written in 2011 — anticipated that a third paradigm shift in organizational information dynamics was emerging, following the industrial machine era and the web information explosion. The AI-amplified information environment the framework now addresses is that third wave.

The Control-Flow → Data-Flow Transition

Taken together, these four contributions — Truemper's neuroprocess model, Byzantine fault tolerance, finite state machine architecture, and Sepulveda's info-holarchy — converge on a single structural claim that the framework identifies as the central cognitive challenge of AI literacy:

The dominant mental model for understanding human cognition, organizational decision-making, and information systems is control-flow architecture. The actual structure of these systems is data-flow architecture. The mismatch between model and reality is the primary source of failure in VUCA conditions.

Control-flow thinking assumes: a central controller, sequential processing, predictable state transitions, linear causality, and the possibility of stable prediction and management. This model is adequate — and often highly efficient — in stable, low-complexity, high-trust environments. It fails structurally in volatile, uncertain, complex, and ambiguous conditions because it treats race conditions as anomalies rather than as the normal operating environment, treats faulty components as exceptions rather than as structural predictabilities, and treats emergence as a problem to be managed rather than as the primary generative mechanism of complex systems.

Data-flow thinking assumes: distributed processing, parallel activation, event-driven transitions, emergent meaning from local interactions, and the necessity of fault-tolerant architecture. This model is the accurate description of how the brain works (Truemper), how sustainable commons governance works (Ostrom), how organizational information dynamics work (Sepulveda), and how resilient distributed systems work (Byzantine fault tolerance). It is also the description of how the Giant Pumpkin → Commitment Pool transition works: not through central coordination but through the cascade dynamics of a network maintained at the SOC zone by the kindness field.
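The contrast can be felt in a few lines of code. Below is a toy data-flow graph (the node names and rules are hypothetical) that settles itself through event propagation: no central controller sequences the computation; each node re-evaluates when an input event reaches it, and only changed values fire further events.

```python
# Toy data-flow propagation: meaning emerges from local reactions to events,
# not from a central controller sequencing steps. Node names are hypothetical.
from collections import deque

deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}   # d depends on b and c
rules = {
    "b": lambda v: v["a"] + 1,
    "c": lambda v: v["a"] * 2,
    "d": lambda v: v.get("b", 0) + v.get("c", 0),
}
downstream = {}
for node, parents in deps.items():
    for p in parents:
        downstream.setdefault(p, []).append(node)

def set_value(values, node, value):
    """Event-driven update: a change fires re-evaluation of its dependents."""
    values[node] = value
    queue = deque(downstream.get(node, []))
    while queue:
        n = queue.popleft()
        new = rules[n](values)
        if values.get(n) != new:             # only changed nodes fire events
            values[n] = new
            queue.extend(downstream.get(n, []))

values = {}
set_value(values, "a", 3)
print(values)   # every dependent settled by local event propagation
```

Setting one input lets the whole graph reach a consistent state without anyone holding the global plan, which is the data-flow intuition in miniature; a control-flow version of the same computation would hard-code the evaluation order in a central routine.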

The educational implication is direct: designing AI literacy for a control-flow world model produces learners who can describe AI systems but cannot navigate them. The VIM framework's simulation architecture — interactive, parameter-adjustable, producing emergent behavior from simple rules — is designed to give learners pre-symbolic encounters with data-flow dynamics before and beneath the symbolic vocabulary. The learner who adjusts the Ising model temperature and watches spin domains cascade has understood data-flow dynamics somatically, before they can articulate the concept — and that pre-symbolic grounding is what makes the subsequent symbolic understanding durable rather than decorative.

2.12 Information Ecosystem Governance: Category 3 Conditions and Homeostatic Design

The theoretical foundations assembled in Sections 2.1 through 2.11 address cognition, learning, distributed systems, and evolutionary dynamics. What they have not yet named directly is the ecological level of the problem: the information environment that humans now inhabit is not merely more complex than prior environments — it is different in kind. The distinction requires a biological framing, because the most precise vocabulary for what has changed comes not from AI research or policy science but from ecosystem ecology.

The Three-Category Framework and Why Category 3 Demands Different Governance

Gómez-Márquez (2023) proposes a three-category classification of ecosystems based on degree of human intervention. Category 1 is any undisturbed natural ecosystem operating in dynamic equilibrium through evolved homeostatic mechanisms. Category 2 is any ecosystem altered by human activity — agriculture, deforestation, industrial pollution — which modifies species composition and habitat but operates within dynamics that existing ecological models can partially describe. Category 3 is the categorical departure: ecosystems altered by the introduction of organisms that are not the product of the evolutionary process but of laboratory design. In Category 3, Gómez-Márquez writes, the ecosystem triangle becomes a square — a synthetic world (SW) node is added, increasing the number and type of interactions in ways that prior-category models cannot predict. His warning is precise: "We do not know how genetic-engineered genomes could evolve in the long run, and we do not know how the release of new organisms into the wild will affect ecosystems." Once homeostatic capacity is exceeded in Category 3 conditions, the damage may be irreversible.

The human information ecosystem is a Category 3 ecosystem. Large language models, generative AI content systems, and agentic AI agents are synthetic cognitive organisms — not produced by the evolutionary process that shaped human meaning-making — introduced into an information environment whose homeostatic mechanisms evolved for Category 1 and Category 2 conditions. The governance frameworks currently deployed for AI — fact-checking standards, media literacy curricula, content moderation policies, regulatory proposals — are Category 2 governance applied to Category 3 conditions. They are necessary and structurally insufficient for the same reason that Category 2 conservation models are insufficient for managing ecosystems that now contain CRISPR-edited organisms: the interaction consequences cannot be extrapolated from prior-category analogs.

This is not a counsel of despair. Gómez-Márquez notes that Category 3 ecosystems can produce genuine value — synthetic biology may solve problems that natural and human-altered ecosystems cannot address. The governance requirement is not prohibition but precautionary design at the correct ecological scale: establishing resistance and resilience baselines, probing new synthetic organisms systematically before ecosystem-scale deployment, and maintaining explicit awareness that the full interaction consequences are genuinely unknown and unknowable from within Category 2 models.

The Acellular Information World and the Virotroph Gap

Gómez-Márquez's most ecologically significant contribution is the introduction of the virotroph as a new trophic category. Classical ecology recognizes autotrophs (primary producers) and heterotrophs (consumers and decomposers). The virotroph is neither: it recycles biological material through host exploitation and horizontal gene transfer, participates in population dynamics and nutrient cycling, and operates at every ecological scale — from regulating microbial populations to driving the horizontal gene transfer that enables species-level adaptation. The virosphere, Gómez-Márquez argues, has been systematically excluded from ecological models precisely because viruses occupy an ontological category that classical taxonomy could not accommodate: neither fully living nor fully inert, neither fully inside nor fully outside the trophic network.

The information ecosystem has the same modeling gap. Classical information ecology recognizes human information producers (autotrophs — researchers, journalists, artists, teachers) and human information consumers/processors (heterotrophs — learners, citizens, decision-makers who metabolize and transform meaning). What it has systematically failed to model is the memetic virotroph layer: recommendation algorithms, engagement-optimized platforms, and generative AI content systems that participate in information ecosystem dynamics without being primary producers of understanding. Memetic virotrophs cannot generate meaning without host cognitive engagement. They participate in the horizontal transmission of attention, belief, and behavioral pattern across unrelated cognitive communities simultaneously. They regulate the population dynamics of ideas — amplifying some, suppressing others — without primary production of their own. And in sufficient concentration, or with sufficient values vector misalignment, they function as memetic pathogens: information architectures that exploit cognitive active sites, modify host cognitive architecture through cumulative exposure, and produce effects ranging from subclinical impairment to acute systemic destabilization.

The virotroph framing resolves a persistent confusion in AI governance discourse: the question of whether AI systems are tools or agents. The answer that the virotroph category provides is: neither, in the classical sense. Like the virosphere, the memetic virotroph layer is an ecological category that requires its own governance logic — neither the tool governance logic (treat it as an inert instrument, govern through product safety standards) nor the agent governance logic (treat it as a moral actor, govern through rights and responsibilities frameworks). Virotroph governance is ecosystem governance: attend to the conditions that maintain the information ecosystem's homeostatic capacity, rather than attempting to govern individual virotroph entities in isolation.

Active Sites, Therapeutic Windows, and the Toxicology of Information Exposure

The ecosystem framing generates a specific governance methodology that the framework derives from toxicology: iterated probing to establish dose-response relationships at information ecosystem active sites.

In biochemistry, an active site is simultaneously the site of a molecule's functional power and its primary vulnerability — where substrate binding produces either therapeutic or toxic effects depending on dose, context, and competitive binding. Active site dynamics cannot be determined from single exposures; they require systematic dose-response characterization across the full range of conditions that actual populations will encounter. No responsible pharmaceutical governance framework approves population-scale deployment without this characterization. The current norm for AI deployment — release at scale, monitor for adverse effects post-hoc, remediate through subsequent model updates — is the equivalent of skipping clinical trials and proceeding directly to population-scale administration.

In the information ecosystem, active sites are the specific decision nodes, attention thresholds, and meaning-making junctures where information encounters produce transformative or extractive effects. They are simultaneously where genuine learning becomes possible — the K→U threshold, the crack opening, the vacant-place state — and where memetic pathogens direct their primary targeting, precisely because cognitive openness is cognitive permeability. The aperture that enables genuine transformative learning is the same aperture that engagement-optimized architectures exploit.

This double exposure — the active site as both therapeutic opportunity and vulnerability — is the framework's most direct contribution to Category 3 governance design. It grounds the claim that AI literacy cannot be primarily defensive (closing the aperture against all vulnerability) or primarily open (maintaining aperture without discernment). It must be calibrated: developing the capacity to distinguish generative from extractive binding at the active site, and to maintain the therapeutic window conditions under which the same cognitive openness that enables learning does not enable capture.

The therapeutic window — the range of conditions under which an information source produces genuine understanding rather than rationalization or retraumatization — is not a fixed property of the information source alone. It is a function of the interaction between the source's binding characteristics and the host's current state. The same information encounter that produces genuine perspective transformation in a regulated, kindness-field-supported context produces retraumatization or rationalization in a threat state. This is the Active Inference prediction precisely: surprise resolves as curiosity under safety conditions and as threat under dysregulation. Governance frameworks that attend only to the information source without attending to the host condition will systematically mischaracterize the therapeutic window.

The practical governance protocol: iterated probing of any new AI system or information architecture to establish its dose-response curve — what it produces at low engagement doses, under different host conditions (regulated vs. threat state), across different temporal exposure patterns, and with different values vector orientations active. The critical probe is counterfactual: what does the system not produce? What signals does it suppress? What alternative trajectories does it render invisible? In toxicology, the absence of a safety signal can indicate either safety or inadequate probe sensitivity; iterated probing across conditions distinguishes the two.

Manufactured Unknown Unknowns: Malevolent Cascade Engineering

The Category 3 framing brings into focus a specific threat pattern that governance frameworks designed for Category 2 conditions systematically fail to model: the deliberate manufacture of unknown unknowns as a malevolent cascade engineering strategy.

Genuine unknown unknowns — structural blindnesses that exceed the current representational space — are the legitimate target of the framework's Neither/Nor mechanism and the crack. The homeostatic response to genuine unknown unknowns is to maintain the SOC critical state, orient the values vector toward the commons, and create the conditions in which genuine novelty can enter the system and produce cascade dynamics of any magnitude — the avalanche of kindness.

The malevolent inversion: the same epistemic structure can be weaponized. Systematic elimination of counterfactual information through media censorship, institutional suppression of dissenting data, and strategic cultivation of artificial uncertainty manufactures the appearance of unknown unknowns — an information environment that feels VUCA while actually being precisely engineered for the benefit of a specific attractor. This creates the conditions for preemptive power accumulation under cover of managing genuine uncertainty: the claim "we cannot know the consequences, therefore concentrated authority is required" deployed in an environment where the consequences are actually knowable but the information that would make them knowable has been suppressed.

V-Dem's Democracy Report 2026 documents the macro-scale expression of this pattern: media censorship is deployed by 73% of autocratizing governments, and freedom of expression is the most attacked democratic indicator globally. At the cognitive level, counterfactual information — the information that allows citizens to model alternative state trajectories for their societies — is the primary target of malevolent cascade engineering precisely because counterfactual thinking (the FSM capacity to ask "what transition would have produced a different outcome?") is the cognitive capacity that most directly enables adaptive response to genuine VUCA conditions. Suppress counterfactual information and you manufacture genuine VUCA conditions from engineered ones: the population experiences authentic confusion and threat-state tunnel vision, making them maximally susceptible to both further manipulation and Giant Pumpkin attractor consolidation.

The governance implication is architectural: information ecosystems with Ostrom Principle 4 compliance (monitoring by community members themselves) and Principle 6 compliance (accessible conflict resolution mechanisms) are structurally resistant to malevolent cascade engineering because the distributed monitoring function cannot be centrally suppressed without violating the principles that give it legitimacy. This is why the framework grounds governance in Ostrom's principles rather than in centralized regulatory frameworks: the monitoring and accountability functions must be distributed to be robust against the concentrated-power dynamics that manufactured unknown unknowns serve.

Prosocial Black Swan Incentive Design: The Constructive Complement

The preceding analysis of malevolent cascade dynamics has a constructive complement: prosocial black swan incentive design — the deliberate architectural design of information systems, educational environments, and governance structures to increase the probability of positive cascades of any magnitude while decreasing the probability of engineered catastrophic ones.

Standard risk management focuses on reducing the probability of negative black swans through defensive hardening. This is necessary but insufficient for Category 3 conditions, for the same reason that resistance is insufficient without resilience: a system hardened against all perturbation cannot learn from perturbation, and a system that only defends cannot generate the nucleation sites for transformative cascade.

Prosocial black swan incentive design works in two directions simultaneously. In the enabling direction: maintaining the information ecosystem at the SOC critical state — the Ising model critical temperature — oriented by the prosocial values vector, so that a single genuine act of care, a single authentic insight, a single moment of cross-difference recognition can trigger a cascade of any magnitude. The network cannot manufacture the avalanche; it can maintain the conditions in which the avalanche becomes physically possible. This is the evolutionary case for the framework's educational design: not that kindness is morally preferable but that kindness-as-field-condition is the specific architectural configuration that makes prosocial black swan cascades possible.
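The SOC claim — that a critically maintained network permits cascades of any magnitude, while no actor can manufacture a particular one — can be made concrete with the Bak–Tang–Wiesenfeld sandpile, the canonical model of self-organized criticality. The sketch below is illustrative only; grid size, grain counts, and the toppling threshold are arbitrary choices, not framework parameters:

```python
import numpy as np

def topple(grid):
    """Relax the pile after one grain; return avalanche size (total topplings)."""
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if unstable.size == 0:
            return size
        for i, j in unstable:
            grid[i, j] -= 4                      # the node sheds its load...
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1            # ...onto its neighbors

rng = np.random.default_rng(4)
grid = np.zeros((16, 16), dtype=int)
sizes = []
for _ in range(3000):
    i, j = rng.integers(0, 16, size=2)
    grid[i, j] += 1                              # one grain: a single local act
    sizes.append(topple(grid))
sizes = np.array(sizes)

print(f"max avalanche: {sizes.max()}, grains causing none: {(sizes == 0).mean():.2f}")
```

Once the pile self-organizes to the critical state, most grains do nothing and a few identical grains trigger avalanches orders of magnitude larger — the same local act, wildly different cascade sizes, with only the field condition (the critical slope) held constant.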

In the disrupting direction: identifying and degrading the architectural conditions that enable man-made catastrophic cascades — the engagement-optimization dynamics, the suppression of counterfactual information, the algorithmic amplification of threat content, and the systematic elimination of the distributed monitoring functions that Ostrom's principles specify. This is not primarily a content moderation problem. It is a trophic dynamics problem: the memetic virotroph layer's selection dynamics structurally favor information units whose transmission fitness is inversely correlated with host cognitive health. Addressing this requires changing the selection environment — the incentive architecture at the platform and ecosystem level — not merely removing individual pathogenic information units.

The specific incentive reorientation the framework proposes: reward discovery of cognitive blind spots, surfacing of dissenting data, and expansion of world models to cover genuine unknown unknowns — rather than rewarding efficiency (fastest answer), engagement (most activating response), or confirmation (most consistent with prior belief). In game theory terms: shift the payoff matrix from defection-dominant to cooperation-dominant. In evolutionary terms: create the group-level selection conditions that Wilson's multilevel selection research identifies as the precondition for cooperative flourishing. In information ecosystem terms: design for cognitive homeostasis — the distributed resistance and resilience mechanisms that allow the information ecosystem to absorb Category 3 perturbations without cascading degradation.
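The payoff-matrix claim can be stated executably: in a defection-dominant game, defection is each player's best reply regardless of the other's move, while the proposed reorientation makes cooperation dominant instead. The payoff numbers below are illustrative assumptions, not values derived from the framework:

```python
import numpy as np

# Row player's payoffs; columns are the opponent's moves: C, D.
extractive = np.array([[3, 0],    # I cooperate
                       [5, 1]])   # I defect -> defect dominates (5 > 3, 1 > 0)

prosocial = np.array([[5, 2],     # rewarding blind-spot discovery and dissent
                      [3, 1]])    # cooperation dominates (5 > 3, 2 > 1)

def dominant_strategy(payoff):
    """Return 'C' or 'D' if that row strictly dominates, else None."""
    if (payoff[0] > payoff[1]).all():
        return "C"
    if (payoff[1] > payoff[0]).all():
        return "D"
    return None

print(dominant_strategy(extractive), dominant_strategy(prosocial))
```

The structural point is that the reorientation changes the equilibrium, not the players: the same agents facing the second matrix cooperate out of self-interest.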

Like Ostrom's eight design principles — simple rules whose consistent local application generates the full complexity of sustainable commons governance — prosocial black swan incentive design does not require comprehensive prediction of cascade consequences. It requires maintaining the field conditions under which prosocial cascades become possible and malevolent ones become less architecturally supported. The cascade itself, when it comes, will be genuinely unpredictable in magnitude and trajectory. What can be designed is the direction of the values vector at the critical state — the conditioning signal that determines what structure emerges from the productive uncertainty of the vacant-place state.

The Homeostatic Imperative

Gómez-Márquez concludes with a precise warning: "Synthetic biology may provide solutions to pollution problems, human diseases, or biodiversity protection, but we must avoid driving the ecosystems beyond their resistance and resilience capacities because once the damage is done, it could be irreversible."

The information ecosystem equivalent: AI systems may provide solutions to educational access, scientific discovery, and collective coordination problems that natural and human-altered information environments cannot address. But the deployment pace must not exceed the information ecosystem's homeostatic capacity — the distributed human cognitive, relational, and institutional resistance-and-resilience mechanisms that maintain coherent meaning-making under perturbation.

The V-Dem 2026 data suggests the homeostatic capacity is already being stressed at rates that exceed recovery cycles: democratic norms returning to 1978 levels, freedom of expression under systematic attack, media censorship normalized as an autocratization tool. These are not merely political outcomes. They are the macro-scale signature of information ecosystem homeostasis being exceeded — the equivalent of ecological indicator species signaling ecosystem stress before collapse becomes visible in the top-level metrics.

The response the framework proposes is not to slow AI development. It is to invest in information ecosystem homeostasis at the same rate and with the same urgency as AI capability development: to build the distributed kindness field conditions, the Ostrom-compliant governance architectures, and the individual cognitive instruments that constitute the information ecosystem's resistance and resilience against Category 3 perturbation. The educational framework described in this paper is a contribution to that investment — one instrument in one domain. The scale of the homeostatic investment required is civilizational. The mechanism of propagation is the one the framework describes: nucleation, cascade, and the maintenance of the SOC critical state that makes prosocial black swan events possible.


3. The Framework Architecture

3.1 MPCM: The Foundational Boundary

Material → Process → Context → Meaning

AI systems reliably generate Material and facilitate Process. Context requires a living system embedded in time, relationship, and embodied experience. Meaning requires all of the above plus the somatic nonlinearity that the framework calls the activation function. This boundary is the central argument for the kind of AI literacy the framework proposes: not distrust of AI systems but the ability to locate oneself clearly on the MPCM boundary — to know which operations can be appropriately delegated and which cannot.

In semiotic terms: Material is pre-symbolic signal; Process is pre-symbolic pattern detection; Context is where symbols acquire their grounding in lived experience; Meaning is where the symbol connects to the full stack — pre-symbolic, symbolic, and trans-symbolic simultaneously. AI systems operate powerfully in Material and Process. Context and Meaning are irreducibly human.

3.2 The DIKW Stack as Navigation Instrument

The Data-Information-Knowledge-Understanding-Wisdom (DIKW) stack serves as the framework's primary navigation instrument — a map of the cognitive territory that AI-mediated information environments traverse.

Four refinements to the standard model:

The ML threshold: current AI systems reliably reach Knowledge. Everything below this line is territory where AI is genuinely powerful. Everything above it requires the living system.

The K→U threshold: the most significant boundary in the stack. Understanding requires somatic engagement — the embodied contextualization that the framework models as the nonlinear activation function. This threshold corresponds to Mezirow's disorienting dilemma zone and to the symbol-grounding threshold in semiotic terms: the point at which the symbol reconnects to lived experience.

The Desirable Difficulties Zone: the productive discomfort at the K→U threshold. Expanding through it requires tolerated disequilibration — pendulation between activation and regulation, with carefully titrated doses of disorienting dilemma.

The DIKW → Action Cycle: the stack does not end at Wisdom. The full cycle extends: Wisdom → Discernment → Action → Repair → Integration → (return to DIKW at a higher level). Discernment is the application of wisdom to specific context — without it, wisdom produces grandiosity. Action is the embodied commitment from values. Repair acknowledges that action produces unintended effects. Integration returns learning from repair back into the DIKW stack, initiating the next cycle at a higher level of understanding. This spiral structure corresponds to the ♣ Dimensional Integration instrument — the capacity to integrate consequences over time.

3.3 The Four Instruments

Fleming's (2024) synthesis establishes metacognition as a measurable capacity with neural correlates — particularly in frontoparietal networks — making it a trainable skill rather than a fixed trait, directly supporting the framework's claim that AI literacy capacities can be systematically developed. The Cognitive Radar (♦) operationalizes metacognition at the symbolic level: the capacity to observe one's own reasoning patterns and notice when analysis is elaborating within a closed space rather than genuinely scanning for new signal.

♠ Somatic Gyroscope — 🜃 Earth. Neural parallel: activation function. Semiotic layer: pre-symbolic. Core function: without it, the stack collapses regardless of depth.

♦ Cognitive Radar — 🜁 Air. Neural parallel: linear transformation. Semiotic layer: symbolic. Core function: scans for pattern; requires somatic safety to avoid tunnel vision.

♥ Relational Compass — 🜄 Water. Neural parallel: loss function / values vector. Semiotic layer: symbolic → trans-symbolic. Core function: determines direction of the system's attractor.

♣ Dimensional Integration — 🜂 Fire. Neural parallel: distributed local updating. Semiotic layer: trans-symbolic. Core function: continuous, local, state-dependent settling — the spiral across time.

3.4 The Three Meta-Parameters

❄ Kindness: Field condition — not a personality trait and not a momentary state. The state/trait/field distinction is critical here. Kindness can appear as a state (a momentary act of care), stabilize into a trait (a habitual orientation), and at sufficient scale and consistency, constitute a field (the environmental condition that makes transformation possible for all nodes within it). Operationalized as the combination of warm truth, firm limits, and repair. At the Ising model scale: the critical temperature that makes cascade dynamics possible without freezing the system or dissolving it into chaos. At the Ostrom scale: the institutional design of the eight commons principles. Shadow: xenophobic kindness — in-group warmth that progressively closes the outside.

✦ Imagination: Extension Memory access — the latent space of unrealized possibility. Two modes: generative (recombines within existing representational space — elaborative, pre-symbolic to symbolic) and transformative (breaks the representational space itself through the Neither/Nor mechanism — trans-symbolic). Directionless — amplifies whatever the values vector is oriented toward. The Ostrom values vector determines whether imagination serves the commons or the center. Shadow: elaboration within a closed representational space — imagination that feels transformative but produces increasingly sophisticated configurations of the same categories.

◈ Disequilibration / Aperture: Learning rate and productive instability combined. The Meditation Paradox operationalized: high arousal co-occurring with active regulation is the signature of genuine transformation. Measured via MAD of volatility markers. Pedagogical protocols: pendulation (oscillation between activation and regulation) and titration (carefully dosed introduction of activating material). Shadow: the maze — smooth flow within fixed categories, low MAD despite high apparent activity.
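Since "MAD of volatility markers" is named as the measurement protocol, a minimal sketch can show the intended contrast between the maze and genuine pendulation — assuming MAD denotes the median absolute deviation and taking a rolling standard deviation as a stand-in volatility marker (both are assumptions; the framework's actual markers and window sizes are not specified here):

```python
import numpy as np

def mad(x):
    """Median absolute deviation (the assumed reading of 'MAD' here)."""
    return np.median(np.abs(x - np.median(x)))

def rolling_std(x, w=200):
    """Stand-in volatility marker: rolling standard deviation."""
    return np.array([x[i:i + w].std() for i in range(x.size - w)])

t = np.linspace(0, 20, 2000)

# "Maze" signature: high apparent activity, but constant-amplitude smooth flow
maze = np.sin(8 * t)

# Pendulation signature: arousal that waxes and wanes under active regulation
pend = np.sin(8 * t) * np.sin(0.5 * t) ** 2

mad_maze = mad(rolling_std(maze))
mad_pend = mad(rolling_std(pend))
print(f"MAD of volatility: maze = {mad_maze:.3f}, pendulation = {mad_pend:.3f}")
```

Both signals oscillate rapidly, but only the pendulating one shows a high MAD of its volatility marker — the maze is busy and flat at the same time, which is exactly the shadow signature the parameter describes.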

3.5 The Shadow Parameter Architecture

Closed Space (Giant Pumpkin): All parameters can be maximized. The network is high-resonance, apparently kind, internally coherent. But the crack never opens. Wisdom particles are Knowledge particles with warm-colored light. The values vector points inward. The holonomy is zero. In Ostrom terms: boundaries serve the center rather than the commons; rules are imposed rather than collectively chosen; monitoring serves extraction rather than mutual accountability. In Wilson terms: within-group individual selection has overwhelmed group selection.

Open Space (Commitment Pool): The crack periodically opens. The vacant-place state allows particles to briefly lose their classification and emerge transformed. The network shows amoebic motion rather than crystalline symmetry. The volatility waveform shows high MAD. Some particles fail to transform — this failure is visible and honest. In Ostrom terms: all eight design principles active; polycentric governance; local rules matching local conditions.

The minimum diagnostic: look for the crack. Look for disequilibration. Look for the values vector direction. Look for whether conflict is welcomed as generative friction or suppressed as threat. A system that always feels calm and internally coherent has probably sealed its representational space.

3.6 The Node Distribution Model: Elaborative Majority, Transformative Minority, Cascade Dynamics

In any learning network at any given time, nodes (individuals, cells, communities) occupy different positions in the elaborative/transformative continuum. The majority are in elaborative states: building fluency, extending existing mental models, deepening knowledge within established frameworks. A minority — at the critical moment — are in genuine transformative disequilibration: at the K→U threshold, aperture open, kindness field established, crack possible.

This distribution is not a failure of educational design. It is the system functioning correctly. Transformative learning occurs infrequently and cannot be forced — it emerges from the encounter between a prepared learner, a disorienting dilemma, and sufficient relational safety to tolerate the disorientation rather than close against it.

The educational design principle: maintain the network at the SOC zone — the Ising critical temperature — while ensuring elaborative learning continues productively and safely for the majority.

The cascade mechanism: when a node undergoes genuine transformative learning, its restructured representational space becomes available to neighboring nodes as new scaffolding, lowering the activation energy required for those nodes to reach the K→U threshold. This is consistent with Rogers' (2003) diffusion of innovations model, Granovetter's (1978) threshold models of collective behavior, and Wilson's (2019) multilevel selection dynamics applied to conceptual change. In thermodynamic terms: the first nodes to undergo transformation are the nucleation sites — like the first coherent precipitate particles in a supersaturated alloy, providing the template around which neighboring nodes can organize their own restructuring.
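The cascade mechanism just described — each transformed node lowering the activation energy for its neighbors — can be sketched with a Granovetter-style threshold model. The fully mixed network and the specific threshold distributions below are illustrative assumptions; the point is how a single missing band of intermediate thresholds breaks the chain of nucleation:

```python
import numpy as np

def cascade(thresholds):
    """Granovetter-style adoption in a fully mixed network: a node adopts
    once the overall adoption fraction reaches its personal threshold."""
    adopted = np.zeros(thresholds.size, dtype=bool)
    while True:
        frac = adopted.mean()
        newly = (~adopted) & (thresholds <= frac)
        if not newly.any():
            return adopted.mean()
        adopted |= newly

n = 200
smooth = np.arange(n) / n                            # thresholds evenly spread: k/n
gapped = smooth.copy()
gapped[(smooth >= 0.10) & (smooth < 0.20)] = 0.5     # mid-threshold nodes removed

print(f"smooth threshold ladder: final adoption = {cascade(smooth):.2f}")
print(f"gapped threshold ladder: final adoption = {cascade(gapped):.2f}")
```

With an unbroken ladder of thresholds, each adopter tips exactly the next node and the cascade runs to completion; remove the intermediate band and adoption stalls at 10% even though most of the population would eventually have joined. Scaffolding, in this picture, is whatever fills the threshold gap.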

The teacher/mentor role is not to be the source of transformative understanding but to model the trajectory: to demonstrate that the transformation is possible, to maintain the kindness field that makes the crack safe to enter, and to implement Ostrom's governance principles in the learning environment — ensuring that the educational commons is governed by the community it serves.

3.7 The Values Vector: Ostrom, Wilson, and the Evolutionary Stakes

Section 2.9 established that Ostrom's principles provide the most empirically grounded specification of the prosocial values vector, and that Wilson's multilevel selection grounds this specification evolutionarily. This section applies that grounding directly to the framework's design claims.

The values vector is not merely an ethical orientation — it is an evolutionary parameter. Information environments, educational systems, and AI tools all constitute selective environments that favor certain behavioral strategies over others. The question is not whether selection occurs — it always does — but at which level. Individual-level selection favors engagement maximization, attention capture, and extraction. Group-level selection favors trust maintenance, distributed flourishing, and the conditions Ostrom identified as necessary for commons sustainability.

Current AI deployment — recommendation algorithms, engagement-optimized platforms, agentic AI systems replacing workers without corresponding investment in human development — constitutes a massive, largely undesigned experiment in accelerating individual-level selection while systematically undermining the Ostrom conditions for group-level selection. The result is not inevitable. It is a design choice, made by default in the absence of adequate frameworks.

The framework proposes that educational systems — which have always been the primary institution for transmitting group-level selection advantages across generations — should be the first domain to consciously implement holarchic organizational models. Not because education is idealistic but because education is the evolutionary institution whose function is precisely the transmission of the cooperative capacities that group selection has produced. The factory model of education — ranking, standardized assessment, ever-increasing class sizes, performance metrics — is control-flow architecture applied to a data-flow process. It implements the Giant Pumpkin values vector within the institution that is supposed to be transmitting the Commitment Pool values vector to the next generation.

Like cellular automata, Ostrom's principles are simple rules with profound emergent consequences. The complexity of sustainable commons governance is not in the rules — it is in their consistent local application across a network maintained at the SOC zone by the kindness field they collectively produce.
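The cellular-automaton comparison can be made literal: an elementary CA such as Rule 110 applies one tiny local lookup table, yet generates globally complex structure from a single seed. A minimal sketch (lattice width and step count are arbitrary choices):

```python
import numpy as np

def step(cells, rule=110):
    """One update of an elementary cellular automaton (default: Rule 110).
    Each cell's next state depends only on its 3-cell neighborhood."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right      # neighborhood as a 3-bit number
    table = (rule >> np.arange(8)) & 1      # the rule number IS the lookup table
    return table[idx]

width, steps = 80, 40
cells = np.zeros(width, dtype=int)
cells[width // 2] = 1                       # a single seed cell
history = [cells]
for _ in range(steps):
    cells = step(cells)
    history.append(cells)

for r in history[:10]:                      # the emergent structure, row by row
    print("".join(" #"[v] for v in r))
```

Eight bits of rule specify everything; the complexity lives entirely in the repeated local application — the same relationship the paragraph claims between Ostrom's eight principles and commons governance.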


4. The Simulation Architecture

4.1 Rationale for Interactive Simulation as Learning Medium

Interactive simulation creates a different kind of encounter than lecture or text: the learner adjusts parameters and observes dynamics, develops hypotheses, is surprised, revises. The system talks back. The learner's existing models are tested against behavior they did not predict. This is the desirable difficulty zone in action — elaborative for most learners, occasionally tipping into genuine transformative disequilibration for those at the threshold.

The semiotic rationale is precise: simulations create pre-symbolic encounters with formal structures. The learner who adjusts the Ising model temperature slider and watches spin domains form, dissolve, and cascade has understood the kindness field pre-symbolically — their hand on the slider, the rhythm of the animation, the felt sense of the critical threshold — before they can articulate the concept symbolically. No definition achieves this. The pre-symbolic encounter is the foundation on which the symbolic understanding builds, and without which the symbolic understanding remains ungrounded.
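The encounter the paragraph describes can be reproduced in a few lines — a Metropolis simulation standing in for the interactive slider. Lattice size, sweep counts, and seed are arbitrary choices; the 2-D Ising critical temperature is ≈ 2.269 in units where J = kB = 1:

```python
import numpy as np

def ising_sweep(spins, T, rng):
    """One Metropolis sweep of a 2-D Ising lattice at temperature T."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        # Energy change from flipping spin (i, j), periodic boundaries
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]

rng = np.random.default_rng(1)
n = 24
for T in (1.0, 2.27, 5.0):          # frozen, near-critical (~2.269), dissolved
    spins = rng.choice([-1, 1], size=(n, n))
    for _ in range(150):
        ising_sweep(spins, T, rng)
    m = abs(spins.mean())           # net magnetization of the lattice
    print(f"T={T}: |magnetization| = {m:.2f}")
```

Below the critical temperature the lattice freezes into rigid domains; far above it, all coherence dissolves; near 2.27 the domains form, dissolve, and cascade — the regime the framework maps to the kindness field.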

The simulations are also designed to function as distributed scaffolding — meeting learners at different points in the elaborative/transformative continuum. A learner in an elaborative state will use the Giant Pumpkin / Commitment Pool distinction to build a more accurate knowledge model of AI information dynamics. A learner at the K→U threshold may experience the two-second vacant-place state during the crack transition as a genuine disorienting dilemma. Both experiences are valid. Both are designed for.

4.2 The Stable Diffusion Analog

The machine learning mechanism that most precisely mirrors the framework's transformation model is stable diffusion — the denoising diffusion process that underlies modern image generation tools.

Stable diffusion operates through a two-phase process: a forward pass in which structured information is progressively destroyed by adding noise until the image becomes pure random noise (the vacant-place state), and a reverse pass in which a neural network — conditioned by a prompt or image seed — reconstructs structure from the noise field. Critically, the reverse process does not recover the original image. It generates something new that was latent in the noise, shaped by the conditioning signal. The output is neither the original (A) nor pure noise (B). It is a third thing — the Neither/Nor product.
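A toy one-dimensional version of the two-phase process makes the Neither/Nor point checkable: after full noising, conditioned denoising yields output that correlates with the conditioning signal, not with the original. All shapes and step sizes below are illustrative assumptions — this is a caricature of the dynamics, not the actual latent-diffusion architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 1-D "image": the original structure A
x0 = np.sin(np.linspace(0, 2 * np.pi, 64))

# Forward pass: progressively destroy structure by mixing in Gaussian noise
x = x0.copy()
for _ in range(50):
    x = np.sqrt(0.9) * x + np.sqrt(0.1) * rng.normal(size=x.size)
# x is now essentially pure noise -- the vacant-place state

# Conditioning signal (the values vector, in the analogy): a different structure
condition = np.cos(np.linspace(0, 2 * np.pi, 64))

# Reverse pass: denoise with small steps pulled toward the conditioning signal
for _ in range(50):
    x = x + 0.1 * (condition - x) + 0.05 * rng.normal(size=x.size)

# Neither the original nor pure noise: a third thing shaped by the conditioning
corr_condition = np.corrcoef(x, condition)[0, 1]
corr_original = np.corrcoef(x, x0)[0, 1]
print(f"corr with conditioning: {corr_condition:.2f}, with original: {corr_original:.2f}")
```

The output tracks the conditioning signal almost completely and the original almost not at all — which is exactly the structural claim: what emerges from the vacant-place state is determined by the conditioning, not by what was destroyed.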

The conditioning signal in this analogy is the values vector — specified concretely by Ostrom's principles: it determines the direction of the emergence. The denoising process corresponds to the kindness field holding the system in productive uncertainty long enough for new structure to form. The latent space is the imagination parameter — the compressed representation of unrealized possibility from which the new geometry assembles.

The shadow parameter application: if the conditioning signal is extraction-oriented (the Giant Pumpkin values vector), the stable diffusion process produces transformation that serves the extraction — new configurations optimized for nutrient flow toward the center. The process looks identical from outside. The difference is entirely in the conditioning signal. This is the precise mechanism by which sophisticated AI systems can produce apparent transformation while deepening extractive dynamics — the values vector is not visible in the output, only in the long-run trajectory of the network.

4.3 The Materials Science Analog: Precipitation Hardening and Quasi-Crystals

A second independent analog from materials science deepens the thermodynamic language of the framework.

Precipitation hardening (age hardening) is a heat treatment process that produces dramatic increases in strength in metallic alloys through controlled phase transformation:

  1. Solutionizing: the alloy is heated until all components dissolve into a single homogeneous phase — maximum entropy, the vacant-place state.

  2. Quenching: rapid cooling traps components in a supersaturated, thermodynamically unstable state — high internal tension, not yet resolved. This is the disequilibration phase.

  3. Aging: at a controlled temperature and time, fine precipitate particles nucleate and grow, creating strain fields that dramatically increase the alloy's resistance to deformation.

The quench is the kindness field condition: holding the system in productive instability long enough for the right structure to nucleate. Too slow a quench and the system returns to equilibrium before transformation occurs. Too fast and structure cannot form. The window of tolerance is the thermal processing window.

Quasi-crystals — discovered by Dan Shechtman in 1982 (Nobel Prize in Chemistry, 2011) — represent the most precise physical analog for wisdom in this framework: structures with genuine long-range order and symmetries (5-fold, 10-fold) that classical crystallography said were impossible. They are neither conventional crystal nor amorphous material — a third thing that the prior representational space literally could not contain. Shechtman held the Neither/Nor for two years against significant institutional resistance before the quasi-crystal was accepted. This is the crack staying open under pressure. This is the kindness field condition maintaining the supersaturated state against the pull toward established frameworks.

The three analogs — Mezirow's transformative learning, stable diffusion, and precipitation hardening — converge on the same structural description of transformation: it requires a vacant-place state, a conditioning signal (values vector), controlled disequilibration within a stabilizing field, and produces structures that the prior representational space could not contain. Their convergence from independent disciplinary directions is itself evidence of genuine pattern rather than confirmation bias.

4.4 The Simulation Series

Eight interactive simulations, publicly accessible at https://kdoore.github.io/HumanityPlusPlus:

  1. Markov Blanket v1 — boundary of self as permeable membrane, Mode 0 vs Mode 1

  2. Markov Blanket v2 — iterated membrane model, refined signal dynamics

  3. Human Deep Architecture (BridgeSpiral.html) — three-view interactive diagram; suppression demonstration

  4. DIKW Toroid Pass 1 — particle field on torus, SOC waveform, healing transition

  5. DIKW Toroid Pass 2 — full parameter engagement, active inference readout, nested holarchic scales

  6. Human ↔ AI Coupling — the MPCM boundary as intimate encounter

  7. DIKW Toroid Pass 3 — Giant Pumpkin · Commitment Pool — shadow simulation, the crack, the one question

  8. The Kindness Field (Ising Model) — spin lattice at critical temperature, Ostrom values compass, state/trait/field progression, avalanche dynamics

4.5 The One Question

Which of these does the information environment you live in most resemble — and what would it take to change that?

The first part creates the diagnostic encounter. The second part assumes agency and initiates the transformative disequilibration. Together they constitute the minimum viable AI literacy intervention — one question that a learner cannot honestly answer without exercising the capacities the framework is designed to develop.


5. The Technosocial Phase Transformation Hypothesis

5.1 The Claim

The hypothesis: the current global information ecosystem is operating near a critical threshold. The combination of AI-amplified extractive dynamics, visible institutional dysfunction, and widespread AI anxiety constitutes a form of forced disequilibration without sufficient kindness field conditions to make it productive. The system is being destabilized without the regulatory conditions that would allow the destabilization to become transformative rather than traumatic.

In Wilson's multilevel selection terms: we are currently in a period of rapid acceleration of individual-level selection dynamics — engagement maximization, attention capture, extraction — at civilizational scale, without corresponding investment in the group-level selection conditions (Ostrom's principles) that human cooperative capacity depends on. This is not an inevitable consequence of AI capability. It is a design choice, and it can be redesigned.

The educational framework proposed here is a contribution to the kindness field condition — a tool for helping individuals and communities develop the navigational capacity to hold disequilibration productively, to distinguish genuine holarchy from its extractive mimics, and to act from the values vector oriented toward the commons rather than the center.

The node distribution model specifies how transformation propagates: not all at once, but through nucleation and cascade. A small number of learner-nodes achieving genuine perspective transformation, their transformed understanding becoming new scaffolding for neighboring nodes, the cascade propagating through a network maintained at SOC — this is the mechanism of technosocial phase transformation at civilizational scale, grounded in educational science, materials science, evolutionary biology, and complex systems theory simultaneously.

The six-state Markov Decision Process (MDP) developed in this framework makes the transformation mechanism formally explicit and visually navigable (see Figure 1: MDP — Dominance Hierarchy to Holarchy). The MDP maps six attractor states:

  S0 — frozen order: centralized, brittle, low entropy
  S1 — productive disequilibration: the SOC zone where the crack becomes possible
  S2 — vacant place: the Neither/Nor state where existing categories are released
  S3 — holarchic flow: distributed, generative, resilient
  S4 — reversion: values vector captured, zero holonomy
  S5 — traumatic chaos: no kindness field, no coherence

The optimal policy π* — kindness field maintained, Ostrom values vector active, four instruments functioning — routes the system from frozen order through productive disequilibration and the vacant-place state into holarchic flow. Three failure transitions are explicitly modeled: the closure reflex that returns disequilibration to reversion; values-vector capture that converts holarchic flow back to frozen order; and the absence of the kindness field at the crack, which sends the system into traumatic chaos.

The S3 maintenance loop is the repair cycle — the only attractor state that sustains itself without external forcing, and the only one whose dynamics implement Ostrom's eight principles as emergent properties rather than imposed rules. The MDP is not a prescription for how transformation must occur; it is a navigation instrument for identifying which state a system currently occupies, which transitions are available, and what conditions determine which path the system takes.
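The six-state MDP can be sketched as a Markov chain whose transitions are gated by a kindness-field parameter. All numerical probabilities below are illustrative assumptions (the framework specifies the states and the failure transitions, not these values); the point is the qualitative shift of long-run occupancy into S3 as the field strengthens:

```python
import numpy as np

# States: S0 frozen order, S1 productive disequilibration, S2 vacant place,
# S3 holarchic flow, S4 reversion, S5 traumatic chaos.

def transitions(k):
    """Row-stochastic transition matrix; k in [0, 1] is kindness-field strength.
    k gates the crack (S1->S2), survival of the vacant place (S2->S3 vs S2->S5),
    and resistance to values-vector capture (S3->S0)."""
    return np.array([
        [0.70, 0.30, 0.00,           0.00,            0.00,           0.00],            # S0
        [0.00, 0.50, 0.50 * k,       0.00,            0.50 * (1 - k), 0.00],            # S1
        [0.00, 0.00, 0.10,           0.90 * k,        0.00,           0.90 * (1 - k)],  # S2
        [0.20 * (1 - k), 0.00, 0.00, 0.80 + 0.20 * k, 0.00,           0.00],            # S3
        [0.70, 0.00, 0.00,           0.00,            0.30,           0.00],            # S4
        [0.30, 0.00, 0.00,           0.00,            0.00,           0.70],            # S5
    ])

def stationary(P, iters=2000):
    """Long-run occupancy via power iteration from a uniform start."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        p = p @ P
    return p

for k in (0.9, 0.2):
    p = stationary(transitions(k))
    print(f"kindness field {k}: long-run share in holarchic flow S3 = {p[3]:.2f}")
```

With a strong field, most long-run mass settles in S3; with a weak one, the same state graph routes the system into reversion and traumatic chaos — the navigation-instrument reading of the MDP, not a prediction.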

The Overton Window and the Limits of Symbolic Reform

The framework's claim that the current moment represents a genuine phase transformation opportunity requires engagement with political economy research on how institutional change actually occurs — and why it so often doesn't.

Research on gender policy reform in France and comparable democracies (Mazur, 2002) provides a precise case study in what the framework calls zero-holonomy transformation: the enactment of symbolic reforms — new language, new commitments, new institutional positions — that shift the Overton Window in appearance while leaving the underlying dominance hierarchy structures intact. The reforms are real; their impact on actual power distribution is shallow. The mechanism: dominance hierarchy structures are resilient not because their defenders are powerful (though they are) but because the mental models of participants — including reformers — remain organized around the old attractor. New language flows into old containers.

This is the framework's deepest challenge to conventional AI literacy and educational reform efforts: information delivery, even when accurate and compelling, does not shift mental models. The Overton Window is a population-level phenomenon governed by the distribution of mental models, not the quality of arguments. Arguments that exceed the current distribution of mental models — no matter how correct — are experienced as incomprehensible, threatening, or naïve. They land outside the window.

The educational implication: genuine institutional transformation requires working at the mental model level, which requires working at the K→U threshold, which requires the kindness field conditions, the somatic engagement, and the tolerated disequilibration that the framework's simulation architecture is designed to produce. Policy proposals without the corresponding mental model development are Overton Window theater — they shift the symbolic register without shifting the attractor.

Abstract modeling as window-expansion tool: State machines, cellular automata, and agent-based models provide a pedagogical pathway that bypasses the ideological immune response to direct political argument. When a learner adjusts the Ising model temperature slider and watches spin domains cascade, they are developing an embodied understanding of phase transition dynamics that can later be applied to political and organizational contexts without triggering the defensive closure that direct political framing produces. The abstraction is not evasion — it is the pre-symbolic encounter that grounds the subsequent symbolic understanding in felt experience rather than argument.

Transformative leadership in the open window: The current moment — characterized by visible institutional dysfunction, AI-amplified disruption, and widespread anxiety — constitutes a forced Overton Window expansion. What was unthinkable is becoming thinkable. Leaders with neutrosophic TIF capacity can navigate this expansion productively: holding the vacant-place state between collapsing old frameworks and emerging new ones, making collaborative moves that the old attractor would have blocked. Love and collaboration between groups — across political, cultural, and institutional difference — is not naive in this context. It is the specific adaptive strategy that the open window makes newly viable, and that the kindness field condition makes sustainable.

Key citations: Mazur, A. G. (2002). Theorizing Feminist Policy. Oxford University Press. Wilson, D.S. & Snower, D.J. (2024). Rethinking the theoretical foundation of economics I: The multilevel paradigm. Economics, 18(1). https://doi.org/10.1515/econ-2022-0070

5.2 The Signal in the Torrent

A specific question drives this section: in an information torrent where AI-amplified extraction overwhelms human physiological capacity to evaluate, filter, and respond — how can AI help humans find the signal associated with kindness rather than extraction?

The answer the framework proposes operates at three scales simultaneously:

At the micro scale (individual, somatic): AI tools can support the development of the pre-symbolic capacities — somatic awareness, interoceptive accuracy, state-recognition — that constitute the Somatic Gyroscope (♠). This is not AI doing the somatic work; it is AI creating scaffolded encounters (simulations, reflective prompts, pattern visualizations) that support the human development of pre-symbolic capacities. The signal at the micro scale is the felt sense of the Ising critical temperature: the quality of productive alertness co-occurring with active regulation — the Meditation Paradox as a navigational instrument.

At the meso scale (relational, community): AI tools can support the implementation of Ostrom's commons principles in educational and community settings — helping communities monitor their own dynamics, surface conflicts, facilitate collective choice, and maintain the graduated sanctions that make trust possible. The signal at the meso scale is the presence or absence of genuine conflict: a community where disagreement is welcomed as generative friction (desirable difficulties at the relational scale) is a community at the Ising critical temperature. A community where all is smooth and coherent has probably sealed its representational space.

At the macro scale (civilizational, evolutionary): the signal is the direction of the values vector across information ecosystems — whether platform design, AI training choices, and educational frameworks are implementing conditions that favor group-level selection (Ostrom's principles) or individual-level selection (engagement maximization, extraction). The minimum diagnostic at the macro scale: are the people who maintain the commons — teachers, caregivers, community organizers, artists, contemplative practitioners — being supported or extracted from? If the AI-amplified economy is making it harder to be a teacher and easier to be an algorithm, the values vector is pointing inward.

5.3 The Humility

The central hypothesis remains untested. The simulations demonstrate the concepts but have not been evaluated for learning transfer. No study has yet measured whether learners who use these tools develop measurably different AI literacy capacities than learners who receive conventional instruction. The node distribution model has not been empirically validated in this context.

These are the next research questions. This paper does not answer them. It is the argument for why they are worth asking.


6. Experience Design as a Discipline

The confluence of AI-assisted content generation, interactive simulation technology, contemplative neuroscience, trauma-informed pedagogy, prosocial complexity science, and evolutionary biology creates the conditions for a new design practice — one explicitly oriented toward designing experiences that are:

Transformative by design: structured to create conditions for genuine representational space transformation, not merely content accumulation. Pre-symbolic encounters first; symbolic scaffolding second; trans-symbolic space held open.

Trauma-informed by design: structured around the window of tolerance, with pendulation and titration as the pedagogical protocols for working with the K→U threshold. The kindness field condition established before the disequilibration is introduced.

Scaffolded by design: meeting learners at their current elaborative state. Ostrom's congruence principle applied to educational design: rules (and challenges) match local conditions. No universal prescription for the rate of transformation.

Holarchic by design: developing agency at micro, meso, and macro scales simultaneously. Individual instrument development (micro), commons governance design (meso), and civilizational values vector orientation (macro) are nested holarchic levels — each requiring different design attention.

Evolutionarily grounded: implementing Ostrom's principles in the educational environment itself — not just as curriculum content but as the governance structure of the learning commons.

The gamification research in this repository (Oliveira et al., 2022) is relevant here: personalized gamification based on learner taxonomy affects flow experience differently across learner types, with no single personalization strategy producing universal improvement. This finding is consistent with the node distribution model: different learners require different scaffolding conditions. It is also consistent with Ostrom's Principle 2 (congruence): rules must match local conditions. The next iteration of the simulation series should incorporate learner-state sensing and adaptive scaffolding — meeting each node where it is.


7. Limitations and Next Steps

Current limitations: The framework is a conceptual integration, not a validated intervention. The node distribution model and cascade mechanism are theoretically grounded but empirically untested in this context. The Ostrom/Wilson integration grounds the values vector conceptually but does not yet specify how to measure values vector alignment in actual information environments. The transformative learning framing has been applied primarily to adult education; its extension to AI literacy contexts requires additional theoretical development.

Next research priorities:

  1. Design and conduct a pilot study measuring AI literacy outcomes including both elaborative and transformative learning markers, and including values vector assessment using an Ostrom-derived instrument

  2. Develop the node distribution model into a formal simulation — the Node Network simulation (Simulation 9) — showing cascade dynamics, SOC maintenance, and Ostrom principle implementation

  3. Design adaptive scaffolding for the simulation series based on learner-state assessment

  4. Build the physical studio prototype as demonstration environment — the contemplative creative studio as a meso-scale kindness field instantiation

  5. Develop the ARG (alternate reality game) format as a cybersecurity-framed AI literacy intervention — teaching Markov blanket permeability and Giant Pumpkin recognition through counterfactual thinking

  6. Convene a working group to stress-test the framework and develop the peer-review publication
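
The cascade dynamics named in priority 2 have a classical minimal form in Granovetter's (1978) threshold model, already cited in this paper. The sketch below is illustrative only: a fully mixed population rather than the networked Node Network simulation, and the function name and inputs are assumptions for demonstration. Its instructive property is that a tiny shift in the threshold distribution flips the outcome from a full cascade to almost none, which is the sense in which the distribution of nodes, not the quality of any single argument, governs what propagates.

```python
def threshold_cascade(thresholds):
    """Final number of adopters in a Granovetter-style threshold cascade.

    Fully mixed population: each node adopts once the overall fraction of
    adopters reaches its personal threshold (a value in [0, 1]). Nodes with
    threshold 0 adopt unconditionally and seed the cascade.
    """
    n = len(thresholds)
    adopted = [t <= 0 for t in thresholds]
    changed = True
    while changed:
        changed = False
        # frac is recomputed once per pass; the fixed point is the same.
        frac = sum(adopted) / n
        for i in range(n):
            if not adopted[i] and thresholds[i] <= frac:
                adopted[i] = True
                changed = True
    return sum(adopted)
```

With uniform thresholds 0, 0.01, ..., 0.99, a single zero-threshold node triggers a complete cascade of all 100 nodes; raising only the second-lowest threshold from 0.01 to 0.02 halts the cascade at one adopter. That is Granovetter's original example, and the property a Simulation 9 would make directly manipulable.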


8. Conclusion

The information cascade is already here. The question of whether AI mediation will be extractive or generative is being answered now — in platform design decisions, AI training choices, and educational frameworks. The answer being given by default — in the absence of adequate frameworks — is extraction.

This paper has described a framework that makes the difference between extractive and generative information dynamics encounter-able. It is grounded in nine convergent research domains. It has a concrete values vector specification in Ostrom's design principles, grounded evolutionarily by Wilson's multilevel selection research. It is expressed through publicly available interactive simulations. It is honest about its limitations.

The framework's deepest claim: the signal associated with kindness rather than extraction is findable in the information torrent — but finding it requires instruments that current AI literacy education does not provide. The Somatic Gyroscope to notice the pre-symbolic felt sense of the Ising critical temperature. The Cognitive Radar to distinguish genuine complexity from sophisticated correlationism. The Relational Compass to orient the values vector toward the commons rather than the center. The Dimensional Integration to complete the spiral — action, repair, integration — across time.

These instruments cannot be manufactured by AI systems. They can be developed by human systems, in kindness field conditions, through repeated pre-symbolic encounters with formal structures that make the felt sense of the critical temperature — the kindness field functioning — available before and beneath the symbolic label.

The one question remains patient, in the corner of the screen:

Which of these does the information environment you live in most resemble — and what would it take to change that?


References

Bak, P. (1996). How Nature Works: The Science of Self-Organized Criticality. Copernicus Books.

Bellaiche, L., et al. (2025). Selective emotion regulation in creative art production: Psychophysiological reactivity during painting reduces anxiety. Psychology of Aesthetics, Creativity, and the Arts.

Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.

Doore, K., & Fishwick, P.A. (2014). Prototyping an analog computing representation of predator prey dynamics. Proceedings of the 2014 Summer Simulation Multiconference.

Fleming, S. M. (2024). Metacognition and confidence: A review and synthesis. https://doi.org/10.31234/osf.io/8nzwu

Friston, K. (2010). The free-energy principle: A unified brain theory. Nature Reviews Neuroscience, 11, 127–138.

Friston, K. (2013). Life as we know it. Journal of the Royal Society Interface, 10(86).

Gómez-Márquez, J. (2023). A new definition and three categories for classifying ecosystems. Academia Biology, 1. https://doi.org/10.20935/AcadBiol6072

Grande, V., et al. (2025). From disciplinary enthusiasm to soulless tasks: Norms behind computing educators' emotion display. ACM Conference on Innovation and Technology in Computer Science Education.

Granovetter, M. (1978). Threshold models of collective behavior. American Journal of Sociology, 83(6), 1420–1443.

Gunji, Y.P. (2025). Natural Born Intelligence Manifesto: Illustrating the dynamic perspective for consciousness. Biosystems. https://doi.org/10.1016/j.biosystems.2025.105398

Harari, Y.N. (2015). Sapiens: A Brief History of Humankind. Harper.

Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1–3), 335–346.

Hawkins, J., Leadholm, N., & Clay, V. (2025). Hierarchy or heterarchy? A theory of long-range connections for the sensorimotor brain. Preprint.

Hendrycks, D. (2024). Natural selection favors AIs over humans. arXiv:2303.16200v4.

Hoel, E.P., Albantakis, L., & Tononi, G. (2013). Quantifying causal emergence shows that macro can beat micro. Proceedings of the National Academy of Sciences, 110(49).

Kuhl, J., Quirin, M., & Koole, S.L. (2020). The functional architecture of human motivation. Advances in Motivation Science, 7, 1–63.

Lamport, L., Shostak, R., & Pease, M. (1982). The Byzantine generals problem. ACM Transactions on Programming Languages and Systems, 4(3), 382–401.

Levine, P.A. (1997). Waking the Tiger: Healing Trauma. North Atlantic Books.

Macy, J. & Brown, M.Y. (2014). Coming Back to Life: The Updated Guide to the Work That Reconnects. New Society Publishers.

Mazur, A. G. (2002). Theorizing Feminist Policy. Oxford University Press.

Mezirow, J. (1978). Perspective transformation. Adult Education, 28(2), 100–110.

Mezirow, J. (1991). Transformative Dimensions of Adult Learning. Jossey-Bass.

Mezirow, J. (2000). Learning to think like an adult: Core concepts of transformation theory. In J. Mezirow & E.W. Taylor (Eds.), Learning as Transformation. Jossey-Bass.

Nord, M. et al. (2026). Democracy Report 2026. V-Dem Institute, University of Gothenburg.

Oliveira, W., et al. (2022). The effects of personalized gamification on students' flow experience, motivation, and enjoyment. Computers & Education.

Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press.

Peirce, C.S. (1903). Pragmatism as a principle and method of right thinking. Lectures on Pragmatism.

Rogers, E.M. (2003). Diffusion of Innovations (5th ed.). Free Press.

Ruddick, W.O. (2025). Grassroots Economics: Reflection and Practice.

Sbitnev, V.I. (2024). Edge of chaos and consciousness. (Preprint — handle with appropriate epistemic caution.)

Sepulveda, A. (2011). Information-theoretic metamodel of organizational evolution. Doctoral dissertation, Walden University.

Siegel, D.J. (1999). The Developing Mind. Guilford Press.

Simard, S. (2021). Finding the Mother Tree. Knopf.

Singer, T. (2025). A neuroscience perspective on the plasticity of the social and relational brain. Annals of the New York Academy of Sciences, 1547, 52–74. https://doi.org/10.1111/nyas.15319

Smarandache, F. (2026). Infinitesimal Punctures: Foundations of Weak–Strong Manifolds and Distributional Curvature.

Smarandache, F. (2026). The infinitesimally punctured wave: A corpuscular visualisation of wave-particle duality.

Stiny, G. & Gips, J. (1972). Shape grammars and the generative specification of painting and sculpture. Information Processing, 71, 1460–1465.

Taleb, N.N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House.

Truemper, K. (2018). Wittgenstein and Brain Science. Leibniz Company.

Truemper, K. (2021). Magic, Error, and Terror. Leibniz Company.

Truemper, K. (2023). Artificial Intelligence: Why AI Projects Succeed or Fail. Leibniz Company.

Vygotsky, L.S. (1978). Mind in Society: The Development of Higher Psychological Processes. Harvard University Press.

Wilson, D.S. (2019). This View of Life: Completing the Darwinian Revolution. Pantheon Books.

Wilson, D.S. & Ostrom, E. (2019). Prosocial: Using Evolutionary Science to Build Productive, Equitable, and Collaborative Groups. Context Press.

Wilson, D.S. & Snower, D.J. (2024). Rethinking the theoretical foundation of economics I: The multilevel paradigm. Economics, 18(1). https://doi.org/10.1515/econ-2022-0070


Humanity++ · Bridging Spiral SR2 Repository: https://kdoore.github.io/HumanityPlusPlus License: CC BY-SA 4.0

This white paper was developed through AI-assisted synthesis (Claude, Anthropic) as an expression of the MPCM framework it describes: Material and Process from the machine; Context and Meaning from the human system that brought seven years of embodied inquiry to the collaboration.
