References and Resources

Conceptual Grounding: How These Clusters Support TAI-KPI

The TAI-KPI framework is intentionally transdisciplinary. Rather than treating AI literacy, governance, and human development as separate silos, it weaves insights from multiple domains into a single, relational model of transformative AI. The reference clusters below show how each body of work underpins a different facet of that model.

1. Cosmic Resonance & Consciousness

The resonance and field-based models of consciousness (Forghani; Meijer and collaborators; Sbitnev) provide a macro-level ontology for TAI-KPI. They support the idea that intelligence is not just symbol manipulation or gradient descent, but a multi-scale, oscillatory phenomenon emerging from coherent patterns across physical, biological, and cognitive systems. TAI-KPI does not require learners to adopt any particular metaphysical stance; instead, this cluster legitimizes a systems-and-fields framing that makes it easier to talk about human nervous systems, AI models, and social collectives as interacting information fields with different forms of coherence and fragmentation.

References: Forghani (RFCP); Meijer & Forghani; Meijer & Geesink (2019); Meijer & Ivaldi (2025); Meijer et al. (2020); Sbitnev (2024).

2. Social Epistemology, Veritocracy & Information Integrity

Collins, Kirmayer, and Kelley et al. frame knowledge as a social and institutional process, not a static set of facts. Their work underlines how misinformation, disinformation, and structural power shape what counts as "truth" and who gets to speak with authority. TAI-KPI builds on this by treating AI literacy as epistemic literacy: learning to ask who and what is being amplified, whose experience is being erased, and how systems, both human and technical, mediate our access to reality. This cluster grounds the framework's emphasis on veritocracy, epistemic humility, and plural perspectives as core components of kindness and prosocial intelligence, not optional add-ons.

References: Collins (2024); Kirmayer (2024); Kelley et al. (2025).

3. Organizational Evolution & Information-Theoretic Meta-Models

Gould’s work on organizational evolution and Sepulveda’s information-theoretic meta-model show how organizations themselves are evolving information-processing systems with their own feedback loops, attractors, and selection pressures. This gives TAI-KPI a formal language for talking about universities, labs, cities, and community cohorts as adaptive agents, not just backdrops. It supports the framework's use of automata and modeling to help learners see how policies, incentives, trauma, and technology shape an organization’s “state transitions” over time, and how, in the long run, prosocial norms can become thermodynamically cheaper than constant conflict and repair.

References: Gould (2024); Sepulveda (2011).

4. AI Risk, Governance & Ecosystems

Hendrycks, Miotti et al. (A Narrow Path), Nichols et al., and the EDSAFE Policy Labs materials locate AI within a global risk and governance landscape that includes existential risk, agent ecosystems, infrastructure, and policy stacks. This cluster grounds TAI-KPI’s claim that AI literacy is not just about “using tools responsibly,” but about understanding the systemic trajectories that current development incentives create. It justifies the framework's focus on community- and institution-level practice (e.g., ArtSciLab, city partners, DEAL/Prosocial networks) as part of a broader effort to shape AI trajectories, rather than leaving governance to distant institutions and frontier labs.

References: Hendrycks; Miotti et al. (A Narrow Path); Nichols et al. (Secure AI Agent Ecosystem); EDSAFE Policy Labs.

5. AI Cognition, Reasoning Traces & Safety Taxonomies

Kambhampati et al.’s critique of anthropomorphizing intermediate tokens and The Compendium’s taxonomy of AI risks and safety approaches both support a non-naïve view of model behavior. They strengthen TAI-KPI’s insistence that we must distinguish human reasoning from model token dynamics, and treat chain-of-thought outputs as artifacts of training and prompting—not as windows into machine “minds.” At the same time, The Compendium provides a structured map of the AI safety landscape, which TAI-KPI can use as a scaffold for learner projects: situating local experiments in kindness, narrative modeling, and somatic literacy within an explicit global taxonomy of risks, mitigations, and research agendas.

References: Kambhampati et al. (STOP ANTHROPOMORPHIZING…); Leahy et al. (The Compendium).

6. Education, Policy & Practice Bridges

Kelley et al.’s editorial on AI in higher education and the EDSAFE Policy Stack materials function as bridges between theory and practice. They show how concerns about equity, governance, and epistemic integrity are already being taken up in educational and policy settings, even if imperfectly. TAI-KPI positions itself in this space as a pragmatic, design-oriented framework: one that can be implemented in courses, labs, and community workshops, while also informing how institutions think about policy stacks, capacity-building, and intergenerational learning around AI.

References: Kelley et al. (2025); EDSAFE Policy Labs.
