Entry question
Layer 1 · No prior knowledge required
Before anything else — a question
Think of the last time an AI told you something that felt wrong, weird, uncomfortable, or just off.
Maybe it was a confident answer that turned out to be false. Maybe it was a recommendation that felt strangely manipulative. Maybe it was a deepfake you weren't sure about. Maybe it was an article that made you anxious about what AI is doing to jobs, privacy, children, democracy, or the climate.
What did you do next?
Most people do one of four things:
They ignore the feeling and move on.
They accept the AI output anyway, feeling vaguely guilty.
They spiral into anxiety or anger.
Or they shut down and avoid thinking about it.
None of these responses is wrong. Each is a completely understandable human reaction to an environment that is genuinely disorienting. But none of them is very useful — and none helps you learn anything from the experience.
This framework offers something different to do with that moment.
What we call that moment
We call it an event.
An event is any moment of expectation mismatch — when AI (or any information system) produces something that disrupts your sense of what is real, true, safe, or possible. Disorientation. Shock. Moral discomfort. Sudden urgency. Numbness. Outrage.
Events are not glitches. They are not personal failures. They are inevitable features of stochastic systems at scale — which is a technical way of saying: AI systems will always, eventually, produce surprising and sometimes harmful outputs. Not because they are broken. Because of how they work.
The question is never whether events will happen.
The question is: do you have the skills to learn from them without spiraling, shutting down, or simply complying?
Three things that help
Across emergency medicine, wilderness navigation, trauma-informed care, and ancient wisdom traditions, humans have converged on the same three moves for meeting disruption when it arrives.
Knowing the moves is not enough, though. In a world where AI is accelerating the speed, scale, and persuasive power of information, they need to be practiced regularly.
Why art, story, and play belong here
Long before literacy, humans navigated VUCA conditions (volatile, uncertain, complex, ambiguous) — volatile harvests, uncertain winters, threatening strangers — using song, ritual, fable, and play. Stone Soup. Anansi the Spider. The Emperor's New Clothes. These stories exist across cultures not by accident but because they encode exactly these three moves in forms that bypass defensiveness and reach directly into the nervous system.
Humor, in particular, is a co-regulation technology. It requires holding two contradictory things at once — which is precisely the skill that Radar training develops. It interrupts the anxiety response physiologically. It makes the unbearable survivable to look at.
This is why this framework uses cards, stories, and play rather than lectures and tests. Not because the ideas are simple. But because the ideas need to land in the body before they can change behavior.
The article exercise
One of the most powerful entry points into this framework is simple.
Find any article that makes you feel anxious, angry, confused, or hopeless about AI. It could be about surveillance, deepfakes, job displacement, algorithmic bias, AI in schools, AI in healthcare, climate costs of data centers — anything that activates a real response in you.
Bring it to a session.
We will use it as an event card — the starting point for practicing all three moves together.
You don't need to have an opinion about the article before you arrive. You don't need to agree with it or disagree with it. You just need to have a genuine reaction to it.
That reaction is where learning begins.
The Framework — Three Panels, One Cycle