I want to be upfront about something: this is the post I find hardest to write.
Not because the scenarios are implausible — some of them are already unfolding in slow motion, and others have been part of strategic risk discussions for decades. The discomfort is different. It’s the feeling of naming things out loud that most of us would rather leave unspoken.
There’s a version of this conversation that slides quickly into conspiracy territory — bunkers and tinfoil hats and the particular energy of people who’ve decided the collapse is definitely coming and are secretly glad about it. I want no part of that conversation. I’m not an expert in geopolitics, infrastructure security, or military strategy. What I am is someone who’s been paying close attention to where AI development is heading, and who has come to believe that the question “are you prepared for disruption?” is worth asking seriously — without pretending to know exactly what form that disruption takes.
So let’s look at the map.
These risks aren’t new. What’s changed is everything else.
Most of the scenarios that make preparation relevant weren't invented by AI. Supply chain fragility, geopolitical instability, biological threats, economic dislocation — these predate AI by decades or centuries. What AI changes comes down to three things:
Speed. Disruption that might have unfolded over years can now unfold over weeks. Institutions built to respond at human pace are increasingly unprepared for machine-speed change.
Scale. AI enables a single actor — a state, a corporation, even a small group — to cause disruption at a scale that previously required massive resources. The attack surface grows.
Cascading failure. AI makes systems more interconnected, and more interconnected systems can fail in ways that simpler systems couldn't. A disruption in one area ripples into three others before anyone has time to respond.
None of this means catastrophe is inevitable. It means the conditions that could make everyday life significantly harder are more plausible — and can arrive faster — than at any previous point in history. That’s the context for what follows.
A map of conditions
These aren’t predictions. They’re categories of disruption that range from “already happening in milder forms” to “possible but extreme.” I’m presenting them plainly, not to alarm, but to give the question some shape.
Economic dislocation
Rapid AI-driven job displacement is already happening in white-collar knowledge work. The effects on employment are not theoretical. When large numbers of people lose income faster than institutions can respond — welfare systems, retraining programs, new industries — the downstream effects include mortgage stress, reduced purchasing power, and eroded social stability. You don’t need economic collapse for this to matter. Significant dislocation is enough.
Supply chain fragility
Most cities carry three to seven days of food in their logistics systems at any given moment. Supermarket shelves look abundant; the warehouse behind them is lean by design. Just-in-time supply chains are extraordinarily efficient in normal conditions and extraordinarily fragile under stress. COVID gave everyone a small preview: empty shelves within days of a disruption that, in the grand scheme, was relatively contained. Something more serious — a sustained infrastructure failure, a major geopolitical shock, a series of simultaneous stresses — would expose that fragility quickly.
Cyberattacks on critical infrastructure
Power grids, financial systems, water treatment, logistics networks — all are increasingly managed by software, and all have been demonstrated attack surfaces for state-sponsored cyber operations. AI makes offensive cyber capabilities significantly more powerful and more accessible. A successful sustained attack on a power grid doesn’t just turn off lights; it disables refrigeration, water pumping, communications, hospitals, and the logistics systems that keep food moving. The scenario doesn’t require open war — just a motivated actor and a vulnerability.
AI-enabled conflict
Nations are in an active AI arms race, and the implications for conflict are significant. AI-enabled weapons, autonomous systems, and surveillance change the nature of warfare. AI-enabled disinformation changes the stability of civil society. Neither of these is speculative — both are present in existing conflicts today. The question is one of scale and escalation.
Biological threats
COVID demonstrated something important: a biological disruption can reshape daily life globally, within weeks, without warning. The same AI capabilities that accelerate medical research and drug discovery also lower the barrier to developing dangerous pathogens — whether through state programs, non-state actors, or accident. This is a documented concern among biosecurity researchers, not a fringe idea. The lesson from COVID isn’t that pandemics are inevitable. It’s that they’re possible, and that preparation matters.
Societal breakdown and AI misalignment
This one is harder to make concrete, because it depends on technical outcomes that remain genuinely uncertain. The spectrum runs from AI being used by powerful actors to concentrate wealth and power in ways that erode democratic institutions, through to more serious failure modes where AI systems pursue goals that don't align with human welfare. Even the less dramatic end of that spectrum — AI accelerating the erosion of shared institutions and social trust — is worth taking seriously. For more on what alignment actually means and why it's difficult, see this post.
The thread connecting all of this
What links these scenarios isn’t that they’re all AI problems. Most of them aren’t, at their root.
What links them is that modern life is built on systems — supply chains, financial networks, energy infrastructure, social institutions — that are more fragile than they appear, and more interconnected than most people realise. They work remarkably well when everything is functioning. They fail in ways that cascade unpredictably when something goes wrong.
The honest answer to “what conditions make preparation important?” is: any conditions under which those systems face significant stress. That’s been true for a long time. What AI has done is expand the range of stresses those systems might face, and compress the time available to respond.
What “sensible preparation” actually means
Let me be clear about what I’m not saying.
I’m not saying build a bunker. I’m not saying stockpile tinned beans for five years. I’m not saying the collapse is coming and smart people are getting ready while everyone else sleeps.
What I am saying is this: a household with a few weeks of food and water, a community where people know their neighbours, a local food system with some diversity and resilience — these aren’t paranoid. They’re the equivalent of having insurance. You buy insurance not because you expect disaster, but because you’re the kind of person who thinks ahead.
Resilient preparation looks less like individual stockpiling and more like community capacity:
- Trust — do the people in your community know and look out for each other?
- Care — when things get hard, does your community have the instinct to help?
- Shared planning — have you had the conversation, as a household or a community, about what you’d do if things were disrupted for a week? A month?
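If the shared-planning conversation needs a concrete starting point, here's a back-of-envelope sketch of what "a few weeks of food and water" actually means in numbers. The per-person figures (roughly four litres of water and 2,000 kcal of food per day) follow common emergency-planning guidance; treat them as assumptions to adjust, not prescriptions.

```python
# Back-of-envelope household supply estimate.
# Assumed baselines (adjust for your household): ~4 litres of water
# and ~2,000 kcal of food per person per day, in line with common
# emergency-planning guidance.

WATER_LITRES_PER_PERSON_DAY = 4
KCAL_PER_PERSON_DAY = 2_000

def supply_estimate(people: int, days: int) -> dict:
    """Rough totals for a household of `people` over `days` of disruption."""
    return {
        "water_litres": people * days * WATER_LITRES_PER_PERSON_DAY,
        "food_kcal": people * days * KCAL_PER_PERSON_DAY,
    }

# Example: a household of four, planning for two weeks.
print(supply_estimate(4, 14))
# → {'water_litres': 224, 'food_kcal': 112000}
```

Even this crude arithmetic is useful: 224 litres of water is several large containers, not a cupboard shelf, which is exactly the kind of thing a household conversation surfaces before a disruption does.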
Regionally, this might look like ensuring local farms grow food people can actually eat — not monoculture timber or export crops — and that knowledge of food preservation and local supply stays alive. In cities, the answers will look different, but the underlying question is the same: how connected and self-sufficient is your immediate community when it needs to be?
There’s no single right answer. Every household and community starts from a different place, with different resources and vulnerabilities. What matters is that you’ve started asking.
I’m not the expert. Go find out.
I want to be honest about the limits of this post. I’m not a risk analyst, a food systems expert, or a military strategist. I’ve done enough reading to believe these questions are worth taking seriously — but the detailed answers are out there, written by people who know far more than I do.
Some places worth your time:
- Your local agricultural extension services or council — practical information about local food systems and community resilience often lives closer than you’d think
- The Maslow’s Hierarchy post — for a framework to organise your thinking about what to prioritise
- ALLFED (allfed.info) — research on how catastrophic events impact food systems globally
- Your neighbours — seriously. The most practical knowledge about local resilience is in the people around you
Educate yourself on the scenarios that feel most relevant to your situation. Then have the conversation — with your household first, then your wider community. You don’t need to become an expert. You need to be someone who’s thought about it, and who helps the people around them think about it too.
That’s where preparation actually begins.
These are the questions we discuss at our monthly meetups — no expertise required.