Eighty-two million people read an essay comparing AI to COVID. The metaphor is seductive, but it breaks down where it matters most — agency. A virus is something that happens to you. A technology is something you shape.

Tags: AI, society, agency, technology strategy
THE FEBRUARY MOMENT

By Amir H. Jalali · 8 min read

In February 2026, eighty-two million people read an essay comparing artificial intelligence to COVID-19. Matt Shumer, the CEO of HyperWrite, wrote "Something Big Is Happening" and posted it on X. Within days it had been reposted thirty-six thousand times. The piece argued that we are in the early stages of a transformation bigger than the pandemic, and that most people are underestimating it the same way they underestimated the virus in January 2020.

I've been thinking about why that essay traveled so far, so fast. Not because the argument was airtight. It wasn't. But because it gave people something they were desperate for: a frame. A way to orient themselves in a moment that feels increasingly unorientable. Eighty-two million views is not a measure of how persuasive the essay was. It's a measure of how badly people needed someone to tell them what was happening.

That hunger is the real story. Not the metaphor itself.

The pandemic comparison is seductive because it promises a shape to events. COVID had a recognizable narrative arc. Denial, alarm, lockdown, adaptation, aftermath. We lived through it. We know how it felt. And so when someone says "AI is going to be like that, but bigger," there's something almost comforting about the framing. At least we've rehearsed the emotional sequence. At least we know what overwhelm feels like.

But the metaphor breaks down almost immediately if you push on it. A virus doesn't have a product roadmap. You can't file regulatory comments about a pathogen. Nobody held a Senate hearing to ask influenza what its intentions were. COVID was a force of nature that descended on eight billion people simultaneously. AI is a set of tools being designed, funded, deployed, and refined by specific people making specific choices on specific timelines. The difference is not subtle.

Shumer himself seemed to sense this. He later said he wished he'd rewritten parts of the essay, that it wasn't meant to scare people. Which is a strange thing to say about a piece that explicitly compared the current moment to the early days of a pandemic that killed millions. The corrective tells you something about the original: it was written in the register of revelation, not analysis. It was a feeling dressed as a forecast.

The Washington Post published a sharp editorial calling it what it was: a reality check on AI hype borrowing the emotional weight of a genuine catastrophe. Fortune went further, arguing that the comparison got essentially nothing right beyond the basic observation that something significant is happening. These rebuttals were correct on the merits. But they missed the deeper question. Why did tens of millions of people prefer the wrong metaphor to no metaphor at all?

I think it's because the alternative is harder. The alternative is sitting with genuine uncertainty. The alternative is admitting that nobody — not the CEOs building these systems, not the researchers training them, not the policymakers struggling to regulate them — has a reliable map for where this goes. The pandemic metaphor provides the illusion of pattern recognition. And pattern recognition, even false pattern recognition, is more comfortable than staring into the fog.

Here's what the pandemic and AI actually share: the gap between what a small number of people understand and what most people feel. In early 2020, epidemiologists were modeling exponential spread while most of us were still deciding whether to cancel dinner plans. In early 2026, AI researchers watch models contribute to their own development — OpenAI's latest system participated in creating its own successor — while most people are still figuring out whether ChatGPT can reliably summarize a PDF. The knowledge gap is real. The disorientation is real. The feeling of events moving faster than comprehension is real.

But here's where the analogy doesn't just break down. It becomes dangerous.

When you frame something as a pandemic, you implicitly frame people as patients. Or at best, as bystanders. COVID demanded compliance, not agency. Wash your hands. Wear a mask. Stay home. Wait for the vaccine. The correct response to a virus was to follow instructions from people who understood it better than you did. If AI is a pandemic, then the correct response is the same: defer to the experts, follow the protocols, wait for someone to develop the institutional equivalent of a vaccine.

But AI is not a virus. It is a technology. And technologies are shaped by the societies that adopt them. Every decision about AI — what it's trained on, who has access to it, what it's allowed to do, who profits from it, who bears the risks — is a human decision. These decisions are being made right now. They're being made in boardrooms and legislatures and standards bodies and open-source communities. They're being made every time a company decides to deploy a model, every time a school decides to ban or embrace one, every time a government decides what to regulate and what to leave alone.

You can't negotiate with a virus. You can shape a technology. That distinction matters more than anything in Shumer's essay.

The deeper worry I keep coming back to is not that AI is like COVID. It's that we're responding to it the same way. Not in the substance of our response, but in the posture. Most people are waiting. Waiting to be told what this means. Waiting for the narrative to settle. Waiting for the experts to agree. Waiting for the institutions to catch up. Waiting, essentially, for permission to have an opinion.

I've noticed this in conversations with smart, engaged people who follow the news and care about the future. They'll say things like "I'm just trying to figure out what's going to happen." As though what's going to happen is a weather forecast. As though the trajectory of AI is something you watch from the window rather than something you participate in shaping. The passivity isn't laziness. It's a rational response to feeling outmatched by the pace of change. But it is still passivity. And the cumulative effect of millions of people adopting that posture is that the direction of the technology gets determined by a much smaller group of people — the ones who didn't wait.

This is not a new dynamic. It's how most consequential technologies have unfolded. The early internet was shaped by a tiny fraction of the people who would eventually use it. Social media was designed by a handful of companies while billions of users showed up after the architecture was already set. The pattern is not conspiracy. It's just the default. Most people arrive after the decisions have been made and then live with the consequences.

What makes this moment different — what makes the February moment genuinely significant, even if the essay that defined it was flawed — is that the window for shaping AI is still open, and more people are aware of it than were ever aware of the equivalent window for previous technologies. Eighty-two million people reading an essay about AI is not, in itself, agency. But it is attention. And attention is the precondition for agency.

The question is what happens next. Does the attention convert into engagement? Do people move from reading essays to reading legislation? From feeling anxious to getting specific about what they want and don't want? From outsourcing their understanding to building their own?

I don't know. I genuinely don't. I've seen people use these tools with remarkable creativity and intentionality, building things that wouldn't have existed without them, extending their capabilities in ways that feel genuinely emancipatory. I've also watched people surrender their judgment to the first confident-sounding output a model generates, trading one form of deference for another. Both of these things are happening simultaneously, and neither is winning.

Maybe the one thing Shumer got right — not in the essay, but in the instinct behind it — is that the moment demands a response. Not the response he prescribed. Not panic, not breathless acceleration, not the grim certainty that everything is about to change and you'd better get on board. But a response nonetheless. The worst outcome isn't that we respond incorrectly. It's that we don't respond at all. That we mistake reading about AI for reckoning with it. That we let the February moment pass the way most months pass — noticed, discussed, and ultimately absorbed into the scroll.

The pandemic metaphor fails. But the urgency behind it doesn't. Something is happening. It isn't happening to us. It's happening because of us, and through us, and the question of whether it happens for us depends entirely on whether we decide to participate in answering it.

I keep coming back to a simpler observation. The virus didn't care what we thought about it. AI is, in some meaningful sense, a mirror of what we think about everything. What we train it on, what we ask it for, what we accept from it — these are reflections of collective choices, even when they don't feel like choices. Especially when they don't feel like choices.

Whether we exercise that agency well is not a question I can answer. It's barely a question I can frame. But I'm fairly sure that the answer begins with refusing the comfort of the pandemic metaphor — the one that lets us be patients instead of participants. The one that lets us wait for the all-clear instead of deciding, right now, while the window is still open, what we actually want this to become.