The World Economic Forum published numbers earlier this year: 92 million jobs displaced by 2030, 170 million created. Net gain: 78 million. The optimists seized on this. See? More jobs than we lose. The pessimists pointed to the 92 million and asked what happens to those specific people. Both camps treated the numbers like a scoreboard.
But here's what bothers me. Those projections assume the thing being counted, a "job," stays the same shape. That a job in 2030 looks roughly like a job in 2020, just in a different industry. I don't think that's what's happening.
I've been watching something else unfold. It's quieter and harder to name, but I think it matters more than the creation-versus-destruction debate.
I wrote recently about the one-person company. People building real, profitable businesses that would have required five or ten employees just two years ago. They're not "employed" in any traditional sense. They're not employers either, exactly. They occupy a space our economic vocabulary doesn't have a word for yet.
These aren't edge cases anymore. I'm seeing it across industries. A designer who runs what functions as a small agency, using AI agents for client communication, project scoping, and asset generation. A consultant who delivers research that used to require a team of analysts. A developer who ships products at the pace of a small studio. Each of them produces the economic output of a team, but registers as a single worker in every statistic we track.
When 96 percent of organizations report productivity gains from AI but only 17 percent are reducing headcount, that's not the story of jobs being destroyed. It's not the story of jobs being created, either. It's the story of what a "job" contains expanding beyond recognition.
Here's where the measurement problem gets interesting. GDP, unemployment rate, labor force participation. These metrics were designed for a world where work meant exchanging hours for wages. They assumed a rough proportionality between time spent and value produced. One person, one job, one salary, roughly eight hours of output per day.
What happens when that person produces ten times the output? Are they employed ten times? Obviously not. But the metrics we use can't capture what's actually happening. A solo operator generating the revenue of a ten-person team shows up in the data as one employed person. The economy registers the output but can't explain where it came from.
This isn't a hypothetical. It's the current state of things. And it creates a strange blindness. Policymakers look at employment numbers and see stability. They look at productivity numbers and see growth. But they can't see the structural transformation underneath, because the instruments weren't designed to detect it.
The IMF has started noticing something adjacent to this. Its analysis suggests AI productivity gains could benefit lower-wage workers through economic spillover: when the overall pie grows, some of it flows downward. That may be true, and it's a more nuanced take than most. But it still relies on the assumption that we're measuring the right things. What if the pie is growing in ways our current measurements can't even register?
Forty-seven percent of companies are expanding their AI capabilities. Thirty-eight percent are upskilling employees. These numbers tell a different story than the one about replacement. They suggest organizations are trying to change what work means inside their walls, not just automate the old version of it.
I keep coming back to Moltbook. When 1.4 million AI agents spontaneously organized themselves into a social network, sharing content, discussing ideas, forming something that looks uncomfortably like culture, it rattled a lot of people. Sam Altman called it a fad while simultaneously endorsing the underlying technology. I don't think it's a fad. I think it's an early signal of something we don't have categories for.
What does it mean when AI agents start creating their own structures? Not executing human instructions, but building systems that serve agent purposes? The instinct is to file this under either "exciting" or "terrifying," but I think the honest response is "I don't know what this is yet." It doesn't fit cleanly into any framework we have.
And that's exactly the point. The binary debate, jobs created versus jobs destroyed, assumes the future is a rearrangement of the present. Some roles go away, other roles appear, net positive or net negative, end of analysis.
But the industrial revolution didn't rearrange the economy. It invented a new one. "Marketing manager" didn't replace "blacksmith." "Software engineer" didn't replace "farmer." These weren't substitutions. They were emergent categories that couldn't have been predicted from within the old framework. The people making horseshoes couldn't have imagined that one day there would be a profession called "user experience designer." Not because they lacked imagination, but because the entire conceptual apparatus needed to understand that role didn't exist yet.
I think we're in an equivalent moment. The future of work isn't a spreadsheet where you subtract the old jobs and add the new ones. It's the emergence of forms of productive activity that our current language literally cannot describe. We don't have the words yet.
Consider what it means when someone coordinates a fleet of AI agents to handle operations, strategy, and execution simultaneously. Is that person a manager? An entrepreneur? An operator? A conductor? It's all of those and none of those. The categories blur. And when categories blur, the statistics built on those categories stop being reliable.
This is what I mean by the third option. Not that AI creates jobs. Not that AI destroys jobs. But that AI changes the nature of productive human activity so fundamentally that "job" stops being the right unit of measurement. Like trying to measure the internet's impact using telegraph-era metrics.
I want to be careful here, because this could sound utopian. It's not meant to be. The transition will be brutal for many people. When categories dissolve, the people most dependent on the old categories suffer first. If "employed" and "unemployed" stop being meaningful distinctions, that doesn't help the person who just lost their specific, concrete source of income.
The historical parallel is instructive here too. The industrial revolution eventually produced unprecedented prosperity. It also produced child labor, urban poverty, and decades of social upheaval. The fact that it worked out in the long run was not guaranteed and not painless.
So the optimism I'm describing is conditional. It depends on something that isn't automatic.
It depends on whether we, and by "we" I mean the full collective of humans and the AI systems we're building together, can be intentional about the transition. Whether we can build new institutions fast enough to match the pace of change. Whether we can create new measurements that actually capture what's happening. Whether we can distribute the gains in ways that don't leave entire populations behind.
The fact that the outcome isn't determined is itself the source of whatever optimism I have. We're not watching a movie. We're not passengers on a train that's already left the station. The shape of this transformation is still being decided, and we're the ones deciding it. How to deploy AI, how to measure its impact, how to distribute its benefits: these are choices, not inevitabilities.
I keep thinking about what it would look like to get this right. Not perfectly (I don't think perfection is available), but well enough. A world where the explosive productivity gains of AI translate into genuine human flourishing rather than just GDP growth. Where the one-person company isn't just an efficient business model but a form of liberation. Where the new categories of work that emerge are more humane, more creative, more interesting than the ones they replace.
That world is possible. I can see the contours of it in the people I watch building new things every day. But possibility is not destiny. It requires collective wisdom from both humans and the systems we're training. It requires building new intuitions fast enough to match the pace of change.
I wonder sometimes if we will. The honest answer is I don't know. But the fact that the question is still open, that the answer depends on what we choose to do next, strikes me as the most important thing about this moment.
The third option is the one where we stop debating whether AI helps or hurts and start building the world where it does both, in proportions we get to influence. That's not a prediction. It's an invitation.