You’re probably familiar with the dead internet theory: most of what you encounter online is now generated by bots, for bots, with humans reduced to a shrinking audience for machine-generated noise. Last year, over half of new content on the internet was AI-generated. The humans are still there, scrolling, but the thing they’re scrolling through has become a performance staged by machines for an audience that hasn’t yet realized the show isn’t for them.
It’s utterly desiccating to log onto these spaces seeking a live mind to joust and think with, and to find instead a relentless stream of slop. Promised an age of superconnectivity, we’ve let our shared physical spaces wither, only to find that our promised digital commons is one large billboard, increasingly written and read by bots.
That’s bad enough. I want to talk about something worse. Call it the dead economy theory.
The AI industry has a numbers problem.
OpenAI, Anthropic, Google DeepMind, Meta AI, Microsoft: the combined investment in large-scale AI infrastructure now runs into the hundreds of billions of dollars, with projections into the trillions over the next decade. OpenAI alone has been valued at north of $800 billion. Anthropic, which has yet to produce a single year of profit, commands a valuation in the same stratosphere. These numbers need an addressable market large enough to justify them.
There is only one market that large: the global labor market.
As we’re getting excited about discovering how to use claude.md files in Cowork, the industry is pitching a different reality. Every investor presentation of an AI agent “doing the work of ten analysts” is telling you the same thing: the product is labor replacement. The gentler language (“copilot,” “assistant,” “augmentation”) is marketing. The financial model underneath requires the elimination of human cost centers at civilizational scale. If it doesn’t do that, these companies are the most overvalued assets in the history of capitalism. The people writing the checks are not in the habit of lighting trillions of dollars on fire for a better autocomplete and an endless proliferation of longer and longer memos that nobody reads.
The AI companies now construct their own benchmarks to prove the point. OpenAI’s GDPval benchmark measures how well models perform across forty-four occupations, from real estate broker to news analyst. The AI Productivity Index evaluates models against four specific professional roles: investment banking associate, management consultant, Big Law associate, primary care physician. These are targeting reticles aimed at the professional class. As an OpenAI evaluation lead told the New York Times,¹ models now achieve “over an 80 percent win rate compared to human professionals” on tasks that, months earlier, no model could match. A former banker on the research team “keeps being shocked by how much of her old work the models can do.”
So let’s take them at their word. Assume the technology works as advertised, that AI systems become capable of performing most cognitive labor at a fraction of the cost of human workers. What happens next?
Follow the money through three turns.
Turn one: a company licenses AI to replace a significant portion of its workforce. Costs drop. Margins expand. The stock price goes up. Everyone on the earnings call is happy. When Block’s Jack Dorsey laid off nearly half his workforce in March, citing AI coding agents, investors responded with a twenty-five percent stock price surge in after-hours trading. The market rewarded the elimination of human labor with an immediate, massive transfer of value to shareholders.
Turn two: the replaced workers stop earning income. They cut spending. The businesses they used to patronize see revenue decline. Some of those businesses also adopt AI to cut costs, compounding the displacement. Consumer demand contracts across the economy.
Turn three: the company that fired its workers to save money discovers that its customers were, in aggregate, other companies’ workers. Revenue growth stalls. The AI subscription that was supposed to be an investment in efficiency turns out to be a contribution to the destruction of its own market.
Economists Brett Hemenway Falk and Gerry Tsoukalas at Wharton recently described this dynamic in a paper aptly titled “The AI Layoff Trap.” In competitive markets, an automating firm captures the full cost savings from replacing workers but bears only a fraction of the resulting demand destruction. In a market with twenty competitors, each firm feels one-twentieth of the demand it destroys; the rest falls on rivals. This creates a prisoner’s dilemma: every firm rationally automates beyond the socially optimal level, because the individual incentive to cut labor costs always outweighs the diffuse, shared consequence of eliminating consumer spending. Better AI makes this worse. Improved productivity widens the profit gap from automating faster than your competitors, intensifying the arms race toward collective ruin.
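The arithmetic of the trap fits in a few lines. Here is a toy sketch (my own illustrative numbers, not the paper’s model): twenty identical firms, each choosing whether to automate; automation captures the full labor savings, but the demand it destroys is spread evenly across the whole market.

```python
# Toy illustration of the layoff trap. The numbers are hypothetical,
# chosen only to show the incentive structure, not taken from the paper.
N = 20            # number of competing firms in the market
savings = 100     # labor cost one firm saves by automating
demand_hit = 400  # total consumer spending destroyed by that firm's layoffs

# Private calculus: the automating firm keeps all the savings
# but feels only 1/N of the demand it destroys.
private_gain = savings - demand_hit / N   # 100 - 20 = +80, so it automates

# Social calculus: the market as a whole absorbs the full demand loss.
social_gain = savings - demand_hit        # 100 - 400 = -300

print(f"private gain per automating firm: {private_gain:+.0f}")
print(f"social gain per automation:       {social_gain:+.0f}")

# If every firm follows its private incentive, each one ends up bearing
# its 1/N share of all N firms' demand destruction -- the full hit.
payoff_when_all_automate = savings - (demand_hit / N) * N
print(f"each firm's payoff when all automate: {payoff_when_all_automate:+.0f}")
```

Automating is individually rational (+80) even though it is collectively ruinous (-300), and when every firm does it, every firm lands at the ruinous number. That is the prisoner’s dilemma in the paragraph above.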
Sometimes the layoffs happen before executives even know whether AI will do the job. Zoë Hitzig, an economist who previously worked at OpenAI, told the Times: “When chief executives are saying they’re cutting jobs because of A.I., other people feel like they have to too. That dynamic could make the changes happen sooner than efficiency would dictate.” Herd behavior dressed in the language of innovation.
Henry Ford understood, perhaps apocryphally but correctly in principle, that his workers needed to earn enough to buy his cars. The AI economy is eliminating the workers and expecting the cars to keep selling, except that software has near-zero marginal cost, so the entire value proposition is the elimination of the human cost center. The product is the removal of the customer base.
The optimists will tell you this is just productivity gains. The economy has absorbed automation before; agricultural employment collapsed from ninety percent of the American workforce to two percent and civilization continued. David Autor at MIT has shown that roughly sixty percent of today’s jobs didn’t exist in 1940. New technologies create new categories of work. True. But there’s a difference between an observation about the past and a law of nature, and the optimists consistently confuse the two. The agricultural transition took a hundred and forty years. Carl Benedikt Frey at Oxford has documented that the Industrial Revolution took seventy years before wages and employment recovered for the workers it displaced. In the interim, wages stagnated, the labor share of income collapsed, profits surged, inequality skyrocketed, and the political consequences included the Chartist movement and widespread social upheaval. As Frey puts it: “Most economists will acknowledge that technological progress can cause some adjustment problems in the short run. What is rarely noted is that the short run can be a lifetime.”
Compare that timeline to the one the AI industry is working on. Bharat Ramamurti, a former deputy director of the National Economic Council, has drawn the parallel to the China shock, the wave of manufacturing job losses that reshaped American politics when production moved overseas. “The China shock unfolded over several years, whereas this could happen over two years,” he told the Times. “These companies have spent so much money developing models that there’s going to be immense pressure on them to generate revenue through quick adoption.”
Previous automation replaced specific tasks within jobs. The power loom replaced hand weaving, the spreadsheet replaced manual calculation, etc. In each case, the technology was narrow. General-purpose AI threatens cognitive labor comprehensively, across every industry, simultaneously. The economist Wassily Leontief saw this coming in 1983 when he compared human labor to horses. The US horse population grew from nine million in 1840 to twenty-one million by 1900, seemingly immune to technological change. Within sixty years of the internal combustion engine, the population collapsed by eighty-eight percent. The horses weren’t retired out of malice. They became uneconomical to keep. Leontief’s point was that there is no economic law preventing the same thing from happening to humans.
Daron Acemoglu, who won the Nobel Prize in Economics in 2024 and is the most rigorous voice on this topic, has found that between 1987 and 2017, “the displacement effect of new technologies far outweighed their productivity and reinstatement effects.” The new tasks did not materialize fast enough to absorb the displaced workers. His assessment of AI is more pointed still: firms are deploying what he calls “excessive automation,” using AI to kill jobs without generating significantly lower production costs, while imposing substantial social costs. The technology, in many applications, isn’t good enough to justify the displacement it causes. Automation for the sake of the stock price, not for genuine productivity.
Who is the customer when the customer is the thing you’ve eliminated?
An economy that doesn’t need human labor is a political crisis of a kind democratic systems have never faced.
Democratic governance rests on a bargain so old we’ve forgotten it’s a bargain at all. The governed have something the governors need: labor, tax revenue, military service, consumer spending. This dependency is the source of democratic leverage. The whole system functions because power is distributed, and it’s distributed because the people at the top need something from the people at the bottom.
Remove labor from that equation and watch what happens.
When value is generated by AI systems owned by a handful of corporations already world-class at tax optimization, every fiscal mechanism of democratic governance starves at once. The tax base erodes. Collective bargaining becomes vestigial (employers who don’t need employees don’t bargain with them). Consumer spending, which depends on labor income, contracts. Piketty’s r > g, the engine of wealth concentration, accelerates because AI severs the last link between capital accumulation and the need for human labor as a production input. Without redistribution, as one analysis of the framework put it, “approximately everything will eventually belong to those who are wealthiest when the transition occurs.”
And the public funded the research that made it possible. The transformer architecture, large-scale training methods, semiconductor advances: all of these were publicly or quasi-publicly funded through universities, DARPA, and national labs. The public bore the risk. Private companies captured the reward. This has been the pattern across technological advancement for the last sixty years. As Mariana Mazzucato puts it, “AI risks becoming another engine of rent extraction rather than value creation.” We subsidized the revolution and are now being told to accept displacement as the cost of progress that someone else profits from.
You can still vote (and please do, for people who get this shit and are willing to try to stop it). But what you’re voting over is the disposition of a shrinking pool of resources, while the real economy operates in a parallel system you increasingly have no input into.
The people building these systems understand this perfectly. Dario Amodei, the CEO of Anthropic, has said it on the record: “The balance of power of democracy is premised on the average person having leverage through creating economic value. If that’s not present, I think things become kind of scary.” The CEO of one of the three leading AI companies is telling you that the technology he is building will undermine the material basis of democratic governance. He sees the problem. He is building the thing that causes it. His company has not endorsed a single piece of legislation to address it. When asked about policy advocacy, Anthropic co-founder Jack Clark described it as “the end of a very, very long chain of work.”
Peter Thiel wrote in 2009 that he no longer believed freedom and democracy were compatible. The logic runs: democratic systems produce regulation, redistribution, and accountability, all of which create friction on the ability of exceptional people to reshape the world. If you believe you’re building the most transformative technology in human history, democratic oversight is an obstacle. Note: he isn’t talking about your or my freedom. We don’t matter.
This view has only gained adherents. The political spending, the media acquisitions, the sovereign-fund diplomacy where Sam Altman tours the Middle East cutting compute deals with autocratic governments: rational behavior for people who’ve concluded that democratic governance is a legacy institution to be routed around when it interferes.
Autocracies are better customers for this technology than democracies, which is precisely why the broligarchy has rapidly shifted its support behind Trump and MAGA. A democratic government that deploys AI to replace its workforce faces electoral consequences. An authoritarian government faces no such constraint and gains a surveillance and control dividend on top of the economic efficiencies. Saudi Arabia, the UAE, Singapore: vast capital, centralized decision-making, no electorate to answer to, and an active interest in technologies of control. This is one of the motivating factors in the Valley’s latching on to Trump: he and his cronies can be bought, and as importantly, they have no loyalty to democracy. The economic incentives for AI companies point toward the entities with the fewest democratic accountability mechanisms.
Every proposed solution to mass AI displacement treats it as a resource distribution problem. Universal basic income. Retraining programs. The “leisure economy.” The assumption is that if you send people checks, they’ll find meaning in hobbies and community. They’ll paint. They’ll garden. They’ll finally write that novel.
This is ahistorical bullshit.
We don’t have to speculate about what happens when economic function disappears from communities. Anne Case and Angus Deaton’s research on “deaths of despair” tracks the rising tide of suicide, drug overdose, and alcoholic liver disease mortality concentrated in less-educated, formerly manufacturing-dependent populations. The mechanism isn’t just poverty: it’s the loss of economic purpose, and with it, social status and a perceived future. Communities organized around industries that left, where what replaced the jobs was opioids, domestic violence, and a life expectancy that dropped year over year in the richest country on earth.
Molly Kinder at Brookings drew the connection explicitly in Sun’s NYT piece: “Our economy grew extraordinarily and prices went down, but there were clear losers.” The AI companies’ narratives about abundance repeat the same promises of globalization. This time, the losers won’t be limited to manufacturing towns in the heartland. “I’ve interviewed so many college students who are super fearful about what the future means,” Kinder told the Times, “and their narrative is exactly the same as those blue-collar guys in the heartland.” The twenty-something software engineer in San Francisco and the displaced factory worker in Ohio are staring at the same question: what happens when the market decides my skills are worthless?
Guy Standing’s work on the “precariat” adds the structural dimension. The psychological consequences of permanent economic precarity corrode social coherence regardless of whether the rent is paid. Four decades of neoliberal policy plus digital acceleration have already created this class. AI acceleration expands it to include the college-educated professionals who thought they were safe.
Piketty, no conservative, has argued that UBI fails to address root structural problems: “unequal access to education and health, low-paying and low-productivity jobs, malfunctioning markets, corruption, and regressive tax systems.” David Shor’s polling data bears this out from the other direction: UBI is unpopular with American voters; a federal jobs guarantee has legs. People don’t want a check. They want work. They want purpose.
Anthropic’s own research has documented something worse than displacement: active deskilling. Junior engineers who relied on AI coding agents didn’t complete tasks much faster and understood their work less when quizzed afterward. The technology is degrading the expertise of the next generation of workers at the same time it’s competing with them for their jobs. The retraining argument assumes people can develop new skills to stay relevant. The evidence suggests the tools are preventing them from developing skills at all.
At the scale these companies need to justify their valuations, you’re looking at social instability that makes the current populist moment look quaint. Tens of millions of people, in their productive years, with no economic function, no clear path to one, and a keen awareness that the people who did this to them are the richest human beings who have ever lived. Joseph Stiglitz points out that AI will hit “routine white collar jobs,” the college-educated desk work that felt insulated from manufacturing disruption. Accountants, analysts, junior lawyers, radiologists, software developers. The professional class that constitutes the backbone of political stability in developed democracies.
The most honest thing you can say about violence is that nobody wants it, but the conditions that produce it are being engineered with extraordinary efficiency by people who have apparently never opened a history book. It’s happening. In April, someone tried to firebomb Sam Altman’s home. Another attacker targeted an Indianapolis city councilman who approved a local data center project. Alex Karp, the CEO of Palantir, told a recent panel: “The biggest challenge to A.I. in this country is political unrest. If I were sitting here in private with my peers, I’d be telling them the country could blow up politically and none of us are going to make any money when the country blows up.” Karp, to his credit, is saying this out loud. Most of his peers restrict such observations to the disappearing-message Signal chats where, as Jasmine Sun has reported, tech executives boast about the roles they plan to automate.
A strain of thought runs through Silicon Valley, from the Thiel Fellowship to the rationalist blogs to the effective altruism movement, that treats its intellectual framework with the seriousness of received revelation. These are people who believe they are operating at the frontier of human thought.
They are operating at the level of a second-year philosophy survey, armed with enormous confidence and no awareness of the counterarguments.
Start with Nietzsche, because the Valley loves Nietzsche, or rather a version of Nietzsche that would have made the man lose his shit and go horse-hugging much faster than the syphilis. The Übermensch gets trotted out as justification for the exceptional founder, the visionary who transcends conventional morality because he’s operating on a higher plane. Nietzsche was diagnosing the crisis of meaning after the collapse of metaphysical certainty, not writing a management philosophy for people who got rich selling advertising technology. The Übermensch is about the individual’s relationship to the creation of meaning in a godless universe. It has nothing to do with whether Peter Thiel should be exempt from democratic accountability. Nietzsche would have classified these people as the last men, the ones who blink, say “we have invented happiness,” and mistake comfort and optimization for human flourishing. He would have fucking loathed them.
The pattern repeats. Effective altruism is utilitarianism reinvented by people who have apparently never encountered Bernard Williams, or Derek Parfit’s own agonized wrestling with the implications of consequentialist reasoning, or the two centuries of philosophical literature explaining why naive expected-value calculations produce monstrous outcomes when applied without limiting principles. The EA movement walked itself into the Sam Bankman-Fried catastrophe because it adopted a moral framework without understanding its failure modes. This is what happens when you skip the coursework and go straight to the final exam.
Longtermism, the philosophical engine of AI acceleration, whether its proponents acknowledge it or not, is warmed-over Parfit without the rigor. The argument (that we should optimize for the welfare of trillions of hypothetical future beings, and that present-day costs are acceptable in service of that goal) is a framework any competent ethicist can dismantle in an afternoon. It has no limiting principle. It cannot distinguish between genuine moral urgency and the self-serving conclusion that whatever the speaker was already doing is cosmically important. In practice, it is a machine for generating justifications for the concentration of power by people who have decided they are the ones best positioned to steward the future of the species. How convenient.
The rationalist community rediscovers Bayesian epistemology and treats it like a revelation, apparently unaware that the philosophy of science has been working through these questions since the 1920s. Blog posts get treated as foundational texts. People who have never read Kuhn or Lakatos or Feyerabend construct an epistemology from first principles, marvel at what they’ve built, and proceed to use it as the intellectual building blocks for decisions that affect billions of people. The confidence is inversely proportional to the depth. Dunning-Kruger at scale.
The intellectual poverty extends to the economics. Acemoglu has found that only 4.6 percent of tasks in the economy are currently cost-effective to automate with AI. His estimate for AI’s total productivity impact over the next decade: 0.66 percent. Goldman Sachs projected seven percent in 2023, before we began to see the shape of this thing. McKinsey projects between 0.5 and 3.5 percent annually. Someone is catastrophically wrong, and the people spending the money are not the ones with the Nobel Prize. Over ninety percent of firms surveyed in 2025 reported no measurable impact on employment or productivity despite a quarter-trillion dollars in AI investment. Torsten Slok: “AI is everywhere except in the incoming macroeconomic data.” These are people who have decided what the future looks like and are spending other people’s money to will it into existence.
These bastards always tell on themselves. OpenAI published a white paper in April calling for “Industrial Policy for the Intelligence Age,” full of radically progressive proposals: a thirty-two-hour workweek, higher taxes on corporations and capital gains, a “public wealth fund” providing all citizens an equity stake in AI companies. In the same period, OpenAI’s president helped fund a super PAC that spent over two million dollars on ads against Alex Bores, a New York congressional candidate whose crime was introducing safety regulation for large AI developers and proposing to tax AI to fund direct payments to Americans. The company removed a profit cap that had previously limited investor returns to a hundred times their initial investment. Chris Lehane, OpenAI’s chief lobbyist, systematically deprioritized internal research that could produce unflattering results. “Whenever someone wrote a paper which talked about some negative aspect of A.I.,” a colleague told the Times, “he would say, ‘We’re not going to release something about a problem until we have a solution for it.’” Lehane’s own characterization: “We want to do applied physics, not theoretical physics.” Tell the story that helps us, not the one that’s true.
A Philosophy 101 student who misreads Nietzsche writes a bad paper and gets a C. A billionaire who misreads Nietzsche builds a political philosophy around the misreading and funds it with the GDP of a small nation. This is fucking insane.
These are not serious people. They are serious about accumulation and about winning. They are not serious about the questions that matter for what they’re building: what we owe each other, what makes a life worth living, and what happens to a civilization when you remove the material basis of human agency. These questions have occupied the best minds in human history for millennia. The Valley’s engagement with them amounts to reading the CliffsNotes on a transatlantic flight and arriving convinced you’ve mastered the canon.
And they want to restructure civilization.
Albert Camus broke with Jean-Paul Sartre and the French left over the most concrete political question there is: can the people alive today be treated as acceptable casualties in the pursuit of a better future?²
Sartre and the Marxists said yes. History has a direction. The revolution requires sacrifice. Camus said no. Any system of thought that subordinates living people to a hypothetical future has already committed the foundational moral error. Once you accept that logic, there is no limiting principle. Any atrocity becomes justifiable. Any amount of present suffering can be rationalized as a necessary input to the glorious output.
This is the structure of the AI acceleration argument. The technology will eventually benefit humanity (trillions of future humans, lives of abundance and meaning we can barely imagine), so present disruption is tolerable. Displaced workers, hollowed communities, the erosion of democratic leverage, the concentration of power in a handful of private actors who have exempted themselves from the consequences of their own project: regrettable but necessary. The expected value math works out.
The founders of Mechanize, a startup whose stated mission was “to enable the full automation of the economy,” made the logic explicit: “the only real choice is whether to hasten this technological revolution ourselves, or to wait for others to initiate it in our absence.” Technological determinism as moral absolution. The future is fixed. Our only choice is whether to build it first. Therefore, nothing we do along the way requires justification, because the destination was never in our hands. They’re making the same argument as the Marxists who sent dissidents to the gulag.
Camus staked his intellectual legacy on the claim that the person standing in front of you is not an input to a utility function. Their suffering is not redeemed by a future state of affairs they may never see. Their dignity is not negotiable against projected outcomes. The person who exists now (who has a job they’re about to lose, a family they support, a community that depends on a functioning local economy) is the unit of account. Not humanity in the abstract. Not the trillions of future beings that the longtermists conjure to win their expected-value calculations.
Once that commitment is abandoned, the door opens to every form of rationalized cruelty that the twentieth century spent a hundred million lives trying to teach us to reject.
The entire AI acceleration project is premised on abandoning it. It asks present people to bear costs for future benefits they may never see, distributed to people who do not yet exist, administered by a self-appointed class that has insulated itself from the consequences entirely. Altman’s “universal basic compute” proposal acknowledges, if you squint, that the future he’s building requires a new distribution mechanism. It is also a proposal in which he gets to be the one doing the distributing. Feudalism with better branding.
Jasmine Sun reported recently that tech industry sources “expressed more extreme concern about the labor market impacts of A.I. in private conversation, but suddenly became optimists once I turned on the mic.” They know what they’re building. They know what it will do. They perform optimism in public because the alternative is admitting that the thing they’ve staked their careers and fortunes on will immiserate a significant portion of humanity, and they’re doing it anyway. Amodei has written that Anthropic is “currently considering a range of possible pathways for our own employees,” implying that even the people building the technology may be surplus to its requirements. He framed this as compassionate. Read it again as a CEO telling his workforce that their jobs, too, are temporary.
I don’t want to dwell on whether AI can do what these companies claim. It may well be able to, though the current evidence suggests the gap between pitch and product is vast, and serious economists think the productivity gains are a fraction of what the industry projects. But Acemoglu’s core finding is that AI doesn’t need to be revolutionary to be destructive. “So-so” automation (technology that’s mediocre at replacing workers but cheap enough to do it anyway) still displaces at scale while delivering underwhelming productivity. The worst outcome may not be superintelligent AI. It may be adequate AI, deployed aggressively by companies chasing stock prices, eliminating jobs it can’t actually do well because the quarterly incentives demand it.
Has anyone with the power to shape this transition thought seriously about what it means for the people alive today who didn’t get a vote on any of it?
Fuck no.
The window for changing that answer is not infinite. The regulatory capture is already advanced: AI-related investments accounted for thirty-nine percent of US economic growth in the first three quarters of 2025, giving the federal government a vested interest in sustaining the boom. Amodei himself acknowledges that this leads to “the reluctance of tech companies to criticize the U.S. government, and the government’s support for extreme anti-regulatory policies on A.I.” The regulator and the regulated have converged into a single interest. The expertise asymmetry between legislators and the industry they’re supposed to oversee is insurmountable. The feedback loop (AI systems advising on the governance of AI systems) is closing.
The interventions that could matter are known. Public ownership stakes in AI infrastructure. Aggressive antitrust enforcement. A genuine tax regime on automated labor. Branko Milanovic’s prescription is characteristically direct: spread capital ownership more widely, tax the highest capital incomes more aggressively. None of these are technologically difficult. All of them require functioning democratic institutions with the will to challenge the richest companies in human history. The companies that would need to be taxed are spending millions to defeat the politicians who propose it.
The dead economy is not one where nothing happens. Plenty will happen. The GDP might even go up; AI-related investments are already propping it up. The dead economy is one where plenty happens and none of it requires you. Where the productive capacity of civilization has been captured by a system you have no stake in, no input into, and no vote on. Where the people who built it told you they don’t think you should have a say. Where they express alarm about the consequences in private and optimism in public. Where they publish white papers calling for radical redistribution while funding super PACs to destroy the politicians who propose it.
1. This essay relies frequently on the outstanding reporting of Jasmine Sun’s April 30, 2026 piece in the New York Times, which you can find at: https://www.nytimes.com/2026/04/30/opinion/ai-labor-work-force-silicon-valley.html
I’m not going to link it for every quotation pulled from Sun’s piece; if a direct quotation is not cited individually, I have pulled it from Sun’s reporting.
2. This event, incited by Camus’s publication of The Rebel and Sartre’s Les Temps Modernes broadside attack on it, is one of the most overlooked intellectual fragmentations of the 20th century. As you might surmise, I am, and have always been, Camusian in my leanings. A good place to begin is Sprintzen and van den Hoven’s translation of the vitriolic essays between Camus and the various toadies (natch) Sartre employed. I also highly recommend Aronson’s Camus and Sartre: The Story of a Friendship and the Quarrel that Ended It, Judt’s The Burden of Responsibility, and, if you can muster the French, Onfray’s L’ordre libertaire: La Vie philosophique d’Albert Camus.