Saturday, May 31, 2025

What Happens When AI Replaces Workers?

On Wednesday, Anthropic CEO Dario Amodei declared that AI could eliminate half of all entry-level white-collar jobs within five years. Last week, a senior LinkedIn executive reported that AI is already starting to take jobs from new grads. In April, Fiverr’s CEO made it clear: “AI is coming for your job. Heck, it’s coming for my job too.” Even the new Pope is warning about AI’s dramatic potential to reshape our economy.

Why do they think this?

The stated goal of the major AI companies is to build artificial general intelligence, or AGI, defined as “a highly autonomous system that outperforms humans at most economically valuable work.”

This isn’t empty rhetoric—companies are spending over a trillion dollars to build towards AGI. And governments around the world are supporting the race to develop this technology.

They’re on track to succeed. Today’s AI models score as well as humans on many standardized tests. They outperform most professional programmers at competitive programming. On science questions, they beat everyone except the top experts.

As a result, AI industry leaders believe they could achieve AGI sometime between 2026 and 2035.

Among insiders at the top AI companies, it’s the near-consensus view that technological unemployment, where most people lose their jobs to AI, will arrive soon. AGI is coming for every part of the labor market. It will hit white-collar workplaces first, and soon after reach blue-collar workplaces as robotics advances.

In the post-AGI world, an AI can likely do your work better and cheaper than you. While training a frontier AI model is expensive, running additional copies of it is cheap, and those costs are falling rapidly.

A commonly proposed solution to an impending era of technological unemployment is government-granted universal basic income (UBI). But this could dramatically change how citizens participate in society, because work is most people’s primary bargaining chip. Our modern world is upheld by a simple exchange: you work for someone with money to pay you, because you have time or skills that they don’t have.

The economy depends on workers’ skills, judgment, and consumption. That dependence is what has allowed workers to bargain for higher wages and the 40-hour work week.

With AGI, we are poised to change, if not entirely sever, that relationship. For the first time in human history, capital might fully substitute for labor. If this happens, workers won’t be necessary for the creation of value because machines will do it better and cheaper. As a result, your company won’t need you to increase its profits and your government won’t need you for its tax revenue.

We could face what we call “The Intelligence Curse”: powerful actors such as governments and companies create AGI, and subsequently lose their incentives to invest in people.

Just as in oil-rich states afflicted by the “resource curse,” governments won’t have to invest in their populations to sustain their power. In the worst-case scenario, they won’t have to care about humans, so they won’t.

But our technological path is not predetermined. We can build our way out of this problem.

Many of the people grappling with the other major risks from AGI—that it goes rogue, or helps terrorists create bioweapons, for example—focus on centralized, regulatory solutions: track all the AI chips, require permits to train AI models. They want to make sure bad actors can’t get their hands on powerful AI, and that no one accidentally builds AI that could literally end the world.

However, AGI will not just be the means of mass destruction—it will be the means of production too. And centralizing the means of production is not just a security issue, it is a fundamental decision about who has power.

We should instead avert the security threats from AI by building technology that defends us. AI itself could help us make sure the code that runs our infrastructure is secure from attacks. Investments in biosecurity could block engineered pandemics. An Operation Warp Speed for AI alignment could ensure that AGI doesn’t go rogue.

And if we protect the world against the extreme threats that AGI might bring about, we can diffuse this technology broadly, to keep power in your hands.

We should accelerate human-boosting AI over human-automating AI. Steve Jobs once called computers “bicycles for the mind,” after the way they make us faster and more efficient. With AI, we should aim for a motorcycle for the mind, rather than a wholesale replacement of it. 

The market for technologies that keep and expand our power will be tremendous. Already today, the fastest-growing AI startups are those that augment rather than automate humans, such as the code editor Cursor. And as AI gets ever more powerful and autonomous, building human-boosting tools today could set the stage for human-owned tools tomorrow. AI tools could capture the tacit knowledge visible to you every day and turn it into your personal data moat.

The labor of the masses can be replaced either by the AI and capital of a few, or by the AI and capital of us all. We should build technologies that let regular people train their own AI models, run them on affordable hardware, and keep control of their data—instead of everything running through a few big companies. You could be the owner of a business, deploying AI you control on data you own to solve problems that feel insurmountable today.

Your role in the economy could move from direct labor, to managing AI systems the way a CEO manages direct reports, to steering the AI systems working for you the way a board weighs in on long-term direction.

The economy could run on autopilot and superhumanly fast. Even when AI can work better than you, if you own and control your piece of it, you could be a player with real power—rather than just hoping for UBI that might never come.

To adapt the words of G. K. Chesterton, the problem with AI capitalism would be too few capitalists, not too many. If everyone owns a piece of the AI future, all of us can win.

And of course, AGI will make good institutions and governance more important than ever. We need to strengthen democracy against corruption and the pull of economic incentives before AGI arrives, to ensure regular people can win if we reach the point where governments and large corporations don’t need us.

What’s happening right now is an AGI race, even if most of the world hasn’t woken up to it. The AI labs have an advantage in AI, but to automate everyone else they need to train their AIs in the skills and knowledge that run the economy, and then go and outcompete the people currently providing those goods and services.

Can we use AI to lift ourselves up, before the AI labs train the AIs that replace us? Can we retain control over the economy, even as AI becomes superintelligent? Can we achieve a future where power still comes from the people?

It is up to us all to answer those questions.
