
AI Boss Lays Out Our Potentially Catastrophic Future

Anthropic CEO Dario Amodei warns of mass job losses, bioterror risks, authoritarianism
Posted Jan 27, 2026 7:37 AM CST
Dario Amodei, CEO and co-founder of Anthropic, attends the annual meeting of the World Economic Forum in Davos, Switzerland, Jan. 23, 2025.   (AP Photo/Markus Schreiber, File)

Anthropic CEO Dario Amodei is trying to shock the world into paying attention. In a new 38-page essay, he warns that AI systems with abilities beyond top human experts could arrive within a few years and, without fast, serious safeguards, "cause civilization-level damage." Amodei, whose company builds some of the most advanced large language models, including the new Claude Opus 4.5, compares AI in 2027 to "a country of geniuses in a data center": tens of millions of virtual minds more capable than the smartest people, per Axios. Any national security chief facing that scenario, he writes, would likely call it the most serious threat in a century, "possibly ever." If current exponential gains continue, he argues, AI could surpass humans "at essentially everything" within a few years.

His essay, a follow-up to a 2024 piece on AI's benefits, lays out specific risks: rapid displacement of up to half of entry-level white-collar jobs within one to five years; terrorist use of AI to design biological agents, potentially enabling mass-casualty attacks targeting millions; and the prospect of AI-driven surveillance and control by authoritarian governments, with China singled out as a central concern. "AI-enabled authoritarianism terrifies me," Amodei writes. He also warns that AI companies themselves sit in a "next tier of risk," controlling huge data centers, powerful models, and direct influence over hundreds of millions of users, with the theoretical ability to manipulate or "brainwash" the public.

Amodei concedes it's "awkward" for a CEO in his position to sound this alarm, but says the bigger danger is that money and influence will keep governments and tech leaders quiet about warning signs. AI is such a "glittering prize," he writes, that society may struggle to impose any limits at all. If leaders are candid about the dangers, regulators move quickly, and wealthy individuals treat mitigating AI risk as a moral obligation, humanity might navigate this transition just fine. But the question is whether we'll "wake up" in time, Amodei writes. Per Quartz, he calls for transparency laws and step-by-step regulation, and warns against selling advanced chips to China, sales the US just approved.
