As AI leaps forward, concerns rise that innovation is leaving safety behind
Dario Amodei, CEO and co-founder of Anthropic, looks on during the 56th annual World Economic Forum meeting in Davos, Switzerland, Jan. 20, 2026.
Denis Balibouse/Reuters
When the United States military captured former Venezuelan President Nicolás Maduro in January, it reportedly used an artificial intelligence tool developed by a private U.S. company. It's unclear exactly what the tool did, but the company's policy says its products can't be used for violence or to develop weapons.
Now, the Pentagon is considering cutting ties with that company, Anthropic, because of its insistence on limits for how the military uses its technology, according to news reports.
The tensions between AI safeguards and national security aren't new. But multiple events in the last month have brought the issue of AI safety, in contexts ranging from weapons development to ethical advertising, into the spotlight.
Why We Wrote This
Artificial intelligence is developing so rapidly that some industry insiders fear safety concerns aren't getting enough attention. That's sparking conversation about how to balance innovation, competition, and safeguards.
"A lot of the people who've been involved in the field of AI have been thinking about safety in various forms for a long time," says Miranda Bogen, the founding director of the Center for Democracy and Technology's AI Governance Lab. "But now those conversations are happening on a much more visible stage."
This month, researchers resigned from two major U.S. AI companies, citing inadequacies in the companies' safeguards around things like consumer data collection. In an essay published Feb. 9, investor Matt Shumer warned that AI will not only soon threaten Americans' jobs en masse, but that it could also start to behave in ways its creators "can't predict or control." The essay went viral on social media.
While urging action on very real risks, many AI safety experts caution against overplaying fears about hypothetical scenarios.
"These moments of public attention are valuable because they create openings for the kind of public debate about AI that is essential," Dr. Alondra Nelson, a former member of the United Nations High-level Advisory Body on Artificial Intelligence, wrote the Monitor in an email while attending a global AI summit in India. "But they are no substitute for democratic deliberation, regulation, and real public accountability."
Pressure to compete
In December, President Donald Trump issued an executive order blocking "onerous" state laws regulating AI. For example, his order singled out Colorado's law that bans "algorithmic discrimination" in areas like hiring and education. The president's order was supported by Republicans who said forcing AI companies to comply with excessive regulations could leave the U.S. at a competitive disadvantage with China.
That sense of competition appears to be central to Anthropic's move away from the Pentagon. Anthropic wants to ensure its technology is not used to conduct domestic surveillance or develop weapons that fire without human input.
But the Department of Defense, which said earlier this year that the U.S. military "must build on its lead over our adversaries in integrating [AI]," wants to deploy AI technology without regard to companies' individual policies, according to news reports.
"We constantly face pressures to set aside what matters most," wrote Mrinank Sharma, an AI safety researcher, in a publicly posted resignation letter from Anthropic last week. He did not refer to a specific event that led him to resign, but warned that "our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences."
Dr. Bogen says policies designed to compel AI providers to subject their models to certain tests or to invest in safety are often diluted into disclosure requirements or nonbinding recommendations.
"The incentives are so strongly in favor of moving forward quickly, even when there's a desire to put up guardrails," she says.
Is the world "in peril"?
Those warning of AI's dangers have sometimes used existential language.
Zoë Hitzig, a former researcher at OpenAI, cited "deep reservations" about the company's strategy in an essay she wrote for The New York Times last week, fearing its decision to start testing ads on ChatGPT "creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent."
Mr. Sharma's resignation letter from Anthropic warned that "the world is in peril."
Some experts say such language is counterproductive.
"I find the framing of that 'point of no return' to be very disempowering," says Dr. Bogen.
She does worry that as people choose to turn over more of their decision-making to AI and learn to use the technology in their jobs, they're creating dependencies that will be increasingly difficult to untangle.
But she says people are ultimately responsible for their choices and actions.
"I don't think we'll ever get to the point where it's truly impossible to … make decisions about how to treat this new technology," she says.
Katherine Elkins, a principal investigator representing the Modern Language Association in the National Institute of Standards and Technology AI Consortium, says she hopes she's wrong about some of the risks she sees, like an AI chatbot potentially using someone's data to manipulate them. But until she's sure, she wants safety to remain an urgent priority.
"Personally, I have felt it's better to err on the cautious side and devote my time to thinking about the risks of AI" than to think the technology won't get better, she says.
Editor's note: This story has been updated with Katherine Elkins' correct title.