Anthropic’s Daniela Amodei Believes the Market Will Reward Safe AI

The Trump administration may think regulation is crippling the AI industry, but one of the industry’s biggest players doesn’t agree.

At WIRED’s Big Interview event on Thursday, Anthropic president and cofounder Daniela Amodei told WIRED editor at large Steven Levy that even though Trump’s AI and crypto czar, David Sacks, may have tweeted that her company is “running a sophisticated regulatory capture strategy based on fear-mongering,” she’s convinced her company’s commitment to calling out the potential dangers of AI is making the industry stronger.

“We were very vocal from day one that we felt there was this incredible potential” for AI, Amodei said. “We really want to be able to have the entire world realize the potential, the positive benefits, and the upside that can come from AI, and in order to do that, we have to get the tough things right. We have to make the risks manageable. And that’s why we talk about it so much.”

More than 300,000 startups, developers, and companies use some version of Anthropic’s Claude model, and Amodei said that through the company’s dealings with those brands, she’s learned that while customers want their AI to be able to do great things, they also want it to be reliable and safe.

“No one says, ‘We want a less safe product,’” Amodei said, likening Anthropic’s reporting of its models’ limits and jailbreaks to a car company releasing crash-test studies to show how it has addressed safety concerns. It might seem shocking to see a crash-test dummy flying through a car window in a video, but learning that an automaker updated its vehicle’s safety features as a result of that test could sell a buyer on the car. Amodei said the same goes for companies using Anthropic’s AI products, making for a market that is somewhat self-regulating.

“We’re setting what you can almost think of as minimum safety standards just by what we’re putting into the economy,” she said. Companies “are now building many workflows and day-to-day tooling tasks around AI, and they’re like, ‘Well, we know that this product doesn’t hallucinate as much, it doesn’t produce harmful content, and it doesn’t do all of these bad things.’ Why would you go with a competitor that is going to score lower on that?”

Daniela Amodei attends the WIRED Big Interview event.

Photograph: Annie Noelker