Anthropic’s quest to study the negative effects of AI is under pressure


Today, I’m talking with Verge senior AI reporter Hayden Field about some of the people responsible for studying AI and deciding in what ways it might… well, ruin the world. Those folks work at Anthropic as part of a group called the societal impacts team, which Hayden just spent time with for a profile she published this week.

The team is just nine people out of more than 2,000 who work at Anthropic. Their only job, as the team members themselves say, is to investigate and publish "inconvenient truths" about how people are using AI tools, what chatbots might be doing to our mental health, and how all of that might be having broader ripple effects on the labor market, the economy, and even our elections.

That of course brings up a whole host of problems. The most important is whether this team can remain independent, or even exist at all, as it publicizes findings about Anthropic’s own products that might be unflattering or politically fraught. After all, there’s a lot of pressure on the AI industry in general and Anthropic specifically to fall in line with the Trump administration, which put out an executive order in July banning so-called “woke AI.”


If you’ve been following the tech industry, the outline of this story will feel familiar. We’ve seen this most recently with social media companies and the trust and safety teams responsible for content moderation. Meta went through countless cycles of this, where it dedicated resources to solving problems created by its own scale and the unpredictable nature of products like Facebook and Instagram. And then, after a while, it seemed like the resources dried up, or Mark Zuckerberg got bored, or more interested in MMA, or just busy cozying up to Trump, and the products didn’t really change to reflect what the research showed.

We’re living through one of those moments right now. The social platforms have slashed investment in election integrity and other forms of content moderation. Meanwhile, Silicon Valley is working closely with the Trump White House to resist meaningful attempts to regulate AI. That’s why, as you’ll hear, Hayden was so interested in this team at Anthropic. It’s fundamentally unique in the industry right now.

In fact, Anthropic is an outlier because of how amenable CEO Dario Amodei has been to calls for AI regulation, both at the state and federal level. Anthropic is also seen as the most safety-first of the leading AI labs, because it was formed by former research executives at OpenAI who were worried their concerns about AI safety weren’t being taken seriously. There are actually quite a few companies formed by former OpenAI people worried about the company, Sam Altman, and AI safety. It’s a real theme in the industry, and one Anthropic seems to be taking to the next level.

So I asked Hayden about all of these pressures, and how Anthropic’s reputation within the industry might be affecting how the societal impacts team functions — and whether it can really meaningfully study and perhaps even influence AI product development. Or whether, as history suggests, this will just look good on paper until the team quietly goes away. There’s a lot here, especially if you’re interested in how AI companies think about safety from a cultural, moral, and business perspective.

A quick announcement: We’re running a special end-of-the-year mailbag episode of Decoder later this month where we answer your questions about the show: who we should talk to, what topics we should cover in 2026, what you like, what you hate. All of it. Please send your questions to [email protected] and we’ll do our best to feature as many as we can.

If you’d like to read more about what we discussed in this episode, check out these links:

  • It’s their job to keep AI from destroying everything | The Verge
  • Anthropic details how it measures Claude’s wokeness | The Verge
  • The White House orders tech companies to make AI bigoted again | The Verge
  • Chaos and lies: Why Sam Altman was booted from OpenAI | The Verge
  • Anthropic CEO Dario Amodei just made another call for AI regulation | Inc.
  • How Elon Musk is remaking Grok in his image | NYT
  • Anthropic tries to defuse White House backlash | Axios
  • New AI battle: White House vs. Anthropic | Axios
  • Anthropic CEO says company will pursue gulf state investments after all | Wired

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!
