Sam Altman is hiring someone to worry about the dangers of AI


Terrence O'Brien is The Verge's weekend editor. He has over 18 years of experience, including 10 years as managing editor at Engadget.

OpenAI is hiring a Head of Preparedness. Or, in other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position, acknowledging that the rapid improvement of AI models poses “some real challenges.” The post specifically calls out the potential impact on people’s mental health and the dangers of AI-powered cyberweapons.

The job listing says the person in the role would be responsible for:

“Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”

Altman also says that, looking forward, this person would be responsible for executing the company’s “preparedness framework,” securing AI models ahead of the release of “biological capabilities,” and even setting guardrails for self-improving systems. He adds that it will be a “stressful job,” which seems like an understatement.

In the wake of several high-profile cases where chatbots were implicated in the suicides of teens, it seems a little late in the game to only now have someone focusing on the potential mental health dangers posed by these models. AI psychosis is a growing concern, as chatbots feed people’s delusions, encourage conspiracy theories, and help people hide their eating disorders.


