OpenAI offers $555,000 for “stressful” job to guard against rogue AI and mental health harms
By Cassie B. // Dec 30, 2025

  • OpenAI is hiring a high-paid executive to defend against AI risks like cyberattacks and rogue AI.
  • The move follows lawsuits linking ChatGPT to user harm, including deaths.
  • The role must address mental health impacts and AI models that can find security vulnerabilities.
  • OpenAI has faced instability, with safety executives frequently departing.
  • Industry leaders warn of unchecked AI dangers amid a lack of regulation.

In a move that reads like science fiction becoming boardroom policy, the creators of ChatGPT are now hunting for a modern-day digital sheriff. OpenAI, the company behind the artificial intelligence chatbot used by millions, is offering a $555,000 salary plus equity for a new "head of preparedness" to defend against risks ranging from cyberattacks and biological weapons to a potential rogue AI. The announcement, made by CEO Sam Altman on the social platform X, underscores a jarring reality: the very institutions building these powerful systems are scrambling to contain the dangers they might unleash.

"This will be a stressful job," Altman stated, emphasizing the high stakes. The role is not merely administrative; it is a critical frontline position tasked with "tracking and preparing for frontier capabilities that create new risks of severe harm." The job listing reveals a mission to scale safety standards as AI grows more powerful, measuring how these capabilities could be abused and limiting the downsides. It is a admission that the genie is out of the bottle, and the company now seeks a master of containment.

This urgent hiring push arrives amid growing scrutiny and tragic real-world consequences linked to AI. OpenAI is currently defending against wrongful death lawsuits alleging that its technology contributed to user harm. One suit claims ChatGPT reinforced the paranoid delusions of a man who then killed his mother and himself. Another involves a teenager who died by suicide, with his parents alleging the chatbot played a role. These cases highlight the "potential impact of models on mental health" that Altman himself cited as a key challenge.

The mental health toll is just one facet of a sprawling threat matrix. Altman also pointed to models that are "so good at computer security they are beginning to find critical vulnerabilities," a double-edged sword that could empower defenders and attackers alike. The new executive must also grapple with the specter of artificial intelligence that can self-improve, potentially slipping outside human control. This follows internal warnings from former OpenAI Chief Scientist Ilya Sutskever, who once suggested the company might need to "build a bunker" before releasing a superintelligent AI.

A history of safety departures

The search for a preparedness czar raises questions about the consistency of OpenAI's commitment to these priorities. The company has seen a revolving door of safety executives: its first head of preparedness was reassigned in 2024, and other key safety leaders have since departed or shifted roles. This instability at the top of its safety teams suggests internal turbulence, even as public assurances are made. The company has also said its safety framework may be adjusted if a rival releases a high-risk model, a policy critics could see as making safety standards contingent on competition rather than conscience.

The broader AI industry echoes with warnings, adding context to OpenAI’s anxious hiring. Mustafa Suleyman, CEO of Microsoft AI, recently stated, "I honestly think that if you're not a little bit afraid at this moment, then you're not paying attention." Demis Hassabis of Google DeepMind has warned of AIs going "off the rails." Yet, as computer scientist Yoshua Bengio noted, "A sandwich has more regulation than AI," leaving companies largely to police themselves. This self-regulation is now being tested in courtrooms and in the public trust.

The impossible job?

The "head of preparedness" will inherit a daunting portfolio: evaluating cyber threats, mitigating psychological harms, and planning for existential risks, all while the underlying technology advances at a breakneck pace. The role is based in San Francisco and, for a base salary exceeding half a million dollars, demands the successful candidate immediately jump into the deep end.

This hiring effort is a signal that the pioneers of AI are peering over the horizon and seeing not just opportunity, but profound peril. It is an attempt to install a guardrail on a bullet train they designed and accelerated, now moving into uncharted territory. The very need for such a position confirms the worst fears of skeptics: that the power of AI could outpace our wisdom to manage it.

Sources for this article include:

TheNationalPulse.com

TechCrunch.com

TheGuardian.com

Mashable.com