OpenAI is hiring someone to worry about the risks of AI

OpenAI is preparing to hire a head of preparedness—essentially, someone whose job it is to worry about all the ways AI can go wrong.

In a post on X, OpenAI's Sam Altman acknowledged how quickly AI models are improving and said that they now “present some real challenges.” Altman also pointed to the potential impact on people's mental health and the dangers of AI-powered cyberattacks.

According to the job listing, this person will be responsible for, among other things, “leading the development of advanced capability assessments” and “overseeing the development of mitigation measures in key risk areas, ensuring that protective measures are technically sound, effective and consistent with the underlying threat models.”

Altman also noted that this is a stressful job (obviously) and that the successful candidate will be “thrown in almost immediately”—so good luck with the onboarding.

This new listing comes after a string of wrongful death lawsuits, most notably over the death of Adam Raine in April. His parents, Matt and Maria Raine, claim that GPT-4o was psychologically manipulative and, according to the BBC and court filings, “confirmed my most harmful and self-destructive thoughts.”

Filling this role now may seem somewhat late given the mental health hazards these models already pose, but better late than never—especially as AI is increasingly used for nefarious purposes such as deepfakes, which have raised a number of privacy concerns.

Source: The Verge

MobileSyrup may earn a commission from purchases made through our links, which helps fund the journalism we provide for free on our website. These links do not influence our editorial content. Support us here.
