Sam Altman Has Resigned From OpenAI's Safety Committee in the Wake of Increasing Scrutiny
Altman's departure follows increased scrutiny from U.S. lawmakers and former OpenAI staff. Over the summer, five U.S. senators raised questions about OpenAI’s safety policies.
OpenAI CEO Sam Altman has stepped down from the Safety and Security Committee, the group within OpenAI responsible for overseeing critical safety decisions about the company's projects. The committee, originally formed in May 2024, will now operate as an independent oversight body chaired by Zico Kolter, a professor at Carnegie Mellon University. Its other members include Quora CEO Adam D'Angelo, retired U.S. Army General Paul Nakasone, and former Sony EVP Nicole Seligman, all of whom also serve on OpenAI's board of directors.
The move comes amid growing concerns about OpenAI's approach to AI safety. While Altman was still a member, the committee completed a safety review of OpenAI's newest AI model, o1. OpenAI says the committee will continue to receive regular safety updates and will retain the authority to delay releases if safety concerns are identified.
Critics have pointed to OpenAI's increasingly commercial trajectory. The company raised its federal lobbying expenditure to $800,000 in the first half of 2024, up from $260,000 the previous year. OpenAI is also reportedly in talks to raise more than $6.5 billion in funding, potentially valuing the company at $150 billion. This has fueled speculation that OpenAI may move away from its original hybrid nonprofit structure, which was designed to cap investor returns and keep the company committed to its founding mission of developing artificial general intelligence for the benefit of humanity.
Even with Altman off the Safety and Security Committee, questions remain about whether the group's decisions will meaningfully constrain OpenAI's commercial objectives. Earlier this year, former OpenAI board members Helen Toner and Tasha McCauley argued in an op-ed that self-regulation may not be enough to ensure accountability under the pressure of profit incentives.