OpenAI CEO Sam Altman is stepping down from the Safety and Security Committee, which was formed in May to focus on “critical” safety decisions related to the company’s projects and operations. The move marks a significant restructuring of the AI company’s internal safety oversight mechanism.
Altman exits internal safety commission amid increasing scrutiny
The move comes at a time when OpenAI faces increasing scrutiny from lawmakers and former employees. Altman’s resignation came after five U.S. senators raised questions about OpenAI’s policies in a letter addressed to him. In addition, nearly half of the OpenAI staff once focused on AI’s long-term risks have left the company, and some ex-researchers have accused Altman of opposing “real” AI regulation in favor of policies that advance OpenAI’s corporate aims.
Independent board to oversee critical safety decisions
In a blog post, OpenAI announced that the Safety and Security Committee will become an “independent” board oversight group. Following Altman’s departure, the new board will be chaired by Zico Kolter, a Carnegie Mellon professor, joined by existing board members Quora CEO Adam D’Angelo, retired General Paul Nakasone, and ex-Sony EVP Nicole Seligman.
OpenAI indicated in its post that the committee has conducted a safety review of o1. The company said that the committee will continue to receive regular briefings from OpenAI’s safety and security teams and will retain the power to delay releases until safety concerns are adequately addressed. OpenAI emphasized that the group will “receive regular reports on technical assessments for current and future models, as well as reports of ongoing post-launch monitoring.”
OpenAI ramps up lobbying efforts amid rising criticism
Amid this criticism, OpenAI has significantly increased its federal lobbying expenditures, spending $800,000 in the first six months of 2024, compared to $260,000 for all of last year. The rise in lobbying comes as OpenAI is reportedly looking to raise $6.5+ billion in a funding round that could value the company at over $150 billion. Earlier this year, Altman also joined the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board.
Former OpenAI board members Helen Toner and Tasha McCauley have questioned the company’s capacity for self-governance. In an op-ed for The Economist, they wrote, “Self-governance, based on our experience, cannot consistently withstand profit incentives.” As OpenAI navigates the complexities of AI development and regulation, its new safety oversight structure marks a major shift in how it addresses concerns about responsible AI development.