OpenAI Has Reassigned a Top AI Safety Executive, Aleksander Madry
OpenAI has reassigned Aleksander Madry, a leading figure in the company's AI safety efforts, to a new role focused on AI reasoning, according to a CNBC report. As OpenAI's Head of Preparedness, Madry oversaw efforts to mitigate catastrophic risks associated with frontier AI models. His reassignment came just days before a group of Democratic senators sent a letter to OpenAI CEO Sam Altman raising questions about the company's approach to emerging safety concerns.
Madry, who also leads MIT's Center for Deployable Machine Learning and the MIT AI Policy Forum, will continue to contribute to core AI safety work in his new role, OpenAI said. Even with the change in focus, the company maintains that his expertise remains crucial to its safety efforts.
The timing of Madry's reassignment is noteworthy: it came less than a week before the Democratic senators' letter, which sought detailed information from OpenAI about its safety practices, internal evaluations, and cybersecurity measures. The senators have requested responses by August 13.
This move adds to a summer of growing scrutiny for OpenAI, which, along with tech giants like Google, Microsoft, and Meta, is leading the generative AI sector—an industry projected to surpass $1 trillion in revenue within a decade. Companies across various sectors are rapidly adopting AI technologies to stay competitive, intensifying the focus on AI safety and governance.
Earlier this month, Microsoft relinquished its observer role on OpenAI’s board, citing satisfaction with the startup’s governance changes following a brief period of internal upheaval. Meanwhile, concerns about AI industry practices persist, as evidenced by an open letter from current and former OpenAI employees. The letter criticized the lack of effective oversight and whistleblower protections, emphasizing the substantial non-public information held by AI companies about their technology and safety measures.
In May, OpenAI disbanded its Superalignment team, which was dedicated to long-term AI risks, following the departures of its leaders Ilya Sutskever and Jan Leike. On leaving, Leike criticized OpenAI for prioritizing product development over safety and preparedness, urging it to become a "safety-first AGI company." Those departures, along with the recent changes, underscore ongoing debate within the AI community about how to balance innovation and risk management.