As artificial intelligence continues to evolve, OpenAI, a leading organization in the field, has taken a proactive step toward ensuring the safety and security of its technologies. Today it announced the creation of a new Safety and Security Committee within its board, led by directors Bret Taylor, Adam D'Angelo, and Nicole Seligman, along with CEO Sam Altman. The committee is tasked with making recommendations on critical safety and security decisions for OpenAI's projects, particularly as the organization works toward what it views as the next generation of models on the path to artificial general intelligence (AGI).
The committee's formation comes at a pivotal moment. OpenAI has begun training its next frontier model, which it expects to significantly advance capabilities on the path to AGI. OpenAI acknowledges the gravity of this step and welcomes vigorous debate over the safety and security implications of its work. The committee's first assignment is to evaluate and further develop the company's existing processes and safeguards over a 90-day period. Its findings and recommendations will then go to the full board for review, after which OpenAI plans to share an update on the adopted recommendations with the public.
To support this agenda, the committee will draw on expertise from across OpenAI, including Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), and others in critical security and alignment roles. OpenAI will also consult external advisors such as Rob Joyce and John Carlin, who bring deep cybersecurity and national security experience. This approach reflects the organization's commitment to a comprehensive, collaborative process aimed at ensuring that the AI technology it pioneers is not only advanced but also safe and beneficial for the wider community.