Breaking News

OpenAI unveils new safety committee

Today, the OpenAI Board established a Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee is responsible for advising the full Board on critical safety and security decisions for OpenAI’s projects and operations.

OpenAI has recently begun training its next frontier model, which it anticipates will advance its capabilities on the path to Artificial General Intelligence (AGI). The company says it is proud to build and release models that lead the industry in both capability and safety, and that it welcomes a robust debate at this important moment.

The Safety and Security Committee's first task will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days. At the end of that period, the committee will present its recommendations to the full Board. Following the Board's review, OpenAI will publicly share an update on the adopted recommendations in a manner consistent with safety and security.

The committee also includes OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist).

In addition, OpenAI will retain and consult other safety, security, and technical experts to support this work, including former cybersecurity officials Rob Joyce, who advises OpenAI on security matters, and John Carlin.
