Ex-OpenAI Chief Launches Safety-Focused AI Startup

Source: The Verge, June 19, 2024. Curated on June 24, 2024.

Ilya Sutskever, co-founder and former chief scientist of OpenAI, is embarking on a new venture dedicated to AI safety. On Wednesday, Sutskever announced the formation of Safe Superintelligence Inc. (SSI), a startup with a singular mission: to develop an AI system that is both safe and powerful. The company says it will advance capabilities as quickly as possible while ensuring safety always stays ahead, and that its business model insulates safety, security, and progress from short-term commercial pressures. In this, SSI aims to distinguish itself from AI giants like OpenAI, Google, and Microsoft, which routinely face external pressures and management distractions.

Sutskever co-founded SSI with Daniel Gross, formerly of Apple, and Daniel Levy, formerly of OpenAI. His departure from OpenAI came amid internal conflicts over safety, which also prompted exits by other key members, including Jan Leike and Gretchen Krueger. Unlike OpenAI, which is advancing its initiatives through partnerships with big tech firms like Apple and Microsoft, Sutskever has made clear that SSI will focus exclusively on developing safe superintelligence. That single-minded commitment stands as a pointed statement in an industry driven by rapid innovation and commercialization.
