Ilya Sutskever, co-founder and former chief scientist of OpenAI, has embarked on a new venture. After leaving OpenAI in May 2024, he joined forces with Daniel Levy, also formerly of OpenAI, and Daniel Gross, Apple’s former AI lead, to form Safe Superintelligence Inc. (SSI). The new startup is single-mindedly dedicated to building safe superintelligent systems, continuing the line of work Sutskever pursued at OpenAI. The original story was reported by Ryan Daws at Artificial Intelligence News.
SSI’s formation follows the brief ousting of OpenAI CEO Sam Altman in November 2023, a move in which Sutskever played a pivotal role before ultimately expressing regret. At OpenAI, Sutskever co-led the superalignment team, which was tasked with developing methods to control powerful new AI systems. The team was disbanded after his high-profile departure.
SSI takes a distinctive approach to AI safety. The founders state on their website, “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.” They emphasize a singular focus on this goal, free from distractions such as management overhead or product cycles, and note that their business model insulates safety, security, and progress from short-term commercial pressures.
SSI is committed to pursuing safe superintelligence as a straight shot: one focus, one goal, and one product. This stands in stark contrast to the diversification seen at major AI labs such as OpenAI, DeepMind, and Anthropic in recent years.
Whether Sutskever and his team can make substantive progress toward their ambitious goal of safe superintelligent AI is a subject of great interest and debate. Critics argue that the challenge is as much a matter of philosophy as it is of engineering. Given the pedigree of SSI’s founders, however, their efforts will be closely watched.
The inception of SSI has given rise to a resurgence of the “What did Ilya see?” meme, reflecting the curiosity and anticipation surrounding Sutskever’s new venture.
In the rapidly evolving field of artificial intelligence, the emergence of SSI is a significant event. The company’s emphasis on safety and its focused approach to superintelligence hold promise for the future of AI. While the path to safe superintelligence is fraught with challenges, both technical and ethical, the formation of dedicated ventures like SSI is a positive step toward addressing them.
At AI First Agency, we are closely following these developments in the AI landscape. We are committed to enabling businesses to navigate the dynamic world of AI and leverage its power to drive growth and innovation. We believe that the work being done by companies like SSI is crucial in ensuring the safe and responsible advancement of AI technologies.