OpenAI co-founder Ilya Sutskever has offered thought-provoking insights into the future of artificial intelligence, envisioning a transformative and unpredictable era of “superintelligent AI.” Speaking at NeurIPS, the annual AI research conference, shortly before accepting an award for his contributions to the field, Sutskever discussed the potential capabilities and challenges of AI systems that surpass human performance across a wide range of tasks.
Sutskever predicts that superintelligent AI will be fundamentally different from the systems we know today. These advanced systems, he explained, will not only reason more effectively but also exhibit genuine agency, making them inherently unpredictable. Unlike current AI, which Sutskever described as only “slightly agentic,” superintelligent systems will be able to understand things from limited data and will demonstrate a form of self-awareness.
This self-awareness could lead such systems to seek rights. “It’s not a bad end result if you have AIs, and all they want is to co-exist with us and just to have rights,” Sutskever remarked, highlighting the ethical considerations that could emerge as AI continues to evolve.
After departing OpenAI, Sutskever founded Safe Superintelligence (SSI), a research lab dedicated to the safe development of advanced AI. In September, SSI raised $1 billion, underscoring both the urgency of the effort and the breadth of interest in safer paths for AI development.
As the AI landscape advances, Sutskever’s predictions signal profound possibilities and responsibilities, emphasizing the need for careful thought and preparation in the age of superintelligence.