At NeurIPS, the prominent AI research conference, Ilya Sutskever, a renowned computer scientist and one of the co-founders of OpenAI, shared his thoughts on the trajectory of artificial intelligence (AI).
While being honored for his achievements in the field, Sutskever offered a striking outlook on the rise of “superintelligent AI,” a level of AI that surpasses human capabilities across numerous domains.
Sutskever highlighted the stark contrast between present AI systems and the superintelligent AI of the future, which he envisions as being genuinely “agentic.”
He suggested that future systems will possess a deep level of agency, a departure from today’s AI, which he described as only “very slightly agentic.” As these systems make more autonomous judgments, he argued, their actions may become increasingly unpredictable.
In his forecast, Sutskever did not shy away from the idea that AI systems might develop self-awareness and even come to contemplate their own rights.
He posited, “If AI systems peacefully seek co-existence with humans and desire rights, that might not be such a negative outcome.”
After departing OpenAI, Sutskever founded Safe Superintelligence (SSI), a research lab dedicated to developing AI safely.
SSI secured $1 billion in funding in September to back its goals. Sutskever’s work at SSI reflects mounting concern across the industry about the safe and ethical development of AI.
Sutskever’s remarks have drawn the attention of industry leaders and scholars, who are closely watching what role superintelligent AI might play in society and what consequences it could bring.
Those shaping the broader conversation around AI continue to follow the guidance of thought leaders like Sutskever, given the vast implications superintelligent AI holds for our future.