IBL News | New York
OpenAI CEO and co-founder Sam Altman [in the picture], President Greg Brockman, and Chief Scientist Ilya Sutskever proposed an international regulatory body for the governance of superintelligence, meaning future AI systems far more capable than today's models.
“It would be unintuitively risky and difficult to stop the creation of superintelligence,” they wrote in a blog post on the company’s website.
This agency — “something like an IAEA for superintelligence efforts” — would inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, and track compute and energy usage.
“Within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” they explained.
“Major governments around the world could set up a project that many current efforts become part of, or we could collectively agree that the rate of growth in AI capability at the frontier is limited to a certain rate per year.”