IBL News | New York
Four days before Sam Altman’s ouster, several staff researchers wrote a letter to the board of directors warning of a powerful AI discovery, called Project Q*, that they said could threaten humanity, Reuters reported yesterday, citing two people familiar with the matter.
The previously unreported letter was a key development ahead of the firing of Altman, the poster child of generative AI, who would triumphantly return late Tuesday.
The letter was one factor among a longer list of the board’s grievances, which reflected concerns over commercializing advances before understanding their consequences.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters.
OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation, statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math, where there is only one right answer, implies AI would have greater reasoning capabilities resembling human intelligence. AI researchers believe this could be applied, for instance, to novel scientific research.
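To make the "next word" idea concrete, here is a minimal Python sketch with a made-up vocabulary and made-up model scores (none of this comes from a real model): the scores are turned into a probability distribution and a word is sampled from it, which is why the same prompt can produce different answers.

```python
# Minimal sketch of autoregressive next-word sampling.
# The vocabulary and logits below are hypothetical, for illustration only.
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities; higher temperature flattens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next words and model scores for the prompt "2 + 2 ="
vocab = ["4", "5", "four", "22"]
logits = [3.2, 0.4, 1.1, -0.5]

probs = softmax(logits, temperature=1.0)
for _ in range(5):
    # random.choices samples according to the weights, so repeated runs differ
    print(random.choices(vocab, weights=probs, k=1)[0])
```

Running the snippet several times will usually print "4" but occasionally another candidate; a system that reasons rather than samples would need to land on the single right answer every time.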
Unlike a calculator, which can perform a limited set of operations, AGI can generalize, learn, and comprehend.
There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance, whether they might decide that the destruction of humanity was in their interest.
Altman has drawn investment and computing resources from Microsoft to get closer to AGI. Now that he is back, some analysts say he may face fewer checks on his power.
Please ignore the deluge of complete nonsense about Q*.
One of the main challenges to improve LLM reliability is to replace Auto-Regressive token prediction with planning. Pretty much every top lab (FAIR, DeepMind, OpenAI etc) is working on that and some have already published…
— Yann LeCun (@ylecun) November 24, 2023