IBL News | New York
Will AI now move faster and be less controlled?
According to most observers, the chaotic events of the last week at OpenAI have sped everything up.
Those warning about the risks of AI lost the battle in the drama over control of OpenAI, the $90 billion start-up, as two of the three external board members were replaced and the ousted CEO, Sam Altman, was reinstated.
It all took place within a week, and it ended this way thanks to the support of Microsoft and investors — the money component — and the backing of over 90% of employees for Altman.
Created to build AGI, OpenAI has been pursuing it as fast as possible while, strategically, its CEO has pushed for an anti-competitive regulatory environment, warning that AI innovation is becoming extremely dangerous and that governments should get involved.
Many in tech think that Sam Altman was simply trying to get governments to ban the competition, especially open-source models.
Those at OpenAI who believe the capitalist impulse should slow down and proceed carefully with AI — the majority of a board hired to do exactly that — mounted a coup against those who believe innovation, and therefore profits, should speed up.
However, the reality is that even machine-learning scientists don't know when AGI will be achieved. Part of the problem in any discussion of AGI is that it remains a mostly abstract concept.
“ChatGPT might scale all the way to the Terminator in five years, or in five decades, or it might not,” wrote The Financial Times.
“Failed coups often accelerate the thing that they were trying to prevent.”