IBL News | New York
The 2023 AI Index [read in full here] — compiled by researchers from Stanford University along with contributors from companies including Google, Anthropic, McKinsey, LinkedIn, and Hugging Face — suggests that AI is entering an era of corporate control, with industry players dominating academia and government in deploying and safeguarding AI applications.
Decisions about how to deploy this technology and how to balance risk and opportunity lie firmly in the hands of corporate players, as we've seen over the past year with AI tools like ChatGPT, Bing, and the image-generating software Midjourney going mainstream.
The report, released today, states: “Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, compute, and money, resources that industry actors inherently possess in greater amounts compared to nonprofits and academia.”
Many experts in the AI world, cited by The Verge, worry that the incentives of the business world will also lead to dangerous outcomes, as companies rush out products and sideline safety concerns.
As AI tools become more widespread, the number of errors and malicious use cases is increasing. Such incidents include fatalities involving Tesla's self-driving software; the use of audio deepfakes in corporate scams; the creation of nonconsensual deepfake nudes; and numerous wrongful arrests caused by faulty facial recognition software.