IBL News | New York
The arrival of GPT-3 as a web-based API service for developers has started to impact the higher education landscape. One of the first applications is writing papers on the cheap, and The Chronicle of Higher Education wonders whether this type of artificial intelligence program will kill college writing.
“These outputs can be astonishingly specific and tailored. When asked to write a song protesting the inhumane treatment of animals in the style of Bob Dylan, the program clearly draws on themes from Dylan’s,” says The Chronicle.
The GPT-3 software, developed by the San Francisco-based AI lab OpenAI, co-founded by Elon Musk, and introduced in 2020, is a kind of omniscient, smart Siri or Alexa that can turn any prompt into prose or even write code. There are multiple examples of experiments on YouTube with demos of AI robots interacting with humans. [Example I] [Example II] [Example III].
This technology performs a wide variety of natural language tasks and can translate natural language into code. Many companies are currently building products on top of OpenAI’s flagship large language model (LLM).
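As a rough illustration of how such products talk to GPT-3, the sketch below calls OpenAI’s completions endpoint with the Python client as it existed at the time (the openai package prior to version 1.0); the model name, prompt, and parameters are placeholders rather than a prescribed setup.

```python
# Minimal sketch of a GPT-3 completion request (openai package < 1.0).
# The prompt and parameters are illustrative; an API key is assumed to be
# set in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 "Davinci" model offered at the time
    prompt="Write a short paragraph explaining photosynthesis to a freshman.",
    max_tokens=150,
    temperature=0.7,
)

# The generated text comes back in the first choice of the response.
print(response["choices"][0]["text"].strip())
```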
Banks are already using Davinci, the largest GPT-3 model, for their customer service chatbots, and this use of deep learning technology to produce humanlike text, video, and advice is expected to be increasingly applied in assessments of student learning. With the advent of this level of human-computer interaction, the roles of faculty, students, research, and publications will be rethought around GPT-3, according to analyst and writer Ray Schroeder.
Beyond facts and perspectives, these neural networks learn and can teach skills such as mathematics, computer programming, creative writing (without plagiarizing responses), and even poetry, after absorbing and processing terabytes of text and literature found online. GPT-3 has been trained on a large share of what has been publicly written, including Wikipedia, books, scientific papers, and news articles.
The success of GPT-3 has encouraged other companies to launch their own LLM research projects. Google, Meta, Nvidia, and other large tech corporations have accelerated work on LLMs.
Today, there are several LLMs that match or outpace GPT-3 in size or benchmark performance, including Meta’s OPT-175B, DeepMind’s Chinchilla, Google’s PaLM, and Microsoft and Nvidia’s Megatron-Turing NLG (MT-NLG).
This month, Amazon’s AI researchers unveiled the Alexa Teacher Model (AlexaTM 20B), claiming that it beats GPT-3 on NLP benchmarks. The model has yet to be released publicly. [GitHub repository].
GPT-3 also triggered the launch of several open-source projects that aim to make LLMs available to a wider audience. BigScience’s BLOOM and EleutherAI’s GPT-J are two examples of open-source LLMs available free of charge. Cerebras, meanwhile, has created a huge AI processor that can train and run LLMs with billions of parameters at a fraction of the cost.
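For readers who want to try one of these free models, the sketch below shows one plausible way to run EleutherAI’s GPT-J locally with the Hugging Face transformers library; the prompt and generation settings are illustrative, and the full 6B-parameter model demands substantial memory.

```python
# Minimal sketch of running an open-source LLM (EleutherAI's GPT-J) locally
# with the Hugging Face transformers library. GPT-J-6B needs roughly 24 GB
# of memory in full precision, so this is not a laptop-friendly default.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")

output = generator(
    "The impact of large language models on higher education is",
    max_new_tokens=60,   # length of the continuation, chosen for illustration
    do_sample=True,      # sample rather than greedily decode
)
print(output[0]["generated_text"])
```

A smaller checkpoint such as gpt2 can be swapped in for quick experiments on ordinary hardware.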
However, OpenAI, which slashed the price of its GPT-3 API service by up to two-thirds this month, is no longer the only company providing LLM API services. Hugging Face, Cohere, and Humanloop are some of the other players in the field.
Hugging Face provides a wide variety of transformer models, all of which are available as downloadable open-source models or through API calls. Hugging Face recently released a new LLM service powered by Microsoft Azure, the same cloud that OpenAI uses for its GPT-3 API.
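A hosted model can also be queried over HTTP instead of being downloaded; the sketch below assumes Hugging Face’s public Inference API endpoint pattern and a personal access token stored in an environment variable, with the model name and prompt chosen purely for illustration.

```python
# Minimal sketch of querying a model hosted by Hugging Face over HTTP,
# rather than downloading it. HF_API_TOKEN is assumed to hold a valid
# access token; the model and prompt are illustrative.
import os
import requests

API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B"
headers = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}

payload = {"inputs": "Large language models are changing higher education by"}
response = requests.post(API_URL, headers=headers, json=payload)

# The service returns JSON containing the generated continuation.
print(response.json())
```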
Many organizations can’t handle the technical challenges of training and running these models, as LLMs require dozens or even hundreds of GPUs and come with huge hardware costs. That is one of the reasons OpenAI and other companies decided to provide API access to LLMs.
• Interaction opportunities with AI: Copy.AI, Blenderbot.AI, Iamsophie.IO
• Ray Schroeder: Higher Ed, Meet GPT-3: We Will Never Be the Same!