IBL News | New York
After the release of the bot ChatGPT a year ago, the second phase of personalized, autonomous AI agents is emerging.
These agents can perform tasks such as sending emails, scheduling meetings, and booking flights or restaurant tables, as well as more complex ones like buying presents for family members or negotiating a raise.
Personalized chatbots programmed for specific tasks, which creators will be able to release through OpenAI’s upcoming GPT Store, are a prelude.
For now, these custom GPTs are easy to build without knowing how to code.
Users just answer a few simple questions about their bot — its name, its purpose, the tone it should use when responding — and the bot builds itself in a few seconds. Users can upload PDF documents to serve as reference material or for quick Q&A lookups. They can also connect the bot to other apps or edit its instructions.
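Conceptually, the builder described above turns a handful of answers into a system prompt plus attached reference files. The following is a minimal, hypothetical sketch of that flow; the function and field names are illustrative only and are not OpenAI’s actual API.

```python
# Hypothetical sketch of the "answer a few questions" builder flow described
# above. All names here (build_bot_config, reference_files) are invented for
# illustration; they are not part of any real OpenAI interface.

def build_bot_config(name, purpose, tone, reference_files=None):
    """Assemble a bot's system instructions and config from the builder's
    three questions: name, purpose, and tone."""
    instructions = (
        f"You are {name}. Your purpose is to {purpose}. "
        f"Always respond in a {tone} tone."
    )
    return {
        "name": name,
        "instructions": instructions,
        # Uploaded PDFs the bot can consult as reference material
        "reference_files": list(reference_files or []),
    }

config = build_bot_config(
    name="Startup Mentor",
    purpose="give advice to aspiring founders",
    tone="encouraging",
    reference_files=["speech_transcript.pdf"],
)
print(config["instructions"])
```

In a real deployment, the resulting instructions and files would be handed to the model provider’s assistant-creation endpoint rather than printed.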
Although these custom chatbots are far from working perfectly, they can be useful tools for answering repetitive questions in customer service departments.
Some AI safety researchers fear that giving bots more autonomy could lead to disaster, The New York Times reported. The Center for AI Safety, a nonprofit research organization, listed autonomous agents as one of its “catastrophic AI risks” this year, saying that “malicious actors could intentionally create rogue AI with dangerous goals.”
For now, these agents look harmless and limited in their scope.
Their development seems to depend on gradual, iterative deployment — small improvements at a fast pace rather than a big leap.
At OpenAI’s latest developer conference, Sam Altman built a “start-up mentor” chatbot on stage to give advice to aspiring founders, based on an uploaded file of a speech he had given years earlier.
The San Francisco-based research lab envisions a world where AI agents will be extensions of us, gathering information and taking action on our behalf.
This is really worth your time – a very solid technical introduction to LLMs, great if you’ve not been paying close attention but I picked up quite a few useful details from it too https://t.co/eOfOyWSxC5
— Simon Willison (@simonw) November 23, 2023
The most clearest and crisp explanation, I've ever heard, of how large language models compress and capture a "world-model" in their weights simply by learning to predict the next word accurately.
Furthermore, how the raw power of these base models can then be tamed by teaching… pic.twitter.com/0g7Z5wXOlc
— Zain Hasan (@ZainHasan6) November 21, 2023