🇺🇸 Daily News on AI in Education and Technology | Publisher: Mikel Amigot
iblnews.org

Anthropic Added a Google Docs Integration to Its Claude.ai Assistant

Anthropic added a Google Docs integration to its Claude.ai assistant. The feature lets users access and reason about a document's content from Google Docs within their chats and Projects. Claude can summarize long Google Docs and reference historical context from the files to inform decision-making or help with strategic planning. The integration is available on the Claude Pro, Team, and Enterprise plans.

Another update allows Claude to match users' preferred communication and writing style. Users can choose from these presets:

- Formal: clear and polished responses
- Concise: shorter and more direct responses
- Explanatory: educational responses for learning new concepts

Beyond these presets, Claude can automatically generate custom styles and edit preferences as they evolve. OpenAI's ChatGPT and Google's Gemini offer similar features that let users tailor responses to their writing style and tone. The Writing Tools feature in Apple Intelligence also provides presets with similar styles.
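The style presets described above live in the Claude.ai interface, but the underlying idea can be approximated in any chat API by mapping each preset to a system prompt. The sketch below is purely illustrative: the prompt texts, model name, and request shape are assumptions, not Anthropic's actual implementation.

```python
# Illustrative sketch only: maps hypothetical style presets to system prompts.
# The model name and prompt wording are placeholders, not Anthropic's API.

STYLE_PROMPTS = {
    "formal": "Respond in clear, polished, professional prose.",
    "concise": "Respond as briefly and directly as possible.",
    "explanatory": "Respond like a teacher introducing a new concept.",
}

def build_request(user_message, style="formal"):
    """Assemble a chat request dict that applies one of the preset styles."""
    if style not in STYLE_PROMPTS:
        raise ValueError(f"unknown style: {style!r}")
    return {
        "model": "claude-3-5-sonnet-latest",  # placeholder model name
        "system": STYLE_PROMPTS[style],
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request("Summarize this quarterly planning doc.", style="concise")
print(req["system"])
```

A custom style, as the article describes, would amount to generating a new entry for the prompt table from examples of the user's writing.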

OpenAI Released a Course Encouraging K-12 Teachers to Use ChatGPT

Perplexity.ai Launched a New AI-Powered Shopping Assistant

Udacity Released Its 2025 State of AI at Work Report

Nvidia Introduced an AI Model That Modifies Sounds Using Text and Generates Novel Sounds

On Monday, Nvidia showed a new AI model that understands and generates sound as humans do. Called Fugatto (Foundational Generative Audio Transformer Opus 1), the model generates or transforms any mix of music, voices, and sounds described with prompts using any combination of text and audio files. However, Santa Clara, California-based Nvidia, the world's largest supplier of chips and software for AI systems, said it is still debating whether and how to release it publicly.

For example, Fugatto can create a music snippet from a text prompt, remove or add instruments in an existing song, change the accent or emotion in a voice, and even let people produce sounds never heard before. Another use case: an online course spoken in the voice of any family member or friend. Music producers can use Fugatto to quickly prototype or edit an idea for a song, trying out different styles, voices, and instruments; they could also add effects and enhance the overall audio quality of an existing track.

"This thing is wild, and the idea that I can create entirely new sounds on the fly in the studio is incredible," said Ido Zmishlany, a multi-platinum producer and songwriter and cofounder of One Take Audio, a member of the NVIDIA Inception program for cutting-edge startups.

Fugatto is a foundational generative transformer model that builds on Nvidia's prior work in speech modeling, vocoding, and audio understanding. The full version uses 2.5 billion parameters and was trained on a bank of NVIDIA DGX systems packing 32 NVIDIA H100 Tensor Core GPUs. Other players, such as Runway and Meta, have introduced models that generate audio or video from a text prompt.

Anthropic Open Sourced a New Standard for Connecting AI Assistants to Data Sources

Anthropic, the creator of the Claude chatbot, yesterday open-sourced a new standard called the Model Context Protocol (MCP) for connecting AI assistants to the systems where data lives. The standard aims to produce better, more relevant responses to queries, and it works with any model, not just Anthropic's.

AI assistants have gained mainstream adoption, but even the most sophisticated models are constrained by their isolation from data, trapped behind information silos and legacy systems. Every new data source requires a custom implementation, making truly connected systems hard to scale. Anthropic explained that MCP addresses this challenge by providing a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. "The result is a simpler, more reliable way for AI systems to access the data they need," the company said.

The architecture is straightforward: developers can expose their data through MCP servers or build AI applications (MCP clients) that connect to those servers. For developers, the Model Context Protocol has three major components:

- The Model Context Protocol specification and SDKs
- Local MCP server support in the Claude Desktop apps
- An open-source repository of MCP servers

To help developers start exploring, Anthropic shared pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer. Early adopters like Block and Apollo have integrated MCP into their systems, while development-tools companies including Zed, Replit, Codeium, and Sourcegraph are working with MCP to enhance their platforms. This enables AI agents to retrieve relevant information more effectively, understand the context surrounding a coding task more fully, and produce more nuanced and functional code with fewer attempts.
"Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration," said Dhanji R. Prasanna, Chief Technology Officer at Block.

In practice, the protocol enables developers to build two-way connections between data sources and AI-powered chatbots and applications: developers expose data through "MCP servers" and create "MCP clients" (for instance, apps and workflows) that connect to those servers on command.

Rivals like OpenAI prefer that customers and ecosystem partners use their own data-connecting approaches and specifications. OpenAI has said it plans to bring its capability, called Work with Apps, to other types of apps.
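Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. The sketch below illustrates that wire format only; the method name and resource fields follow the published spec as best understood, but this is an illustration of the message shapes, not a working client or server.

```python
import json

# Minimal sketch of MCP's message layer, assuming its use of JSON-RPC 2.0.
# Method names like "resources/list" follow the published spec; the resource
# entry shown is a made-up example.

def mcp_request(request_id, method, params=None):
    """Serialize a JSON-RPC 2.0 request such as an MCP client would send."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

def mcp_response(request_id, result):
    """Serialize the matching JSON-RPC 2.0 response from an MCP server."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id, "result": result})

# A client asks a server to enumerate the data it exposes...
request = mcp_request(1, "resources/list")

# ...and the server answers with the resources it is willing to share.
response = mcp_response(1, {
    "resources": [
        {"uri": "file:///notes/plan.md", "name": "plan.md",
         "mimeType": "text/markdown"}
    ]
})

print(request)
print(response)
```

Because every data source speaks this same protocol, an assistant needs one integration rather than one per silo, which is the scaling argument Anthropic makes above.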

NASA Teams with Microsoft to Create an AI Chatbot for Researchers

A Report Revealed the Winners and Losers in the New AI Landscape

IBM Partnered with Meta to Integrate Llama Into Its AI Platform WatsonX

Peter Thiel-Backed Mercor AI, Which Uses AI for Job Interviewing, Valued at $250M

Mercor, which uses AI to vet and interview job candidates and then match them to open roles, has conducted more than 100,000 interviews and evaluated 300,000 people in less than two years. With a workforce of 15 employees, the AI-interviewer startup is now valued at $250 million following a $32 million round. The business is profitable and has grown 50% month over month. Billionaire investor Peter Thiel, Twitter cofounder Jack Dorsey, two OpenAI board directors, Quora CEO Adam D'Angelo, and former Treasury Secretary Larry Summers also invested personally.

Mercor's marketplace now depends on its own LLM, which builds on OpenAI models and is fine-tuned on proprietary data from its job-seeking process. Applicants upload their resumes and take a 20-minute video interview with Mercor's AI: half that time is spent discussing the candidate's experience, and the other half responding to a relevant case study. The job seeker's application is then matched against all open jobs on Mercor's marketplace. For more specialized roles, a second, tailored AI interview might follow.

Mercor promises to quickly connect qualified candidates with employers through contracted hourly, part-time, and full-time commitments. Mercor's largest pool of such talent remains in India. The roles include engineering, product development, design, operations, and content. Mercor faces competition from well-capitalized talent marketplaces, such as startup unicorn Andela.

OpenAI Launched “Realtime API” For Multi-Modal Conversational Experiences

OpenAI announced several tools this week, including a public beta of its "Realtime API" for building nearly real-time, multi-modal conversational and AI-generated voice-response apps. It currently supports text and audio as input and output, as well as function calling. The low-latency responses use only six preset voices, not third-party voices, to prevent copyright issues.

The move follows OpenAI's effort at its 2024 DevDay to convince developers to build tools with its AI models. The San Francisco-based research lab said that over 3 million developers are building with its AI models. OpenAI Chief Product Officer Kevin Weil said the recent departures of CTO Mira Murati and Chief Research Officer Bob McGrew won't slow innovation.

As part of its DevDay announcements, OpenAI will help developers improve the performance of GPT-4o for tasks involving visual understanding. OpenAI also said it won't release any new AI models during DevDay this year; TechCrunch reported that the video-generation model Sora will have to wait a little longer.
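The Realtime API is driven by JSON events sent over a WebSocket. The sketch below shows two simplified event shapes; the field names follow OpenAI's beta documentation as best understood, but the session options are abbreviated, and a real app would send these over an authenticated wss:// connection, which is omitted here.

```python
import json

# Sketch of Realtime API WebSocket event shapes (simplified, not exhaustive).
# A real client opens an authenticated WebSocket and streams these events.

def session_update(voice, instructions):
    """Configure the session: enable text+audio output and pick a preset voice."""
    return json.dumps({
        "type": "session.update",
        "session": {
            "modalities": ["text", "audio"],
            "voice": voice,  # one of the six preset voices, e.g. "alloy"
            "instructions": instructions,
        },
    })

def response_create():
    """Ask the model to generate a response to the conversation so far."""
    return json.dumps({"type": "response.create"})

print(session_update("alloy", "You are a friendly tutor."))
print(response_create())
```

Restricting `voice` to a fixed preset list is how the API avoids the third-party voice-cloning issues the article mentions.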


Today's Summary

Monday, November 24, 2025

Education technology today is marked by rising AI adoption among educators and innovative personalized learning approaches.


Today in AI & EdTech

AI is transforming the education technology landscape as more teachers adopt intelligent tools, driving personalized and adaptive learning experiences.

AI & EdTech Videos

OpenAI Launches Educational GPT Model

Adaptive Learning Platforms Show 40% Improvement

Microsoft Education Copilot Beta Launch

Today in Education

U.S. Department of Education Announces New Funding for STEM Programs

The initiative aims to support science, technology, engineering, and mathematics education.

Global Education Summit Highlights Digital Learning Innovations

Leaders from around the world discuss the future of remote and hybrid learning models.

New Study Shows Benefits of Early Childhood Education

Research indicates significant long-term academic and social advantages for students.


      IBL News

      This work is licensed under Creative Commons (CC BY 4.0). IBL News is a nonprofit initiative founded in 2014.

      © 2025 Class Generation, LLC d.b.a. ibl.ai, ibleducation.com and iblnews.org - 845 Third Avenue, 6th Fl, New York, NY 10022 - Tel 646-722-2616 - Made in U.S.A. • Terms of Use • Privacy Policy