🇺🇸 Daily News on AI in Education and Technology | Publisher: Mikel Amigot
iblnews.org

AI-Powered Platform iLearningEngines to List on Nasdaq Via Merger

Bethesda, Maryland-based training software company iLearningEngines Inc. has agreed to go public on Nasdaq through a merger with blank-check company Arrowroot Acquisition Corp (ARRW.O) in a SPAC deal that values the combined company at $1.4 billion. The deal will provide iLearningEngines with $143 million in gross proceeds, some of which will be used for future acquisitions. The publicly traded special-purpose acquisition company is sponsored by Arrowroot Capital, a 10-year-old private equity firm specializing in enterprise software.

iLearningEngines supplies companies with personalized training materials using AI-powered automation tools and software. Founded in 2010, the company builds "Knowledge Clouds" from an organization's internal and external content and data, creating a central repository of all enterprise intellectual property. It then distributes that knowledge into enterprise workflows to drive autonomous learning, intelligent decision-making, and process automation. The company is a profitable business with $300 million in annual revenue, serving clients in 12 core verticals, including oil & gas, education, healthcare, and insurance.

Arrowroot Acquisition Corp raised $290 million through its initial public offering in 2021 with the aim of merging with companies in the enterprise software sector. iLearningEngines, which has invested over 100,000 engineering research and development hours in its platform, priced the deal at 3.3x estimated 2023 revenue. The combined company will continue to be led by iLearningEngines' founder and CEO, Harish Chidambaran.

Artificial intelligence (AI) and machine learning (ML) startups globally have raised about $12.1 billion so far this year, according to PitchBook.
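As a back-of-the-envelope check on the deal terms (this is an inference from the stated figures, not a company disclosure): at a 3.3x multiple, the $1.4 billion valuation implies estimated 2023 revenue of roughly $1.4 billion ÷ 3.3 ≈ $424 million, which would mean substantial projected growth over the company's current $300 million in annual revenue.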

Sal Khan Demoed Khanmigo AI Tutor Described As "A Teacher's Aide on Steroids" [Video]

Stability Releases Its New LLM, Open-Source, and Free to Use Commercially

Open-Source Initiatives Challenge Closed, Proprietary AI Systems With New LLMs

Artificial Intelligence Enters a New Phase of Corporate Dominance

The 2023 AI Index [read in full here], compiled by researchers from Stanford University along with AI industry players including Google, Anthropic, McKinsey, LinkedIn, and Hugging Face, suggests that AI is entering an era of corporate control, with industry dominating academia and government in deploying and safeguarding AI applications. Decisions about how to deploy this technology and how to balance risk and opportunity lie firmly in the hands of corporate players, as we have seen in recent years with AI tools like ChatGPT, Bing, and the image-generating software Midjourney going mainstream.

The report, released today, states: "Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, compute, and money, resources that industry actors inherently possess in greater amounts compared to nonprofits and academia."

Many experts in the AI world, cited by The Verge, worry that the incentives of the business world will lead to dangerous outcomes as companies rush out products and sideline safety concerns. As AI tools become more widespread, the number of errors and malicious use cases is increasing. Such incidents include fatalities involving Tesla's self-driving software; the use of audio deepfakes in corporate scams; the creation of nonconsensual deepfake nudes; and numerous cases of mistaken arrests caused by faulty facial recognition software.

TCRIL Changes Its Name to Axim Collaborative and Names a CEO

The MIT and Harvard nonprofit organization that oversees the Open edX platform, the Center for Reimagining Learning (or "tCRIL"), has named its first CEO: Stephanie Khurana [in the picture]. She assumed her role on April 3. In parallel, the organization, which was started by the two universities with the $800 million in proceeds from the sale of edX Inc. to 2U, changed its name to Axim Collaborative. Axim Collaborative's mission is to make learning more accessible, more relevant, and more effective. The name Axim, a hybrid of "access" and "impact," was selected to underscore the centrality of those two ideas.

Khurana brings two decades of experience in social venture philanthropy and in the technology innovation space. Most recently, she served as managing partner and chief operating officer of the Draper Richards Kaplan Foundation, a global venture philanthropy that identifies and supports innovative social ventures tackling complex societal problems. Earlier in her career, Khurana was on the founding teams of two technology start-ups, Cambridge Technology Partners (CTP) and Surebridge, both of which were later sold. She also served in numerous roles at Harvard University, working on initiatives to support academic progress and build communities of belonging among undergraduates.

Khurana introduced herself to Open edX community members in a town hall-style session last Friday, March 31, at the end of the annual developers conference. The gathering, held at MIT's Stata Center in Cambridge, Massachusetts, attracted over 250 attendees, a number similar to past editions.

One of the stories of the event was the acquisition of the France-based company Overhang.IO, creator of the distribution tool Tutor. The Pakistani-American firm Edly purchased it for an undisclosed amount. Régis Behmo, the founder and sole developer of Overhang.IO, assumed the role of VP of Engineering at Edly. "Edly understands how contributing to open source creates value both for the company and for the whole edTech community. This partnership will help us drive this movement forward to serve learners and educators worldwide," Behmo said. "Régis's experience and leadership will be invaluable as we increase our impact on educational technology. In coming weeks and months, we'll be making further announcements around our expanded roadmap for open source contributions to Open edX," said Yasser Bashir, the founder and CEO of Arbisoft LLC, which operates Edly as its edTech brand.

Language Models that Run Themselves Accelerate the Advent of AGI

Bloomberg Introduces a 50-Billion Parameter LLM Built For Finance

OpenAI's CEO Envisions a Universal Income Society to Compensate for Jobs Replaced by AI

Italy Bans ChatGPT While Elon Musk and 1,100 Signatories Call for a Pause on AI [Open Letter]

Italy's data protection authority said on Friday that it will immediately block OpenAI from processing the data of Italian users and will open an investigation. The order is temporary until the company complies with the European Union's landmark privacy law, the General Data Protection Regulation (GDPR). Italy's ban on ChatGPT comes amid calls, in both Europe and the U.S., to block OpenAI's releases over a range of privacy, cybersecurity, and disinformation risks.

The Italian authority noted that ChatGPT suffered a data breach last week that exposed users' conversations and payment information. Moreover, ChatGPT has been shown producing completely false information about named individuals, apparently making up details its training data lacks. Consumer advocacy groups say that OpenAI is engaged in a "mass collection and storage of personal data to train the algorithms of ChatGPT" and is "processing data inaccurately."

This week, Elon Musk and dozens of AI experts called for a six-month pause on training systems more powerful than GPT-4. Over 1,100 signatories, including Steve Wozniak, Tristan Harris of the Center for Humane Technology, some engineers from Meta and Google, and Stability AI CEO Emad Mostaque, signed an open letter, posted online, calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

• "Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

• "AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."

• "The pause should be public and verifiable, and include all key actors. If it cannot be enacted quickly, governments should step in and institute a moratorium."

• "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

• "This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

No one from OpenAI or Anthropic signed the letter. On Wednesday, OpenAI CEO Sam Altman told the WSJ that the company has not started training GPT-5.

Pause Giant AI Experiments: An Open Letter

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities. AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

Google Shows What AI-Embedded Writing Will Look Like in Gmail and Google Docs

Google announced yesterday that it plans to embed generative AI in Gmail and Google Docs, as shown in the video below. The features of this "collaborative AI partner" are not out yet. They will be launched this month through Google's tester program, starting with English in the U.S. "From there, we'll iterate and refine the experiences before making them available more broadly to consumers, small businesses, enterprises, and educational institutions in more countries and languages," wrote Johanna Voolich Wright, Vice President of Product at Google Workspace.

For now, Google says it is only "sharing our broader vision" across Gmail, Docs, Slides, Sheets, Meet, and Chat. A "help me write" box in Gmail and Google Docs will let users type what they want, and the AI will produce a block of text based on that prompt. In addition, Google's "collaborative AI partner" in Workspace will let users:

• draft, reply, summarize, and prioritize in Gmail
• brainstorm, proofread, write, and rewrite in Docs
• bring a creative vision to life with auto-generated images, audio, and video in Slides
• go from raw data to insights and analysis via auto-completion, formula generation, and contextual categorization in Sheets
• generate new backgrounds and capture notes in Meet
• enable workflows for getting things done in Chat

Google Cloud also announced generative AI support in Vertex AI and Generative AI App Builder, helping businesses and governments build generative AI apps. So far, the company has opened up API access to a language model, but there has not been any real consumer product launch. Analysts interpret this as a sign that Google is in total panic over the rise of ChatGPT and AI-powered text. Just as Google put social features into every product back in the G+ days, the plan going forward is to build ChatGPT-style generative text into every Google product.

Google AI Announcement:
– PaLM API & MakerSuite
– AI in Gmail, Google Docs & Workspace
– Generative AI support in Vertex AI
– Generative AI App Builder
– Partnerships, programs, and resources for each segment of the ecosystem
pic.twitter.com/NHZ5zVo2EK — Ben Tossell (@bentossell) March 14, 2023







      IBL News

      This work is licensed under Creative Commons (CC BY 4.0). IBL News is a nonprofit initiative founded in 2014.

      © 2025 Class Generation, LLC d.b.a. ibl.ai, ibleducation.com and iblnews.org - 845 Third Avenue, 6th Fl, New York, NY 10022 - Tel 646-722-2616 - Made in U.S.A. • Terms of Use • Privacy Policy