A federal judge in Washington ruled that Google illegally monopolized the search market and related advertising through exclusive deals that set its search engine as the default option on phones and browsers, in violation of U.S. antitrust law. Google's owner, Alphabet Inc., paid $26 billion to secure those default placements on smartphones and web browsers, effectively blocking any other competitor from succeeding in the market, Judge Amit Mehta found.

In a 286-page ruling, Judge Mehta stated that Google has consistently raised the prices of online advertising without consequences by monopolizing distribution on phones and browsers. "The trial evidence firmly established that Google’s monopoly power, maintained by the exclusive distribution agreements, has enabled Google to increase text ads prices without any meaningful competitive constraint," he wrote.

Antitrust enforcers alleged that Google has paid Apple, Samsung Electronics, and others billions of dollars over decades for prime placement on smartphones and web browsers. That default position has made Google the most used search engine in the world and fueled more than $300 billion in annual revenue, largely generated by search ads.

The ruling hands the federal government a win in its first major antitrust case against a tech giant in more than two decades. "This victory against Google is a historic win for the American people," said Attorney General Merrick Garland. "No company — no matter how large or influential — is above the law. The Justice Department will continue to enforce the antitrust laws vigorously." Google said it plans to appeal the decision.
Yesterday, OpenAI announced SearchGPT, a prototype of new search features that combine the strength of its AI models with real-time information from across the internet. It provides timely answers with relevant sources, making it a meaningful threat to Google, which has rushed to add AI features across its search engine. It also puts OpenAI in more direct competition with the startup Perplexity, which labels itself as an AI “answer” engine.

Powered by the GPT-4 family of models, the AI-powered search engine launched to only 10,000 test users and publishers, among them The Wall Street Journal, The Associated Press, and The Verge. OpenAI said that "in the future" it will be integrated into ChatGPT. For now, there is a waitlist.

• SearchGPT prominently cites and links to publishers in searches. Responses carry clear, in-line, named attribution and links.
• It allows users to search in a more natural, intuitive way by asking follow-up questions or clicking the sidebar to open other relevant links.
• It provides visual responses with images and video.

https://iblnews.org/wp-content/uploads/2024/07/SearchGPT_Hero_Asset_V2.mp4
Publicly traded online learning company 2U Inc., owner of edX.org, filed for Chapter 11 bankruptcy protection in New York while being taken private in a deal that will wipe out about $459 million, or nearly half of its $945 million debt. As a private company, 2U will be backed by its existing lenders and noteholders, including funds managed by Mudrick Capital Management, LP, Greenvale Capital LLP, and Bayside Capital, LLC.

The Lanham, Maryland-based company said that "all educational programs and services will continue seamlessly with no interruption for partners or learners." The agreement with its debtholders includes a $110 million infusion of new capital into 2U. To implement the transaction, 2U and its subsidiaries filed voluntary "prepackaged" Chapter 11 cases in the U.S. Bankruptcy Court for the Southern District of New York. 2U expects to complete the Chapter 11 process by the end of September.

"This financing demonstrates the investors' deep belief in 2U and commitment to its essential mission," said Brian Napack, Strategic Advisor to the investment group and former CEO of John Wiley (WLY). "Today marks an important milestone for 2U. New capital and a healthier balance sheet will enable us to continue our long-standing mission," said Paul Lalljie, Chief Executive Officer of 2U.
D2L, the Toronto-based company behind the Brightspace learning platform, showcased an early version of Lumi, its AI feature set for building content, assessments, and activities, during its annual gathering this month. D2L described Lumi this way:

"Lumi Quiz – Helps educators generate quiz questions based on their course content seamlessly.
Lumi Idea – Generates intuitive suggestions for new assignments and discussions aligned with course material.
Lumi Practice – Creates practice questions in Creator+ based on course content, saving time and helping to improve learning outcomes.
Lumi Chat – Creates automated answers to FAQs, surfaces resources and how-to guides for users, and helps to reduce the volume of IT tickets and shorten wait times."

"In the coming months, D2L Lumi will be included throughout our core products to enhance everyday workflows, helping save time and supporting the best learning experiences possible," said Stephen Laster, President of D2L. "D2L Lumi is helping to make course creation, delivery, teaching, and learning easier and more engaging," added John Baker, Founder and CEO of D2L.

> Video
NVIDIA and French startup Mistral AI released Mistral NeMo 12B this week, a new open-source LLM with 12 billion parameters and a 128,000-token context window. It is intended for developers who want to customize and deploy enterprise applications supporting chatbots, multilingual tasks, coding, and summarization without extensive cloud resources. The model's Apache 2.0 open-source license allows enterprises to integrate Mistral NeMo into commercial applications seamlessly. "We have developed a model with unprecedented accuracy, flexibility, high efficiency, and enterprise-grade support and security thanks to NVIDIA AI Enterprise deployment," said Guillaume Lample, cofounder and chief scientist of Mistral AI.

Mistral NeMo was trained on the NVIDIA DGX Cloud AI platform and comes packaged as an NVIDIA NIM inference microservice. NVIDIA TensorRT-LLM, for accelerated inference performance, and the NVIDIA NeMo development platform were also used to advance and optimize the process. Designed to fit in the memory of a single NVIDIA L40S, NVIDIA GeForce RTX 4090, or NVIDIA RTX 4500 GPU, the Mistral NeMo NIM offers high efficiency, low compute cost, and enhanced security and privacy. With the flexibility to run anywhere (cloud, data center, or RTX workstation), Mistral NeMo is available as an NVIDIA NIM via ai.nvidia.com, with a downloadable NIM coming soon.
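For developers curious about what "packaged as a NIM" means in practice, here is a minimal sketch of querying a deployed Mistral NeMo microservice. NIM LLM services expose an OpenAI-compatible chat-completions API, so the standard OpenAI Python client can be pointed at them; the endpoint URL, API key handling, and model identifier below are assumptions for illustration only, not confirmed values from the announcement, so check your own deployment or ai.nvidia.com for the exact details.

```python
# Minimal sketch: querying a Mistral NeMo NIM through its OpenAI-compatible
# chat-completions endpoint. The base_url and model id are assumptions:
# a self-hosted NIM is often served locally, while NVIDIA's hosted catalog
# requires an NVIDIA API key. Verify both against your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local NIM endpoint
    api_key="not-needed-for-local-nim",    # hosted endpoints expect a real key
)

response = client.chat.completions.create(
    model="mistral-nemo-12b-instruct",     # illustrative id; list with client.models.list()
    messages=[
        {"role": "system", "content": "You are a concise enterprise assistant."},
        {"role": "user", "content": "Summarize the benefits of a 128k-token context window."},
    ],
    temperature=0.2,
    max_tokens=256,
)

print(response.choices[0].message.content)
```

Because the interface mirrors the OpenAI API, an application already written against that client can, in principle, switch to a locally hosted Mistral NeMo by changing only the base URL and model name.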