Three top news organizations, Yahoo News, The Wall Street Journal, and Bloomberg, are reliably generating reader-friendly summaries with generative AI. These optimized texts at the top of stories work well for both busy readers and Googlebot, providing a quick sense of a story's subject matter.

The news organization and aggregator Yahoo News developed a “Key Takeaways” feature for some articles on its site. The summaries extract information directly from the article itself rather than incorporating data from across the internet. Since the relaunch of these AI-powered features, user engagement has increased by 50% and time spent per user has risen by 165%. (Kat Downs Mulder, general manager of Yahoo News, said the acquisition of the app Artifact “really accelerated” the newsroom’s AI development process.)

AI-generated summaries at The Wall Street Journal are presented as three bullet points, called “Key Points.” Every summary prominently displays a “What’s this?” button that explains the feature to readers. “An artificial-intelligence tool created this summary, which was based on the text of the article and checked by an editor,” the Journal tells readers who click it. “Read more about how we use artificial intelligence in our journalism.” The Journal began working on the feature in early 2024; initially it was scoped for the Newswires product, aimed at B2B clients who wanted key information without reading the full article text. Google Gemini powers the Key Points. The Journal plans to experiment with more AI-generated features: its chatbots, Lars the Taxbot and Joannabot, help readers explore topics where the newsroom has deep expertise and authority.

The AI summaries on Bloomberg.com are called Takeaways, and they also appear on Bloomberg stories on the Bloomberg Terminal.
Bloomberg features them on longform pieces and plans to extend them to its opinion pieces in the future. These clear, concise snapshots are especially welcome in fast-moving news.
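The design all three publishers describe, summaries grounded only in the article text rather than the wider web, can be illustrated with a toy extractive summarizer. The real features use LLMs (the Journal's run on Google Gemini); this frequency-based stand-in is only a sketch of the "article text only" constraint, and every name in it is hypothetical:

```python
import re
from collections import Counter

def key_points(article: str, n: int = 3) -> list[str]:
    """Return the n highest-scoring sentences, in article order.

    A toy, frequency-based stand-in for the LLM summarizers the
    publishers use: it draws only on the article text itself.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", article.strip()) if s.strip()]
    stop = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "it", "for", "on", "with"}
    # Word frequencies computed from this article alone, nothing external.
    freq = Counter(w for w in re.findall(r"[a-z']+", article.lower()) if w not in stop)

    def score(sentence: str) -> float:
        toks = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in toks if t not in stop) / (len(toks) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n]
    return [s for s in sentences if s in ranked]  # preserve article order

article = (
    "The newsroom launched AI summaries today. The summaries draw only on the "
    "article text. Editors review every AI summary before it is published. "
    "Readers can click a button to learn how the summaries are made."
)
print(key_points(article, n=2))
```

Because every candidate sentence comes from the input string, the summary cannot introduce outside claims, which is the property the publishers emphasize; an editor review step, as at the Journal, would sit after this stage.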
This week, Anthropic introduced a beta feature that lets Claude build, host, and share AI-powered apps inside the chatbot, building on its Artifacts capability, as shown in the video below. Developers can see and interact with these apps and iterate faster on them, because Claude creates artifacts that talk back to the chatbot through an API. “Simply describe what you want to create, and Claude will write the code for you,” the startup explains. “Describe any app idea to Claude—a personalized storytelling tool, coding tutor, creative writing assistant—and watch it come to life, no coding required.” It's a kind of vibe-coding feature, but with the ability to see the results inside Claude.
Salesforce is using AI tools for 30% to 50% of its software engineering and customer service work, with a 93% accuracy rate, according to CEO Marc Benioff. The San Francisco-based software company is currently selling an AI product that promises to handle customer service tasks without human supervision. The automation is another example of a large company replacing labor with AI tools. Recently, executives at Microsoft Corp. and Alphabet Inc. said AI is generating approximately 30% of new software code on specific projects inside their companies. “All of us have to get our heads around this idea that AI can do things that we were doing before,” Benioff said. “We can move on to do higher-value work.”
Researchers at MIT presented a framework called SEAL (Self-Adapting Language Models) that enables LLMs to generate their own synthetic training data based on the input they receive and to learn from their experience. An AI model that never stops learning is an attempt to mimic human intelligence. Today's latest AI models can reason by performing more complex inference; by contrast, the MIT scheme generates new insights and then folds them into the model's own weights, or parameters. The system includes “a reinforcement learning signal that helps guide the model toward updates that improve its overall abilities and enable it to continue learning,” the MIT team explained in Wired. The researchers tested their approach on small and medium-sized versions of two open-source models, Meta’s Llama and Alibaba’s Qwen, and say the approach ought to work for much larger frontier models, too. They note that SEAL is computationally intensive and that it isn’t yet clear how best to schedule new periods of learning. “Still, for all its limitations, SEAL is an exciting new path for further AI research, and it may well be something that finds its way into future frontier AI models,” the MIT researchers said.
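The loop described above (the model proposes its own synthetic training data, applies a weight update, and a reward signal decides whether the update actually improved it) can be sketched with a toy linear model. This is a hedged illustration of the idea only, not MIT's implementation, which fine-tunes real LLMs and shapes the self-edits with reinforcement learning:

```python
import random

random.seed(0)

def loss(w: float, data: list[tuple[float, float]]) -> float:
    # Mean squared error of the linear model y = w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def sgd_step(w: float, data: list[tuple[float, float]], lr: float = 0.1) -> float:
    # One pass of gradient descent on the supplied (synthetic) data.
    for x, y in data:
        w -= lr * 2 * (w * x - y) * x
    return w

# Held-out data from the unknown target function y = 2x; the reward
# signal is the improvement in held-out loss after a self-edit.
held_out = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]

w = 0.0  # the model's single "weight"
for _ in range(200):
    x = random.uniform(0.5, 2.0)  # a new input arrives
    best_w, best_reward = w, 0.0
    # Sample several candidate self-edits (synthetic labels around the
    # model's current belief) and keep the most rewarding update, if any.
    for _ in range(4):
        edit = [(x, w * x + random.uniform(-2.0, 2.0))]
        w_new = sgd_step(w, edit)
        reward = loss(w, held_out) - loss(w_new, held_out)
        if reward > best_reward:
            best_w, best_reward = w_new, reward
    w = best_w  # fold the accepted update into the weights

print(round(w, 2))  # drifts toward 2.0, the target slope
```

The reward-gated selection is what keeps the self-generated data from degrading the model: edits that hurt held-out performance are simply discarded, mirroring (in miniature) how SEAL's reinforcement signal steers the model toward useful self-updates.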
Paris-based lab Mistral announced its first family of AI reasoning models, called Magistral, fine-tuned for multi-step logic, improved interpretability, and a traceable thought process, unlike general-purpose models. It follows the release of OpenAI’s o3 and Google’s Gemini 2.5 Pro. Magistral works through problems that require step-by-step deliberation and analysis, for improved consistency and reliability; in this regard, it mimics human thinking through logic, insight, uncertainty, and discovery. Purpose-built for transparent reasoning, Magistral comes in two variants, both suited to a wide range of enterprise use cases, from structured calculations and programmatic logic to decision trees and rule-based systems.
• Magistral Small, a 24-billion-parameter open-source version, available for download from the AI dev platform Hugging Face under the Apache 2.0 license.
• Magistral Medium, a more powerful, enterprise-grade version, in preview on Mistral’s Le Chat chatbot platform and the company’s API, as well as third-party partner clouds.
The release of Magistral follows the debut of Mistral’s “vibe coding” client, Mistral Code. Founded in 2023, Mistral builds AI-powered services, including Le Chat and mobile apps. It’s backed by venture investors like General Catalyst and has raised over $1.24 billion to date.