Researchers at Stanford and the University of Washington said in a paper released this month that they trained an AI reasoning model called s1 that performs similarly to OpenAI’s o1 and DeepSeek’s R1 on math and coding benchmarks. The s1 model, along with its data and code, is available on GitHub, and according to the researchers, training it cost less than $50 in cloud computing credits.

The team started with an off-the-shelf base model and fine-tuned it through distillation, a process that extracts the “reasoning” capabilities of another AI model by training on its answers. s1 was distilled from Gemini 2.0 Flash Thinking Experimental, which Google offers for free through its AI Studio platform. Distillation is the same approach Berkeley researchers used last month to create an AI reasoning model for around $450, and OpenAI has accused DeepSeek of improperly harvesting data from its API for model distillation. Distillation is an effective way to cheaply re-create an existing AI model’s capabilities, but it doesn’t produce new models that push past what is already available.

The s1 paper suggests that reasoning models can be distilled with a relatively small dataset using supervised fine-tuning (SFT), in which an AI model is explicitly trained to mimic certain behaviors in a dataset. More specifically, s1 is based on a small, free AI model from Qwen, the Alibaba-owned Chinese AI lab. To train s1, the researchers built a dataset of just 1,000 carefully curated questions, each paired with an answer and the “thinking” process behind it, generated by Google’s Gemini 2.0 Flash Thinking Experimental. Training took less than 30 minutes on 16 Nvidia H100 GPUs, after which s1 achieved strong performance on specific AI benchmarks.

Per the paper, the researchers used a nifty trick to get s1 to double-check its work and extend its “thinking” time: they told it to wait. Appending the word “wait” during s1’s reasoning helped the model arrive at slightly more accurate answers. Experts said that s1 raises fundamental questions about the commoditization of AI models.
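The paper refers to this intervention as “budget forcing.” Below is a minimal, hypothetical Python sketch of the idea using Hugging Face transformers: whenever the model would stop reasoning, the loop appends “Wait” and resumes generation. The model name, the number of forced rounds, and the token handling are illustrative assumptions, not the paper’s exact configuration (the paper suppresses the end-of-thinking delimiter directly).

```python
# Hypothetical sketch of s1-style "budget forcing": extend a model's
# reasoning by appending "Wait" each time it tries to stop thinking.
# Model name and round count are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-32B-Instruct"  # assumed Qwen base model, not confirmed
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

def generate_with_wait(prompt: str, forced_rounds: int = 2,
                       tokens_per_round: int = 512) -> str:
    """Generate an answer while forcing the model to keep 'thinking'.

    After each generation pass, "Wait" is appended to the running text,
    nudging the model to re-examine its previous reasoning before it
    commits to a final answer.
    """
    text = prompt
    for _ in range(forced_rounds):
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=tokens_per_round)
        text = tokenizer.decode(output[0], skip_special_tokens=True)
        text += "\nWait"  # suppress stopping; trigger another reasoning pass
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=tokens_per_round)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate_with_wait("How many r's are in 'raspberry'? Think step by step."))
```

In this framing, the number of forced “Wait” insertions acts as a knob on test-time compute: more rounds mean more thinking tokens, which the researchers found yields slightly more accurate answers.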
OpenAI’s board of directors yesterday rejected a $97.4 billion unsolicited offer from an investor group led by Elon Musk to gain control of the AI company. In a statement, Bret Taylor, the chairman of the OpenAI board, said, "OpenAI is not for sale, and the board has unanimously rejected Mr. Musk’s latest attempt to disrupt his competition." Taylor was referring to Musk’s own AI company, xAI. On Friday, OpenAI sent a letter to Marc Toberoff, the lawyer representing Musk, saying that the offer was "not in the best interests of OpenAI’s mission,” which is to build artificial intelligence that benefits “all of humanity." Toberoff said in a statement sent to The New York Times: “This comes as no surprise, given that Altman and Board chair, Taylor, already rejected Musk’s $97 billion bid while stating they had not yet received it. But we are surprised to see the Board, which has strict fiduciary duties to carefully consider the bid in good faith on behalf of the charity, use the same kind of deflective double-talk Altman used in testifying to the Senate.” Musk also filed a lawsuit in federal court last year to block OpenAI’s restructuring plans, and this week, Robert Bonta, California’s attorney general, said that the state was scrutinizing OpenAI’s plan to shift to a for-profit structure. xAI raised $6 billion in December, saying it would use the money to build infrastructure and accelerate research and development; BlackRock, Fidelity, Sequoia Capital, and other investors participated in the funding.
French AI startup Mistral has significantly upgraded Le Chat, its web assistant interface, released a mobile app for iOS and Android, and introduced a Pro tier at $14.99 per month. The company’s flagship models, Mistral Large and the multimodal Pixtral Large, are available for commercial use through an API or via cloud platforms such as Azure AI Studio, Amazon Bedrock, and Google’s Vertex AI. It has also released several open-weight models under the Apache 2.0 license. Mistral is trying to position itself as a credible alternative to OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Microsoft’s Copilot. Mistral stated that Le Chat runs on “the fastest inference engines on the planet,” capable of generating up to 1,000 words per second. It also claims that Le Chat produces much better images than ChatGPT or Grok because it relies on Black Forest Labs’ Flux Ultra, one of the leading image-generation models.
The HR, finance, and training software company Workday announced a platform for managing AI agents, intended to help organizations manage their digital workforce, which will be available “later this year.” The agents will be coordinated through orchestration frameworks. In addition to this platform, named “Workday Agent System of Record,” the company introduced autonomous AI agents for payroll, contracts, financial auditing, and policy management, which can be deployed through the Workday Marketplace.
U.S. Vice President JD Vance delivered a keynote speech at the Paris AI Summit on Tuesday, warning global leaders and tech industry executives that "excessive regulation will cripple the rapidly growing AI industry." The speech was a rebuke of European efforts to set strict rules against AI’s risks, and it underscored a widening rift over the future of the technology. Vance defended the U.S. hands-off approach, which relies on private industry to fuel innovation, contrasting it with Europe’s strict regulations aimed at ensuring safety and accountability and with China’s expansion through state-backed AI giants. He framed AI as an economic turning point, saying that “at this moment, we face the extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine.” The U.S. was noticeably absent from an international document signed by more than 60 nations, including European countries and China, making the Trump administration an outlier in a global pledge to promote responsible AI development. The United Kingdom also declined to sign. The document pledged to “promote AI accessibility to reduce digital divides” and to “ensure AI is open, inclusive, transparent, ethical, safe, secure, and trustworthy.” It also called for “making AI sustainable for people and the planet” and for protecting “human rights, gender equality, linguistic diversity, consumer rights, and intellectual property.”